Scripted Check

Introduction

Apica's "Scripted Checks" are ASM checks written in one of several scripting languages. Instead of needing a custom scripting tool or a proprietary scripting format, developers and monitoring teams can use familiar languages to create custom monitoring scripts and metrics for long-term insight into their applications.

Video versions of this guide are available on the Apica Systems YouTube account here: https://www.youtube.com/playlist?list=PL7P4sd6wT60B5JAxU3l3Rzjhf7v01lqQz

Why Use a Scripted Check?

If you are in a DevOps shop, you can use the DevOps toolchain you already have in place. If you have the resources to write and edit Java, Python, or JavaScript, you can create a long-term global script for ASM without needing a unique scripting tool or proxy setup. If you know which URLs you want to monitor, this makes you less reliant on proprietary scripting solutions and lets these scripting languages do more than test application performance from a single QA/developer machine.

Scripted Check Overview

The term "Scripted Check" refers to checks that run scripts which customers create on their own and upload to the ASM Platform for monitoring. Currently, Java, Python, JavaScript, and AWS Lambda scripts are supported within ASM. Scripts may be stored on an HTTP server or in a GitHub repository. When the script is downloaded for execution, you must specify either the HTTP URL or the GitHub repo URL.

Note: this page will cover only the GitHub repository method of script uploading and storage.

  1. Set up GitHub to store the scripts.

  2. Script the check (in Java, Python, or JavaScript)

  3. Upload the Script into GitHub.

  4. Create the ASM Check, assigning it to any one of Apica's global agents on the Apica Monitoring Platform.

  5. Collect, compare and analyze the ASM Check Results OR send the results to integrated systems that use ASM as a data source.

Creating an Example Scripted Check

The following guide utilizes Python code, but the workflow can apply to Java checks, JavaScript checks, or any other Scripted check type.

1. Scripting the Check

Next is a simple example of writing your script and running it in ASM.

We will be coding a straightforward Python check for its use in ASM. This Python check will call a URL that we specify, and it will return the response that we received from this URL.

Step

Screenshot


Import Libraries (as needed)

import requests

import sys

import json

import time

After Python is up, import the requests library (used for making any URL call), plus the sys, json, and time libraries, into the virtual environment.

Set the URL Request

Set the URL call to be an argument.

Make a GET request against that URL.

If we don't provide an argument, the script needs a default URL to call. Catch the exception as 'e' and fall back to https://google.com.

Our script will call either https://google.com or a URL that we provide; here it returned a 200 status code.

try:
    url = sys.argv[1]
except Exception as e:
    url = 'https://google.com'

response = requests.get(url)
print(response.status_code)

Add JSON format

What JSON format does Apica’s back-end system expect? Apica’s back-end is based on MongoDB.

MongoDB allows an expandable result format: you can upload almost anything to it, and it will become a part of the result.

Next, capture the start and end times that we need. These will be the start and end times of your check in ASM (they will show up in the check Result view).

  • Set start_time = time.time().

  • Set end_time = time.time().

Set a message. Our message is "URL call returned status code: " with the returned status code appended as a string. The "value" that you see in the JSON format will be the value of the result.

This is the main value that you will see. Usually it's the duration, but it can be anything; since our message refers to the status code, set it to response.status_code.

After running this, we have our JSON output, which is by itself a valid result.

try:
    url = sys.argv[1]
except Exception as e:
    url = 'https://google.com'

start_time = time.time()
response = requests.get(url)
end_time = time.time()

json_return = {
    "returncode": 0,
    "start_time": start_time,
    "end_time": end_time,
    "message": "URL call returned status code: " + str(response.status_code),
    "unit": "ms",
    "value": response.status_code,
}
print(json.dumps(json_return))

Expanding the Returned values

Let's expand this a little bit; we have an expandable JSON format, so let's give ourselves more content and data.

How many headers do we have here?

What is the length of the returned content?

Add these lines below the "value" field to return the response header count and the size of the content.

"header_count": len(response.headers),

"content_size": len(response.content)
Although simple, the above is a perfectly valid example of a Python Scripted Check. It uses Python standard libraries and the 'requests' library, included in the Apica Scripted Check Private Agent installation.
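Assembled end to end, the snippets above make up the complete check script. The sketch below wraps the logic in a run_check function for readability (that wrapper is our own addition; the guide's original script is a flat sequence of statements):

```python
import json
import sys
import time

import requests  # included in the Apica Scripted Check Private Agent installation


def run_check(url):
    """Call the URL and build the expandable JSON result that ASM expects."""
    start_time = time.time()
    response = requests.get(url)
    end_time = time.time()
    return {
        "returncode": 0,
        "start_time": start_time,
        "end_time": end_time,
        "message": "URL call returned status code: " + str(response.status_code),
        "unit": "ms",
        "value": response.status_code,
        "header_count": len(response.headers),
        "content_size": len(response.content),
    }


def main(argv):
    # Fall back to the default URL when no argument is given.
    url = argv[1] if len(argv) > 1 else "https://google.com"
    print(json.dumps(run_check(url)))
```

In your actual main.py, finish the file with main(sys.argv) so the agent's invocation prints the JSON result.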

Apica's Scripted Checks are very flexible; if your script requires additional Python libraries, you may simply add those libraries to your Scripted Check Private Agent.

Advanced JSON

Some additional points to the previous steps.


About Adding JSON Return Values

When we added values to the JSON, we could have included any sort of content we wanted. In the previous case, we added these:

"header_count": len(response.headers),

"content_size": len(response.content)

 

Adding More Values

A very powerful concept that Apica supports with Scripted Checks: Add more fields to add more values.

Let’s capture the headers coming out of our response by creating another field and calling it 'headers.' This is going to be an actual inner JSON object that contains our headers.

Add "headers": dict(response.headers) and rerun the check. The result shows that we have our headers, as JSON, inside this field.

Anything that JSON supports is supported in this result format. So you can add lists, inner dictionaries, null values, integers, booleans, etc. to this JSON.
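To illustrate, every JSON type below is legal in the result format (the field names here are invented for the example, not part of any ASM schema):

```python
import json

# Extra fields mixing every JSON type; the names are hypothetical examples.
extra_fields = {
    "headers": {"Content-Type": "text/html"},                    # inner dictionary
    "redirect_chain": ["http://a.example", "http://b.example"],  # list
    "error": None,                                               # null
    "from_cache": False,                                         # boolean
    "retry_count": 0,                                            # integer
}
print(json.dumps(extra_fields))
```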

Test your check locally before uploading to a repository which is linked to ASM. If you have a private agent with the necessary software installed and are planning to run the script on that agent, it is possible to test the script locally on the agent before uploading to your repository. See Scripted Check | Testing a Script on the Executing Agent Itself for more information.
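When testing locally, one quick sanity check is to compare the script's printed output against the fields used by this guide's example. The checklist below is informal, drawn only from the script above, not an official ASM schema:

```python
import json

# Fields used by the example script in this guide (not an official schema).
REQUIRED_FIELDS = {"returncode", "start_time", "end_time", "message", "unit", "value"}


def problems_in(output):
    """Return a list of problems found in a script's printed JSON output."""
    try:
        result = json.loads(output)
    except ValueError:
        return ["output is not valid JSON"]
    problems = []
    missing = REQUIRED_FIELDS - result.keys()
    if missing:
        problems.append("missing fields: " + ", ".join(sorted(missing)))
    if not isinstance(result.get("returncode"), int):
        problems.append("returncode should be an integer")
    return problems


print(problems_in('{"returncode": 0}'))
# -> ['missing fields: end_time, message, start_time, unit, value']
```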

2. Uploading the Check to a Repository

When you have finished writing the check and testing locally, upload the check to a repository which has been linked to ASM. See Manage Repository Profiles for instructions on linking a repository for use within ASM. A repository can be added directly via the check creation wizard, when editing the check via the Edit Check page, or within the “Manage > Repository Profiles” page on the top ASM navigation bar.

3. Uploading and Running the Check in ASM

When the script is complete and uploaded to your repository profile, you are ready to create a Scripted check in ASM in order to utilize the script.


Open ASM

Navigate to New Check+

Add Script via Run Python.

The Run Python Scripted Check type icon should be displayed. If you don't see it, you may need to have it unlocked; please ask your sales team for access, because this is a more advanced check that is not available to customers by default.

Creating a Run Python Check, Step 1

Enter the name "New Test Check," add any description and relevant tags, and then click Next.

Run Python Step 2

Configure this check

  • Resource URL/Github URL

  • Resource Auth Type

  • Resource Auth

  • Resource Path

  • Secondary Resource

  • Script Runner 

  • Script Arguments

  • Location

     

Resource URL/Github URL: This answers the question, "Where do we find your script?" This could be an HTTP download link, or it can be to your GitHub repository. For this example, go to your repository and copy+paste the URL here, ending with the branch (master/main). Ours is main.

Enter the URL that this script resides at. In this example, it resides in a GitHub Repo at https://github.com/[username]/NewTestRepository/main

Resource Auth Type: The type of resource authorization that will be needed: GitHub or HTTP. This example uses GitHub, but if your file is on an HTTP server, you could use HTTP as the type.

Resource Auth: Resource authorization is required. The authorization header allows you to download resources.

  • It's a basic authorization header when your resource authorization type is HTTP.

    • If you have an HTTP server with no protection, you may do it that way, but Apica does not recommend it because it's not secure.

  • If your auth type is GitHub, use the form <USERNAME>:<TOKEN>.

    • Remember, the token is the Personal Access Token that we created back in the first step [it can also be empty if your repository is public].

    • Example if your username is foobar: foobar:ghp_JlvGv7PGTrAzI2LWVIQZDhRthYBBQI1TGl0J

To set the Resource Auth, remember that it is a hidden field, so you won't be able to see anything you type here. Apica recommends assembling the value (your username without the email domain, then a colon, then the Personal Access Token) in another location where you can see it, and pasting it in.

For example: if your GitHub username is foobar@gmail.com, your username will be 'foobar,' without @gmail.com.

Then append a colon ':'.

Finally, add the Personal Access Token, and your resource authorization looks like this and is ready to copy into that field:

foobar:ghp_JlvGv7PGTrAzI2LWVIQZDhRthYBBQI1TGl0J
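Because the field is hidden, some teams script the assembly rather than type it blind. A two-line sketch using the made-up example values from the text above:

```python
github_user = "foobar"  # GitHub username without the email domain
# Example Personal Access Token from the text above (not a real credential)
personal_access_token = "ghp_JlvGv7PGTrAzI2LWVIQZDhRthYBBQI1TGl0J"

# Resource Auth is simply <USERNAME>:<TOKEN>
resource_auth = github_user + ":" + personal_access_token
print(resource_auth)  # -> foobar:ghp_JlvGv7PGTrAzI2LWVIQZDhRthYBBQI1TGl0J
```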

Resource Path: This is the path inside your repository to the script you want to run. Our example script is at the base level of the repository, so enter main.py

Secondary Resource: If your script requires any sort of additional files, you can use this secondary resource to download another file. However, you can also start your script off by downloading the file directly: That way, you can use any sort of security you want to protect it. For example, you could have a secondary resource, like a certificate protected by OAuth: Your script could go through the whole OAuth process and then use the local file.

In this example, the secondary resource will be blank because it is unnecessary.

 

It is possible to reference subfolders from a base directory using the “Secondary Resource” field. For instance, if your use case requires a “/python/main.py” file and main.py depends on a module defined in /python/modules, you can specify /python, and the check runner will recognize the module because it is able to “search” the /python folder for secondary resources.

For example, if “local_module_sample.py” depends on a subfolder in /python, you can specify the project like so:

Script Runner: Python is pre-selected (as the only choice).

Script Arguments: These are passed to the script as if entered on the command line. Enter http://example.com so that we pass this argument to our script.

Step 3 Interval, Thresholds & Monitor Groups

In this example, we will be creating a manual check.

Select an interval, if needed, and check the groups you want this check to be a part of.

Click Next.

Confirm Your Check

A Confirmation Page will be displayed for you to either go Back to edit it or Create to continue.

If you are satisfied, click Create to create the check.

Check Created

  • Uncheck Enable Failover (which is checked by default) because we don't want to have that enabled right now, as this is just for demo purposes.

  • Set the max attempts to 1 because we want the check to fail quickly for the test.

  • Click Save.

Apica generally recommends these settings for testing because the default behavior can take too long. If Max Attempts remains at three and the Attempt Pause for each attempt is 30 seconds, your test check could wait up to 90 seconds if it is failing. Those delays don't help when you are just trying to debug something; it's better to know from the beginning that your check failed.

Click the Check Details button in the upper right as we're ready to run our check.

Check Details Page

The Check Details page has a section called "Status Last 24 Hours," and beneath that will be a "Run Check" icon. Click to run manually.

Check Results

In this example, we set the “Last Value” to the status code by assigning it to the variable “value” in the script.


Drill down

Drilling into these results, we see the Result value (ms) is 200: even though the typical value for a result is the number of milliseconds it took to respond, we specified in our JSON that the value would be the response status code, so 200 is displayed in its place. The number of Attempts is shown as 1, and beneath the result code is the JSON that we specified:

json_return = {
    "returncode": 0,
    "start_time": start_time,
    "end_time": end_time,
    "message": "URL call returned status code: " + str(response.status_code),
    "unit": "ms",
    "value": response.status_code,
    "header_count": len(response.headers),
    "content_size": len(response.content),
    "headers": dict(response.headers)
}

Messaging via JSON

In the Returned Value Table View, note that the message says "URL call returned status code: 200." This is the message that we sent inside our JSON; whatever you set as the message will be placed here.

So you can record any data you like, and it will show up here.

Any metrics data that you want to record, you can keep for any data mining.

Results are stored for 13 months, so you'll have this data over a long time. It's a powerful tool for building customized results and even retrieving them in your own front end.

In the next section, we will review how to retrieve your check information through the API.

4. Interpreting the Check Results in ASM

After creating our new check, using a Python script that we uploaded into GitHub, we know that the script presents the HTTP status code of the URL called as the value of the result in ASM. Next, we will use the ASM API to get information about this check.

 


Open ASM

Navigate to Tools, API

Select a check using the drop-down box

Select the Target check for the API

We've selected the Test Demo check. Beneath that check selection are some example API calls to help you get started quickly.

We've copied the Last Result call and pasted it into Postman to run it.


Postman Results of Standard API Check Endpoint

Here, via API, is the last value of your check run, 200.

200 is the last status code of the URL. This is nice, but it is just a raw number without context, and no JSON is returned. It could be useful for a small script that pulls the last result of your check and tests it for something.

A better API endpoint is the Checks Generic Check ID Results API endpoint.

Apica API for Generic Results

This API endpoint looks up the results for checks that present a result type of generic. 'Generic' checks are those that use the expandable JSON result format we saw earlier.

Generic type checks: Run Python, Run JavaScript, Run Java, and (when released) Run Azure Cloud, Run Lambda, etc.

Postman Results of Generic API Check Endpoint

In Postman, using this API endpoint:

https://api-asm1.apica.io/dev/Checks/generic/43454/results?

auth_ticket=18FFE***-****-****-****-****0DCO

Instead of the earlier (for comparison):

https://api-asm1.apica.io/dev/Checks/49454/lastvalue?auth_ticket=18FFE***-****-****-****-****0DCO

The API documentation for Generic Check shows these capabilities:

  • Set a filter with a range

    • Return the most recent results

    • Return results that occurred between two defined millisecond values, answering, for example, "What results came in between 1.2 and 2.3 seconds?"

  • Define a period to query (between two UTC stamps)

  • Return specific result IDs.

This is a POST endpoint:

Note the JSON results returned above; you may need to use these in another API call to look up even more information. In this example, we're just going to use the most recent results because that is the simplest and easiest to show.

The Result Object

Note the headers that were returned, and (not shown in the screen capture) the content size, the header count, etc. All of the information you recorded in your script comes through the API.

What you choose to do next with these metrics is all up to your needs.

  • You could create a script that scrapes this URL every once in a while, looks up the last hour of results, and parses the JSON for the data that you need.

  • You could even create another check that would read this information and then crunch the data to present other results, e.g., the average size of the headers or content length.

  • There is much more, only limited by your use cases.
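As a sketch of the first idea, a few lines of Python can aggregate a custom field out of the generic results JSON. The sample records below are invented; the real payload mirrors whatever your script uploaded:

```python
# Sample result objects shaped like the custom JSON our script uploads.
sample_results = [
    {"value": 200, "header_count": 12, "content_size": 15000},
    {"value": 200, "header_count": 10, "content_size": 17000},
    {"value": 500, "header_count": 8},  # a result without the custom field
]


def average_content_size(results):
    """Average the custom 'content_size' field over results that carry it."""
    sizes = [r["content_size"] for r in results if "content_size" in r]
    return sum(sizes) / len(sizes) if sizes else 0.0


print(average_content_size(sample_results))  # -> 16000.0
```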

Review:

  1. We've scripted our check (in this example, Python).

  2. We've uploaded a script to GitHub.

  3. We’ve created our ASM check using a script in a GitHub repository.

  4. We've run our check in ASM and viewed the results.

  5. We've pulled the results via the API, including the custom JSON.

Appendix

Adding a Custom Python or NodeJS Module to ApicaNet for use with Scripted Checks

Adding a custom Python or Node.js module to your private Browser agent is very simple and should take less than five minutes. This guide assumes you have administrator access to your agent or have an operations team that will perform these steps for you.

  1. Determine the modules you need to install and log in to the private Browser agent.

  2. The worker runs apicanet in a chroot shell. This means you cannot simply run the package-manager commands directly, but must first enter a chroot shell. To do this, run the following commands:

     cd /opt/asm-browser-agent
     ./chroot_shell.sh

  3. You should now be in a chroot shell. From here, you may interact with pip3 and npm (package managers for python 3.5 and nodejs).

  4. Install the necessary packages by running the following commands (substituting the modules you need):

     pip3 install <module-name>
     npm install <module-name>

  5. Your packages are now installed and ready to use. If you wish, you can even test your script by opening a new shell instance and copying your script to /opt/asm-browser-agent/embedded/. You may then run your script in the chroot environment by using “node“ or “python3”. The script you placed in /opt/asm-browser-agent/embedded should be in the root folder of the chroot environment.

Testing a Script on the Executing Agent Itself

If you are running a script from a private agent, it is possible to run it locally to ensure that all packages are installed in the correct location and that no syntax errors, etc., are present. This is an excellent step when troubleshooting Python checks that are not running correctly.

  1. Copy the script into /opt/asm-browser-agent/embedded on the agent itself.

  2. cd into /opt/asm-browser-agent.

  3. Run ./chroot_shell.sh (you can see this shell script if you run ls).

  4. The script you copied into /opt/asm-browser-agent/embedded should be in the root folder. Run ls to verify.

  5. You can run the script from the chroot shell using “node” or “python3”. Use the output to verify that the check is working without issue.

Can't find what you're looking for? Send an E-mail to support@apica.io