Executing Load Test Programs
- 1 Introduction
- 2 Load Test (*.class) files.
- 3 Execute Load Test Steps in the Project Navigator
- 3.1 Execute LT Input Fields
- 3.2 Starting Exec Agent Jobs
- 3.3 Real-Time Job Statistics (Exec Agent Jobs)
- 3.4 Response Time Overview Diagrams (Real-Time)
- 3.4.1 Input Fields
- 3.5 URL Response Time Diagram (Real-Time)
- 3.5.1 Input Fields
- 3.5.1.1 Info Box / Measured Values
- 3.6 Error Overview Diagrams (Real-Time)
- 3.6.1 Input Fields
- 3.7 Statistical Overview Diagrams (Real-Time)
- 3.7.1 Real-Time Comments
- 3.8 Loading the Statistics File
- 4 Starting Cluster Jobs
- 5 Load Test Jobs Menu
- 6 Scripting Load Test Jobs
- 7 Rerunning Load Tests Jobs (Job Templates)
Introduction
The execution of a Load Test is started in the Project Navigator. The icon to the right of the *.class files will have a red arrow.
The icon to the right of the Job Template (*.xml) files will have a green arrow and will directly open a Start Load Test Job, which is covered towards the end of this page.
Load Test (*.class) files.
Execute Load Test Steps in the Project Navigator
After the Project Navigator has called the load test program, you must enter the test input parameters for the test run (a single execution of the load test program is also called a "test run").
The most important parameters are the Number of Concurrent Users and Load Test Duration. For short-duration Load Tests, Apica recommends 100% URL sampling. We also recommend entering a small comment about the test run into the Annotation input field.
If evaluating for browser performance, please select any Browser Emulation and Caching Options needed.
Execute LT Input Fields
Field | Description |
---|---|
Save as template | Stores all load test input parameters additionally inside an XML template. Later, this template can be used to rerun (repeat) the same load test. |
Execute Test From | Selects the Exec Agent or the Exec Agent Cluster from which the load test will be executed. |
Apply Execution Plan | Optionally, an Execution Plan can be used to control the number of users during the load test. The dropdown list shows the Execution Plans' Titles, extracted from all formal valid Execution Plan files (*.exepl files), located in the current Project Navigator directory. Note that the titles of invalid Execution Plan files are not shown. If an Execution Plan is selected, then the following input parameters are disabled: Number of Concurrent Users, Load Test Duration, Max. Loops per User, and Startup Delay per User. |
Number of Concurrent Users | The number of simulated, concurrent users. |
Load Test Duration | The planned duration of the load test job. If this time has elapsed, all simulated users will complete their current loop (repetition of web surfing session) before the load test ends. Thus the load test often runs a little bit longer than the specified test duration. |
Max. Loops per User | This limits the number of web surfing session repetitions (loops) per simulated user. The load test stops if the limit has been reached for each simulated user. Note: this parameter can be combined with the parameter "Load Test Duration"; whichever limit is reached first will stop the load test. |
Pacing | Minimum loop duration per user. Enabling this option sets a minimum time that must elapse for all page breaks and URL calls executed within an iteration before the next iteration starts. |
Startup Delay per User | The delay time to start an additional concurrent user (startup ramp of load). Used only at the start of the load test during the creation of all concurrent users. |
Max. Network Bandwidth per User | The network bandwidth limitation per simulated user for the downlink (speed of the network connection from the webserver to the web browser) and the uplink (speed of the network connection from the web browser to the webserver). By choosing a lower value than "unlimited," this option allows simulating web users with a slow network connection. |
Request Timeout per URL | The timeout (in seconds) per single URL call. If this timeout expires, the URL call will be reported as failed (no response from the webserver). Depending on the corresponding URL's configured failure action, the simulated user will continue with the next URL in the same loop, or it will abort the current loop and then continue with the next loop. |
Max. Error-Snapshots | Limits the maximum number of error snapshots stored during load test execution. The limit can be configured either as the maximum memory used to store error snapshots (recommended; for cluster jobs the value applies over all cluster members) or as the maximum number of error snapshots per URL (not recommended for cluster jobs; the value applies per Exec Agent). |
Statistic Sampling Interval | The statistic sampling interval during the load test, in seconds (interval-based sampling). Used for time-based overall diagrams such as, for example, the measured network throughput. If you run a load test over several hours, you should increase the statistic sampling interval to 10 minutes (600 seconds) to save memory. If the load test runs for only a few minutes, you may decrease the statistic sampling interval. |
Additional Sampling Rate per Page Call | Captures the measured response time of a web page each time a simulated user calls the web page (event-based sampling). Used to display the response time diagrams in real-time and in the Analyse Load Test Details menu. For endurance tests over several hours, Apica strongly recommends setting the sampling rate for web pages between 1% and 5%. We recommend a 100% sampling rate for shorter tests. |
Additional Sampling Rate per URL Call | Captures the measured response time of a URL each time a simulated user calls the URL (event-based sampling). Used to display the response time diagrams in real-time and in the Analyse Load Test Details menu. |
In addition to capturing the URL calls' response time, further data can be captured using one of the Add options. | --- recommended: no additional data are captured |
Debug Options Choosing any debug option (other than "none") causes additional information to be written to the *.out file of the load test job. The following debug options can be configured: | none - recommended |
Additional Options Several additional options for executing the load test can be combined, separated by a blank character. The following additional options can be configured. | -multihomed -dnssrv <IP-name-server-1>[,<IP-name-server-N>] -dnsperloop |
SSL: specifies which HTTPS/SSL protocol version should be used: | All: Automatic detection of the SSL protocol version. ZebraTester prefers the TLS 1.3 or TLS 1.2 protocol, but if the Web server does not support this, TLS 1.1, TLS 1.0, or SSL v3 is used. This is the normal behavior that is implemented in many Web browser products. v3: Fixes the SSL protocol version to SSL v3. |
Browser Emulation, User-Agents and Caching | Selects the browser emulation (User-Agent) and caching options, if needed for evaluating browser performance. |
Annotation | Enter a short comment about the test run, such as purpose, current web server configuration, and so on. This annotation will be displayed on the result diagrams. |
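The interaction between Number of Concurrent Users and Startup Delay per User determines how long the startup ramp lasts. A minimal sketch of this relationship, assuming users are created strictly one after another (the numeric values are hypothetical):

```python
def ramp_up_seconds(concurrent_users: int, startup_delay_ms: int) -> float:
    """Time until all concurrent users have been started.

    Users are created one after another, separated by the startup delay,
    so the ramp lasts (users - 1) * delay.
    """
    return (concurrent_users - 1) * startup_delay_ms / 1000.0

# Example: 100 users with a 200 ms startup delay
print(ramp_up_seconds(100, 200))  # 19.8
```

This is why the startup delay only matters at the beginning of the load test: once all users have been created, the parameter has no further effect.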
Starting Exec Agent Jobs
If you have specified that a single Exec Agent executes the load test program (but not by an Exec Agent Cluster), the load test program is transmitted to the local or remote Exec Agent, and a corresponding load test job - with a job number - is created locally within the Exec Agent. The job is now in the state “configured”; that is, ready to run, but the job is not yet started.
Hint: each Exec Agent always executes load test jobs as separate background processes and can execute more than one job at the same time. The option Display Real-Time Statistic only means that the GUI opens an additional network connection to the Exec Agent, which reads the real-time data directly from the corresponding executed load test program's memory space.
Click the Start Load Test Job button to start the job.
If you have de-selected the checkbox Display Real-Time Statistic, the window will close after a few seconds; however, you can - at any time - access the real-time statistic data, or the result data, of the job by using the Jobs menu, which can be called from the Main Menu and also from the Project Navigator.
Alternatively, the load test program can also be scheduled to be executed at a predefined time. However, the corresponding Exec Agent process must be available (running) at the predefined time because the scheduling entry is stored locally inside the Exec Agent's job working directory, which the Exec Agent itself monitors. In particular, if you have started the local Exec Agent implicitly by using the ZebraTester Console, and the scheduled job should run on that local Exec Agent, you must keep the ZebraTester Console Window open so that the job will be started ¹.
¹ This restriction can be avoided by installing the Exec Agent as a Windows Service or as a Unix Daemon (see Application Reference Manual).
Real-Time Job Statistics (Exec Agent Jobs)
Real-time statistics shown in this window are updated every 5 seconds for as long as the load test job is running.
You may abort a running load test job by clicking on the Job Actions button, which includes the following actions:
- Abort Job: Aborting a job will take a few seconds because the job writes out the statistic result file (*.prxres) before it terminates.
- Suspend Job
- Increase Users
- Decrease Users
- Abort Increase/Decrease Users
- Extend Test Duration
- Reduce Test Duration
Item | Description |
---|---|
<Exec Agent Name> or <Cluster Name> | The name of the Exec Agent - or the Exec Agent Cluster - that executes the load test job. |
Job <number> | Unique job ID (unique per Exec Agent, or unique cluster job ID). |
Real-Time Comment | If real-time comments are entered during test execution, these comments are later displayed inside all time-based diagrams of the load test result detail menu. |
Job Parameter | The name of the load test program and the program arguments - test input parameter. |
Diagram | Description |
---|---|
Web Transaction Rate | The Web Transaction Rate Diagram shows the actual number of (successfully) completed URL calls per second, counted over all simulated users. By clicking on this diagram, the Response Time Overview Diagrams are shown. |
Session Failures / Ignored Errors | The Session Failures / Ignored Errors Diagram shows the actual number of non-fatal errors (yellow bars) and the number of fatal errors (red bars = failed sessions), counted over all simulated users. |
Number of Users / Waiting Users | The Number of Users / Waiting Users Diagram shows the total number of currently simulated users (red bars) and the actual number of users who are waiting for a response from the webserver (purple bars). The users waiting for a response are a subset of the currently simulated users. By clicking on this diagram, the Statistical Overview Diagrams are shown. |
More current measurement details are available by clicking on the Detailed Statistic button. In particular, an overview of the current execution steps of the simulated users is shown:
The most relevant measured values of the URLs are shown for the selected page by clicking on the page's magnifier icon.
Using this menu, you can also display and analyze error snapshots by clicking on the magnifier icon next to the failure counter. In this way, you can begin analyzing errors immediately as they occur - during the running load test.
By clicking on a URL, the corresponding URL Response Time Diagram is shown.
All of these detailed data, including all error data, are also stored inside the final result file (.prxres
), which can be accessed when the load test job has been completed.
Response Time Overview Diagrams (Real-Time)
Description: Displays, in real time during the load test, a diagram per web page showing the measured response times.
Note that only a fraction of the response times may be shown, depending on the Additional Sampling Rate per Page Call that was selected when the load test was started. For example, only every fifth response time is shown if the Additional Sampling Rate per Page Call was set to 20%.
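The effect of the sampling rate on what is displayed can be sketched as follows. This is a simplified, deterministic model for illustration only; the actual sampling algorithm inside ZebraTester is not documented here:

```python
def sample(measurements, rate_percent):
    """Keep only every (100 / rate_percent)-th measurement.

    Simplified model of event-based sampling: at a 20% sampling rate,
    every fifth measured response time is kept for display.
    """
    step = round(100 / rate_percent)
    return measurements[::step]

# Hypothetical page response times in milliseconds
times = [120, 130, 110, 140, 125, 135, 115, 145, 150, 105]
print(sample(times, 20))   # keeps every 5th value: [120, 135]
print(sample(times, 100))  # keeps all values
```

At a 100% sampling rate every measured value reaches the diagrams, which is why 100% is recommended for short tests only.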
Input Fields
| Allows selecting the period - from the current time back into the past - for which response times are shown in the diagrams. |
| Allows selecting whether the bars inside the diagrams are shown as average values or as max. values. Please note that there is only a difference between the max. values and the average values if multiple measured samples of the response time fall inside the same pixel (inside the same displayed bar). |
Diagram | Description |
---|---|
| The tables at the right side of the diagrams contain the response times for all URLs of the web page. Also, these response times are either average values or max. values, depending on the selection in the Time Bars drop-down list. However, these values are calculated since the load test was started and always "accurately" measured, which means that they do not depend on the value chosen for the "Additional Sampling Rate per Page Call." |
| You can click on a URL response time to show the corresponding URL Response Time Diagram. |
| On the left side inside the diagram, the web page's average response time is shown as red-colored text, calculated since the load test was started. But depending on the selected period, this value may not be displayed in every case. On the right side inside the diagram, the last measured value is shown. |
URL Response Time Diagram (Real-Time)
Description: Displays, in real time during the load test, the response times of a URL, as well as a summary diagram of the measured errors of the URL.
Note that only a fraction of the response times may be shown, depending on the Additional Sampling Rate per URL Call that was selected when the load test was started. For example, only every fifth response time is shown if the "Additional Sampling Rate per URL Call" was set to 20%.
Input Fields
| Allows selecting the period - from the current time back into the past - for which response times are shown inside the diagram. |
| Allows selecting if the bars inside the diagram are shown as average values or as max. values. Please note that there is only a difference between the max. values and the average values if multiple measured samples of the response time fall inside the same pixel (inside the same displayed bar). |
Info Box / Measured Values
All values in this infobox are calculated over all completed calls of the URL, measured since the load test was started. These values are always "accurately" measured, which means that they do not depend on the value chosen for the "Additional Sampling Rate per URL Call."
| The total number of passed calls for this URL. |
| The total number of failed calls for this URL. |
| The average size of the transmitted + received data per URL call. |
| The maximum response time ever measured. |
| The minimum response time ever measured. |
| The average time to open a new network connection to the webserver, measured for this URL. |
| The average time to transmit the HTTP request header + (optionally) the HTTP request content data (form data or file upload data) to the webserver, measured after the network connection was already established. |
| The average time of waiting for the first byte of the web server response (-header), measured since the request has been (completely) transmitted to the webserver. |
| The average time for receiving the HTTP response header's remaining data, measured since the first byte of the response header was received. |
| The average time for receiving the response content data, for example, HTML data or the data of a GIF image. |
| The average response time for this URL, calculated as the sum of the time components listed above. |
URL Errors / Real-Time Profile of Error Types: This diagram shows an overview of which kinds of errors occurred for the URL, and at which time, measured since the load test was started. This "basic error information" is always "accurately" measured, independently of the value chosen for the "Additional Sampling Rate per URL Call", and is captured in every case, even if no more memory is left to store full error snapshots.
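The time components listed in the info box above add up to the URL's total response time. A hedged sketch of that relationship, assuming the total is simply the sum of the measured phases (the millisecond values are hypothetical):

```python
def url_response_time_ms(connect, send, wait, header, content):
    """Total response time of a URL call as the sum of its phases:
    open connection + transmit request + wait for first byte
    + receive remaining response header + receive content data."""
    return connect + send + wait + header + content

# Hypothetical phase durations in milliseconds
print(url_response_time_ms(12, 3, 85, 2, 40))  # 142
```

Looking at the individual phases rather than only the total often reveals where time is lost, for example a long wait-for-first-byte phase points at server processing time rather than network throughput.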
Error Overview Diagrams (Real-Time)
Description: Displays, in real time during the load test, an overview of all errors that occurred.
Failure Diagrams: The first diagram shows an overview of which kinds of errors occurred, counted over all URLs and measured since the load test was started. This "basic error information" is always captured, even if no more memory is left to store full error snapshots.
The succeeding diagrams, shown per web page, provide information only about the time at which errors occurred. The tables on the right side of the diagrams show the number of errors that occurred on the URLs of the web page. You can click on an error counter to show the error detail information (error snapshots) for the corresponding URL.
First Error Snapshots: Displays a list of the first errors (from the start of the load test). By clicking on a magnifier icon, the corresponding error detail information (error snapshot) is shown.
Latest Error Snapshots: Displays a list of the latest (newest) errors. By clicking on a magnifier icon, the corresponding error detail information (error snapshot) is shown.
Input Fields
| Effects that all errors of failed URL calls are shown (non-fatal and fatal errors). |
| Effects that only fatal errors of failed URL calls are shown (session failures). |
Error Snapshot Memory: % used +: By clicking on the + (plus sign), you can increase the amount of memory available to store error snapshots. Please note: when 50% or more of the memory is already used, no additional error snapshots for non-fatal errors are captured. This means that increasing the memory may re-enable capturing of non-fatal errors.
Statistical Overview Diagrams (Real-Time)
Description: Displays statistical overview diagrams (in real time) about a load test job.
| The total number of simulated users. |
| The number of users who are waiting for a response from the webserver. |
| The number of failed sessions - which is the same as the number of fatal errors. |
| The session time for one loop per simulated user. This value is the sum of the response times of all URLs and all of the user's think times per successfully completed loop. |
| The number of (successfully) completed URL calls per second, measured over all simulated users. |
| The number of (successfully) completed loops (sessions) per minute, measured over all simulated users. |
| The time in milliseconds (per URL call) to open a new network connection to the webserver. |
| The total network traffic which is generated by this load test job, measured in megabits per second. |
Real-Time Comments
Description: supports entering comments during the load test execution.
Real-time comments are notes or tips that you can enter during the load test execution:
These comments are later displayed inside all time-based diagrams of the load test result detail menu:
You can also modify, delete, or add real-time comments before you generate the PDF report. However, retroactively entered real-time comments are not permanently stored inside the result data.
Loading the Statistics File
After the load test job has been completed, the statistic results file (*.prxres) is stored in the local or remote Exec Agent's job directory. To access this results file, you must transfer it back to the (local) Project Navigator directory from which the load test program was started.
This menu shows all files of the load test job; however, only the statistics results file is usually needed, and this is already selected. The "*.out" file contains debug information, and the "*.err" file is either empty or contains internal error messages from the load test program itself.
By clicking on the Acquire Selected Files button, all selected files are transferred (back) to the (local) Project Navigator directory.
If the checkbox Load *.prxres File on Analyze Load Test Menu is selected, the statistics results file is also loaded into the memory area of the Analyze Load Tests menu, where the statistics and diagrams of the measured data can be shown, analyzed, and compared with results of previous test runs.
Starting Cluster Jobs
If you have specified that an Exec Agent Cluster executes the load test program, the load test program is transmitted to the local cluster job controller, coordinating all cluster members (Exec Agents). The cluster job controller creates a cluster job and allocates a cluster job number. The cluster job is now in the state “configured” (ready to run, but not yet started).
The number of concurrent users will be automatically distributed across the cluster members, depending on the individual computer systems' capability - called "load factor."
If the load test program uses Input Files, you are asked for each Input File - if you wish to split the Input File content. This can be useful, for example, if the Input File contains user accounts (usernames/passwords) but the web application does not allow duplicate logins. In this case, each cluster member must use different user accounts. By clicking on the corresponding magnifier icon, you can view how the Input File data would be distributed across the cluster members. If you do not use the split functionality, each cluster member will receive an entire Input File copy.
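The effect of splitting an Input File across cluster members can be illustrated with a simple round-robin distribution. This is an assumption for illustration only; the actual distribution may also consider each member's load factor (the account values are hypothetical):

```python
def split_input_file(lines, members):
    """Distribute input file lines round-robin across cluster members,
    so that no two members receive the same user account."""
    chunks = [[] for _ in range(members)]
    for i, line in enumerate(lines):
        chunks[i % members].append(line)
    return chunks

# Hypothetical username;password records for two cluster members
accounts = ["user1;pw1", "user2;pw2", "user3;pw3", "user4;pw4", "user5;pw5"]
print(split_input_file(accounts, 2))
# [['user1;pw1', 'user3;pw3', 'user5;pw5'], ['user2;pw2', 'user4;pw4']]
```

Without splitting, every member would receive the full list, and a web application that rejects duplicate logins would report errors as soon as two members pick the same account.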
The distribution of users across the cluster members can also be modified manually; however, this is useful only if a cluster member is unavailable (marked with a light red background), in which case the cluster job cannot be started. You can then assign the unavailable cluster member's users to other cluster members and try to start the cluster job again. This redistribution may take a few seconds to complete.
Alternatively, the load test program can also be scheduled to be executed at a predefined time. However, the local Job Controller process must be available (running) at the predefined time because the scheduling entry for the cluster job is stored inside the Job Controller working directory, which the Job Controller itself monitors. If you have started the Job Controller implicitly by using the ZebraTester Console, you must keep the ZebraTester Console Window open so that the cluster job will be started ¹.
¹ This restriction can be avoided by installing the local Job Controller as a Windows Service or as a Unix Daemon.
After the cluster job has been scheduled, you can leave this menu by closing the window; you can later use the Jobs menu to cancel or modify the schedule of this job.
Real-Time Cluster Job Statistics
The real-time statistics of a cluster job show the most important measured values, similar to the values shown in the Real-Time Statistic of Exec Agent Jobs. The cluster job itself contains Exec Agent jobs that the local cluster job controller has created. By clicking on a cluster member's magnifier icon, the corresponding Exec Agent job's real-time statistics can be displayed in its own window.
If you want to abort the cluster job, you must do it at this level, as this will also abort all Exec Agent jobs. Aborting a single Exec Agent job will not interrupt the cluster job.
The same applies to the statistics result file (*.prxres), which must be accessed at this level.
Loading the Statistics File of Cluster Jobs
The statistics results file of a cluster job contains the consolidated (merged) measurements for all cluster members. The calculations for merging the results are extensive; therefore, it may take up to 60 seconds to show the result file. The individual measurements of the Exec Agents are embedded separately inside the same consolidated result file.
The consolidated statistics results file is marked with a grey/blue background and is already selected for you, depending on your ZebraTester version.
Click the Acquire Selected Files button to retrieve the load test job's associated files.
By clicking on the magnifier icon, you can access the "*.out" and "*.err" files of the corresponding Exec Agent jobs.
Usually, you would work inside the Analyze Load Tests menu with the consolidated measurement results only. However, it is also possible to expand the measurement results to access the results of each Exec Agent job:
This feature can be used to check if all cluster members have measured approximately the same response times; however, variations in a range of ± 20% or more may be normal:
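A check of whether all cluster members measured approximately the same response times can be sketched like this (the average values per member are hypothetical, and the ± 20% tolerance is taken from the text above):

```python
def within_tolerance(member_avgs, tolerance=0.20):
    """Return True if every cluster member's average response time lies
    within +/- tolerance of the overall mean across cluster members."""
    mean = sum(member_avgs) / len(member_avgs)
    return all(abs(v - mean) / mean <= tolerance for v in member_avgs)

print(within_tolerance([480, 500, 520]))  # True: spread well inside 20%
print(within_tolerance([300, 500, 700]))  # False: 40% deviation
```

Members that fall far outside the tolerance may indicate an overloaded load generator or a network bottleneck near that member, rather than a problem with the tested web server.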
Load Test Jobs Menu
All load test programs started from the Project Navigator are always executed as "batch jobs" by an (external) Exec Agent process or by an Exec Agent Cluster. This means that it is not required to wait for the completion of a load test program in the "Execute Load Test" window: you can close the "Execute Load Test" window at any time and later check the result, or the current progress, of all load test jobs by using this menu.
If a load test job has been completed, you can acquire the corresponding statistic result file (*.prxres). If a load test job is still running, you are directed to the job's temporary live-statistic window.
Input Fields
| Shows all Exec Agent Cluster jobs. |
| Allows selecting the Exec Agent for which a list of all load test jobs is displayed. |
| Deletes all jobs except running and scheduled jobs. |
| Deletes all completed jobs except the newest one. This button is only shown if at minimum two jobs have been completed. |
Columns of the job list:
Item | Description |
---|---|
Job | Each job has its own unique ID, which was automatically assigned when the job was defined. However, the ID is unique per Exec Agent. Cluster jobs have their own, separate ID (own enumeration counter). |
[Search Icon] | Allows acquiring the statistic result file (*.prxres) of the job. |
[Delete Icon] | Deletes all data (files) of a completed load test job. Consider that you must first acquire the statistic result file (*.prxres) before deleting the job. |
Date | Displays the date and time when the job has been defined or when the job has been completed, or - for scheduled jobs - the planned time when the job will be started. |
State | Displays the current job state: configured (ready to run), scheduled, running, or completed. The state "???" means that the job data are corrupted - you should delete all jobs which have the state "???" because they delay the display of all jobs in this list. |
Load Test Program & Arguments | Displays the name of the load test program and the arguments of the load test program. |
Released from GUI(IP) | Displays the TCP/IP address (remote computer) from which the job has been initiated. |
Load Test Program Arguments
Argument / Parameter | Meaning |
---|---|
| Number of concurrent users |
| Planned test duration in seconds. 0 = unlimited |
| Request timeout per URL call in seconds |
| Startup delay between creating concurrent users in milliseconds |
| Max. number of loops (repetitions of web surfing session) per user. 0 = unlimited |
| Network bandwidth limitation per concurrent user in kilobits per second for the downlink (web server to web browser) |
| Network bandwidth limitation per concurrent user in kilobits per second for the uplink (web browser to the webserver) |
| Statistical sampling interval in seconds (interval-based sampling). Used for time-based overall diagrams like for example, the measured network throughput |
| Additional sampling rate in percent for response times of web pages (event-based sampling, each time when a web page is called) |
| Additional sampling rate in percent for response times of URL calls (event-based sampling, each time when a URL is called) |
| Max. number of error snapshots per URL (per Exec Agent), 0 = unlimited |
| Max. memory in megabytes which can be used to store error snapshots, -1 = unlimited |
| Replaces the recorded value of the HTTP request header field User-Agent with a new value. The new value is applied for all executed URL calls. |
| Disables writing any data to the |
| Debug failed loops |
| Debug loops |
| Debug headers & loops |
| Debug content & loops |
| Debug cookies & loops |
| Debug keep-alive for re-used network connections & loops |
| Debug information about the SSL protocol and the SSL handshake & loops |
| Forces the Exec Agent(s) to use multiple client IP addresses |
| Using this option combined with the option |
| Use fixed SSL protocol version: |
| The timeout of SSL cache in seconds. 0 = cache disabled |
| Disable support for TLS server name indication (SNI) |
| Effects: The load test job uses its own DNS hosts file to resolve hostnames rather than the underlying operating system's hosts file. Note that you have to ZIP the hosts file together with the load test program's compiled class. To automate the ZIP, it is recommended to declare the hosts file as an external resource (without adding it to the CLASSPATH). |
| Effects that the load test job uses specific (own) DNS server(s) to resolve hostnames – rather than to use the DNS library of the underlying operating system. |
| Enable consideration of DNS TTL by using the received TTL-values from the DNS server(s). This option cannot be used in combination with the option -dnsperloop. |
| Enable DNS TTL by using a fixed TTL-value of seconds for all DNS resolves. This option cannot be used in combination with the option -dnsperloop. |
| Perform new DNS resolves for each executed loop. All resolves are stable within the same loop (no consideration of DNS TTL within a loop). This option cannot be used in combination with the options -dnsenattl or -dnsfixttl. |
| Effects that statistical data about DNS resolutions are measured and displayed in the load test result, using the load generators' own DNS stack. Note: There is no need to use this option if any other, more specific DNS option is enabled, because all (other) DNS options also implicitly cause statistical data about DNS resolutions to be measured. If you use this option without any other DNS option, the (own) DNS stack on the load generators will communicate with the operating system's default configured DNS servers - but without considering the "hosts" file. |
| Time zone (see Application Reference Manual) |
| Comment about the test-run |
Scripting Load Test Jobs
Several load test jobs can be started from the GUI at the same time. However, the GUI does not have the ability to automatically run sequences of load test jobs, synchronize load test jobs, or automatically start several jobs with a single mouse click.
To perform these kinds of activities, you must program load test job scripts written in the “natural” scripting language of your operating system (Windows: *.bat files, Unix: *.sh, *.ksh, *.csh … files). Inside these scripts, the PrxJob utility is used as the interface to the ZebraTester system. When the Windows version of ZebraTester is installed, the installation kit creates the directory ScriptExamples within the Project Navigator, and this directory contains some example scripts.
The PrxJob utility allows you to start load test jobs on the local as well as on a remote system. It also provides the capability to create cluster jobs, synchronize jobs, obtain the current state of jobs, and acquire the statistics result files of jobs. More information about the PrxJob utility can be found in the Application Reference Manual, Chapter 4.
Rerunning Load Tests Jobs (Job Templates)
Whenever a load test is started, an additional job definition template file is stored in the current Project Navigator directory (in XML format). Such a job definition template file contains all configuration data needed to rerun the same load test job.
If you click the corresponding button of a job definition template (XML) file in the Project Navigator, the load test job, including all of its input parameters, is automatically transferred to the Exec Agent or the Exec Agent Cluster and is immediately ready to run.
In the screenshot below, the job was preconfigured to run from a cluster of defined servers with a predefined set of Load Test Program Arguments.
Additionally, if you wish to make several load test jobs ready to run simultaneously (with a single mouse click), you can zip several templates into one ZIP archive and then click the corresponding button of the ZIP archive:
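As a minimal sketch, bundling several templates into one ZIP archive can be done with any standard zip tool. The Python version below creates two placeholder template files first so the example is self-contained; the file names are hypothetical, not names ZebraTester prescribes.

```python
import zipfile

# Hypothetical template file names; in practice these are the job
# definition template (XML) files from your Project Navigator directory.
templates = ["JobTemplate_Test1.xml", "JobTemplate_Test2.xml"]

# Create placeholder files so this sketch is self-contained.
for name in templates:
    with open(name, "w", encoding="utf-8") as f:
        f.write("<loadTestTemplate/>\n")

# Bundle all templates into one ZIP archive.
with zipfile.ZipFile("JobTemplates.zip", "w") as zf:
    for name in templates:
        zf.write(name)

print(zipfile.ZipFile("JobTemplates.zip").namelist())
```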
Example XML Load Test Template
<?xml version="1.0" encoding="UTF-8"?>
<loadTestTemplate>
<proxySnifferVersion>V5.5-F</proxySnifferVersion>
<loadTestProgramPath>/Applications/ZebraTester/MyTests/CL_Demo_FF_Demo.class</loadTestProgramPath>
<startFromExecAgentName></startFromExecAgentName>
<startFromClusterName>Cluster 1</startFromClusterName>
<isPureJUnitLoadTest>false</isPureJUnitLoadTest>
<executionPlanFilePath></executionPlanFilePath>
<concurrentUsers>1</concurrentUsers>
<testDuration>60</testDuration>
<loopsPerUser>0</loopsPerUser>
<pacingPerLoop>0</pacingPerLoop>
<startupDelayPerUser>200</startupDelayPerUser>
<downlinkBandwidth>0</downlinkBandwidth>
<uplinkBandwidth>0</uplinkBandwidth>
<requestTimeout>60</requestTimeout>
<maxErrorSnapshots>-20</maxErrorSnapshots>
<statisticSamplingInterval>15</statisticSamplingInterval>
<percentilePageSamplingPercent>100</percentilePageSamplingPercent>
<percentileUrlSamplingPercent>20</percentileUrlSamplingPercent>
<percentileUrlSamplingPercentAddOption>0</percentileUrlSamplingPercentAddOption>
<debugOptions></debugOptions>
<additionalOptions></additionalOptions>
<sslOptions>all</sslOptions>
<pmaTemplateFileName></pmaTemplateFileName>
<testRunAnnotation>test</testRunAnnotation>
<userAgent>Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; rv:11.0) like Gecko</userAgent>
<browserLan>en</browserLan>
<enableBrowserCache>true</enableBrowserCache>
<checkNewVersion>true</checkNewVersion>
<browserCacheOptions></browserCacheOptions>
<userInputFields/>
</loadTestTemplate>
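Because the template is plain XML, it can also be inspected or adjusted programmatically before a rerun. The sketch below parses a shortened copy of the template shown above and raises the number of concurrent users; treating the template this way is an assumption for illustration, not an official ZebraTester API.

```python
import xml.etree.ElementTree as ET

# Shortened copy of the job definition template shown above.
template = """<?xml version="1.0" encoding="UTF-8"?>
<loadTestTemplate>
  <concurrentUsers>1</concurrentUsers>
  <testDuration>60</testDuration>
</loadTestTemplate>"""

root = ET.fromstring(template)

# Read the current values.
users = int(root.findtext("concurrentUsers"))
duration = int(root.findtext("testDuration"))

# Raise the load to 50 concurrent users for the rerun.
root.find("concurrentUsers").text = "50"

print(users, duration)                    # original values
print(root.findtext("concurrentUsers"))   # updated value
```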
XML Load Test Template Attributes
Attribute Name | Description |
---|---|
loadTestProgramPath | Absolute file path to the compiled load test program (*.class) or load test program ZIP archive |
startFromExecAgentName | Name of the Exec Agent on which the load test is started (empty value for a cluster job) |
startFromClusterName | Name of the Exec Agent Cluster on which the load test is started (empty value if not a cluster job) |
concurrentUsers | Number of concurrent users |
testDuration | Planned test duration in seconds (0 = unlimited) |
loopsPerUser | Number of planned loops per user (0 = unlimited) |
startupDelayPerUser | Startup delay per user in milliseconds |
downlinkBandwidth | Downlink bandwidth per user in kilobits per second (0 = unlimited) |
uplinkBandwidth | Uplink bandwidth per user in kilobits per second (0 = unlimited) |
requestTimeout | Request timeout per URL call in seconds |
maxErrorSnapshots | Limits the number of error snapshots taken during load test execution (0 = unlimited). Negative value: maximum memory in megabytes used to store all error snapshots, counted over all Exec Agents (recommended). Positive value: maximum number of error snapshots per URL, per Exec Agent (not recommended). |
statisticSamplingInterval | Statistic sampling interval in seconds |
percentilePageSamplingPercent | Additional sampling rate per Web page in percent (0..100) |
percentileUrlSamplingPercent | Additional sampling rate per URL call in percent (0..100) |
percentileUrlSamplingPercentAddOption | Additional URL sampling options per executed URL call (numeric value):<br>0: no options<br>1: all URL performance details (network connect time, request transmit time, …)<br>2: request header<br>3: request content (form data)<br>4: request header & request content<br>5: response header<br>6: response header & response content<br>7: all, but without response content<br>8: all (full URL snapshot) |
debugOptions | Debug options (string value):<br>"-dl": debug loops (including var handler)<br>"-dh": debug headers & loops<br>"-dc": debug content & loops<br>"-dC": debug cookies & loops<br>"-dK": debug keep-alive & loops<br>"-dssl": debug SSL handshake & loops |
additionalOptions | Additional options (string) |
sslOptions | SSL/HTTPS options (string value):<br>"all": automatic SSL protocol detection (TLS preferred)<br>"tls": SSL protocol fixed to TLS<br>"v3": SSL protocol fixed to v3<br>"v2": SSL protocol fixed to v2 |
testRunAnnotation | Annotation for this test run (string) |
userInputFields | Label, variable name, and default value of User Input Fields |