

To discover the maximum capacity of the web server or web application, you must run the same load test program several times, each time with a different number of users.

We recommend increasing the load logarithmically in each successive test run in order to get a good overview; for example, successive test runs with 1, 2, 5, 10, 20, 50, 100, 200, 500, 1000, … users.
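The 1-2-5 series above can be generated programmatically when scripting successive test runs. The following sketch is illustrative; the helper name and upper bound are not part of ZebraTester itself:

```python
def load_steps(max_users):
    """Generate a 1-2-5 logarithmic series of user counts,
    e.g. 1, 2, 5, 10, 20, 50, ... up to max_users."""
    steps = []
    magnitude = 1
    while True:
        for factor in (1, 2, 5):
            users = factor * magnitude
            if users > max_users:
                return steps
            steps.append(users)
        magnitude *= 10

print(load_steps(1000))
# [1, 2, 5, 10, 20, 50, 100, 200, 500, 1000]
```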

The results of these test runs can be combined into load curves, which provide an excellent overview of the response time behavior, throughput, and stability of the web server or web application, and how they vary with the number of users.

With small loads, the response times are constant and independent of the number of users. Once the load is increased to the point where the maximum throughput of the server is reached (measured in URL calls per second, the web transaction rate, also called hits per second), the response times rise at least linearly with the number of users.
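The behavior described above can be sketched with a simple saturation model (a rough approximation for illustration, not a ZebraTester formula): below the server's throughput ceiling the response time stays near its unloaded value; above it, Little's law (N = X · R) forces the response time to grow linearly with the user count.

```python
def estimated_response_time(users, base_rt, max_throughput):
    """Rough saturation model.
    base_rt        -- unloaded response time in seconds
    max_throughput -- server ceiling in URL calls (hits) per second
    Below saturation the response time is constant; above it,
    Little's law makes it grow linearly with the number of users."""
    saturation_users = max_throughput * base_rt
    if users <= saturation_users:
        return base_rt
    return users / max_throughput

# Example: 0.2 s base response time, 100 hits/s ceiling.
# Saturation sets in at 20 concurrent users.
for n in (10, 20, 50, 100):
    print(n, estimated_response_time(n, 0.2, 100.0))
```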

Web pages and/or URL calls whose response times rise more steeply than others under load are potential tuning candidates; that is, the reason for the sudden, strong rise in their response times should be investigated.

Please note that not all web servers or web applications show linear response time behavior when overloaded. A web server may collapse in this situation; in this case, the throughput falls after a specific load point has been exceeded.

To produce the load curves, select, from the Analyze Load Tests menu, several test runs that were made with the same load test program but with a different number of users.

  • Choose the diagram type Load Curve.

  • Click the Compare button.

Overall Load Curves

In the upper right corner of the window title, you can generate a PDF report and also export the performance data.

You can click the red rhombuses within the diagrams to display the detailed results of the corresponding test run.

Ten different diagrams are displayed:

  • Average Session Time per User - per Loop: cumulative time for a loop per user; that is, the response time behavior of the server.

  • Web Transaction Rate - Hits per Second: number of successfully executed URL calls per second (hits per second); that is, the server throughput.

  • Session Failure Rate: percentage of failed loops; that is, the server stability.

  • Average TCP Socket Connect Time: average time per URL call to open a network connection; that is, the network performance, in combination with the TCP/IP stack performance of the server.

  • Users Waiting for Response: average number of users that are waiting for a response from the server.

  • URL Error Rate: percentage of failed URL calls.

  • HTTP Keep-Alive Efficiency: percentage of reused network connections.

  • SSL Session Cache Efficiency: percentage of abbreviated SSL handshakes.

  • Completed Loops per Minute: number of successfully completed loops per minute (sessions per minute).

  • Overall Network Throughput: total network throughput; that is, the network load.
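As a plausibility check on the diagrams, the average number of users waiting for a response can be related to throughput and response time via Little's law (N = X · R). The sketch below assumes the rate and time values are read off the corresponding load curves; the function name is illustrative:

```python
def users_waiting(hits_per_second, avg_response_time_s):
    # Little's law: the average number of in-flight requests equals
    # the arrival rate times the average time spent waiting.
    return hits_per_second * avg_response_time_s

# 50 hits/s at an average response time of 0.4 s
# corresponds to about 20 users waiting at any moment.
print(users_waiting(50.0, 0.4))
```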


Response Time per Page

This menu option displays the load curves of all web pages (average response times and the 90th percentile of the response times). Again, you can click the red rhombuses within the diagrams to display the detailed results of the corresponding test run.

Response Time per URL

This menu option displays the load curves of all URL calls (average response times and the 90th percentile of the response times). Again, you can click the red rhombuses within the diagrams to display the detailed results of the corresponding test run.

Session Failures

This menu option displays a summary of all errors that occurred in the test runs. Clicking an error counter shows the detailed results of the corresponding test run.
