In general, performance testing is a set of methods, tools, and metrics whose goal is to understand how our service performs under load. Managers come to the performance testers saying: guys, we developed a super duper fast service and want to know how it actually behaves under load. In other words, if terabyte traffic hits us, will we be able to keep responding within milliseconds, microseconds, nanoseconds, etc.?
That generally translates to a set of metrics more or less like these:
– Latency, or response time, as a function of the number of requests (transactions) per second the service receives over a time interval
– Availability, or the percentage of positive/negative responses the service generates, as a function of the number of requests (transactions) per second over a time interval
Generally, these metrics should be generated separately for each type of load: for set and get requests, for example, or for a mix of requests in certain proportions.
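For illustration, here is a minimal sketch of collecting these two metrics for a single GET endpoint, using the same commons-httpclient 3.1 jar as the sample at the end of this post. The URL and the request count are placeholders:

package example;

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;

public class LatencyAvailabilitySample {
    public static void main(String[] args) throws Exception {
        String url = "http://localhost:8080/get"; // placeholder endpoint
        int requests = 100;                       // placeholder request count
        HttpClient client = new HttpClient();
        long totalMillis = 0;
        int ok = 0;
        for (int i = 0; i < requests; i++) {
            GetMethod get = new GetMethod(url);
            long before = System.currentTimeMillis();
            try {
                int status = client.executeMethod(get);
                if (status >= 200 && status < 300) ok++;
            } catch (Exception e) {
                // a failed request counts against availability
            } finally {
                totalMillis += System.currentTimeMillis() - before;
                get.releaseConnection();
            }
        }
        System.out.println("Average latency: " + (double) totalMillis / requests + " ms");
        System.out.println("Availability: " + 100.0 * ok / requests + " %");
    }
}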
These are great metrics, no doubt about them; however, that is not all. Load can be absolutely different.
And now we are talking about fast and slow load. Slow clients create many more problems for a service than fast clients do.
It takes time to open the connection, more time to transfer the request data, and much more time to get the response back. It can literally take an hour 🙂 🙂 🙂 and during this hour resources are allocated and cannot be released. Resources, both application resources (the service itself) and system core resources (memory, number of connections, etc.), are held much longer, so they are exhausted much more quickly. Compare that to a fast client that does its job quickly and closes the connection (or the connection eventually times out). A service can simply stop processing requests generated by slow clients, or literally die while processing them.
Compare, for example, 200 fast requests per second, each taking milliseconds to execute, respond, and disconnect, with 200 slow requests per second, each holding system resources for 10 seconds.
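A quick back-of-the-envelope calculation (Little's Law: average concurrency = arrival rate × time each request is held) shows the difference; the 5 ms figure for a fast request is an assumption for illustration:

public class ConcurrencyEstimate {
    public static void main(String[] args) {
        // Little's Law: average concurrency = arrival rate (req/s) * holding time (s)
        double fast = 200 * 0.005; // 200 req/s at ~5 ms each -> ~1 connection in flight
        double slow = 200 * 10.0;  // 200 req/s at 10 s each  -> ~2000 connections in flight
        System.out.println("Fast clients: ~" + fast + " concurrent connections");
        System.out.println("Slow clients: ~" + slow + " concurrent connections");
    }
}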
So, ideally, performance tests should be executed using both fast and slow clients. I would add the following metrics to the original list above (a minimal collection sketch follows the list):
– Memory usage as a function of the number of connections per second
– CPU utilization as a function of the number of connections per second
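How you collect the memory and CPU numbers depends on the service. If it happens to run on a JVM, here is a minimal sampling sketch using the standard JMX beans; in practice you would attach to the service process remotely or use OS-level tools, and correlate the samples with the connection rate:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.OperatingSystemMXBean;

public class ResourceSampler {
    public static void main(String[] args) throws Exception {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        while (true) {
            long usedHeapKb = memory.getHeapMemoryUsage().getUsed() / 1024;
            double load = os.getSystemLoadAverage(); // -1.0 if unavailable on this platform
            System.out.println("heap used: " + usedHeapKb + " KB, system load: " + load);
            Thread.sleep(1000); // one sample per second; plot against connections/sec
        }
    }
}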
You can try running these tests with different degrees of slowness and get many more metrics.
You can always time out or throttle slow connections to defend your service, but that is outside the scope of this post.
The question left is how to simulate a performance test using slow clients. Well, you can always buy 10,000 phone modems with speeds from 2400 to 9600 bits per second on eBay, subscribe to tons of phone lines, and the job is done. It would actually be a great solution: it would help utilize old modems, lift the sales of PC-recycling sellers on eBay, etc. However, there is another solution, using software.
Apache JMeter has some Slow*.java classes providing exactly this functionality. They were developed to simulate modems, which is exactly what we need:
SlowHttpClientSocketFactory.java
SlowSocket.java
This functionality is demonstrated in the sample class below, which shows the usage of slow connections (a slow call and a normal call). You just need the following jars from Apache JMeter: ApacheJMeter_core.jar, ApacheJMeter_http.jar, commons-codec-1.6.jar, commons-httpclient-3.1.jar, commons-logging-1.1.1.jar.
An Ant script to execute it and the overall zipped solution are provided here.
Enjoy using it
Java code:
package example;

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;
import org.apache.commons.httpclient.protocol.Protocol;
import org.apache.jmeter.protocol.http.util.SlowHttpClientSocketFactory;

import java.io.IOException;

public class SlowConnectionSample {

    /*
     * http://jmeter.512774.n5.nabble.com/Is-jmeter-can-be-used-to-simulate-network-bandwidth-td533487.html
     * To emulate a bandwidth of, say 100 kbps, define a CPS of (100 * 1024) / 8,
     * i.e. 100 kb divided by 8, so that would be httpclient.socket.http.cps=12800.
     *
     * @param kbpsRate rate in *kilobits per second*
     */
    private static int calculateBandwidth(int kbpsRate) throws Exception {
        if (kbpsRate <= 0) throw new IllegalArgumentException("Rate (kbps) <= 0");
        return (kbpsRate * 1024) / 8;
    }

    public static void main(String[] args) throws Exception {
        /* Only the first call goes through the slow connection factory */
        int cps = SlowConnectionSample.calculateBandwidth(2);
        for (int i = 0; i < 2; i++) {
            if (0 == i) {
                Protocol.registerProtocol("http",
                        new Protocol("http", new SlowHttpClientSocketFactory(cps), 80));
            }
            HttpClient httpClient = new HttpClient();
            GetMethod httpGet = new GetMethod("http://www.google.com");
            try {
                long timerBefore = System.currentTimeMillis();
                httpClient.executeMethod(httpGet);
                long timerAfter = System.currentTimeMillis();
                System.out.println("Call #" + i
                        + "\tResponse: " + httpGet.getStatusLine()
                        + "\tLatency: " + (double) (timerAfter - timerBefore) / 1000 + " sec.");
            } catch (IOException e) {
                System.out.println("Error: " + e.getMessage());
            } finally {
                httpGet.releaseConnection();
            }
            if (0 == i) {
                Protocol.unregisterProtocol("http");
            }
        }
    }
}
Output:
d:\etc\SlowConnection>ant -f build.xml
Buildfile: build.xml
....
run:
     [java] Call #0   Response: HTTP/1.1 200 OK   Latency: 8.406 sec.
     [java] Call #1   Response: HTTP/1.1 200 OK   Latency: 0.048 sec.

BUILD SUCCESSFUL
Total time: 1 minute 5 seconds
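To run these tests with different slowness, as suggested earlier, just register the factory with different CPS values. Here is a minimal sketch of such a sweep; the kbps rates and the target URL are example values, and the CPS formula is the same one used in calculateBandwidth above:

package example;

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.methods.GetMethod;
import org.apache.commons.httpclient.protocol.Protocol;
import org.apache.jmeter.protocol.http.util.SlowHttpClientSocketFactory;

public class SlownessSweep {
    public static void main(String[] args) throws Exception {
        int[] kbpsRates = { 2, 9, 56 }; // example rates: 2400/9600-baud era modems up to 56k
        for (int kbps : kbpsRates) {
            int cps = (kbps * 1024) / 8; // same kbps-to-CPS formula as calculateBandwidth()
            Protocol.registerProtocol("http",
                    new Protocol("http", new SlowHttpClientSocketFactory(cps), 80));
            HttpClient httpClient = new HttpClient();
            GetMethod httpGet = new GetMethod("http://www.google.com");
            long before = System.currentTimeMillis();
            try {
                httpClient.executeMethod(httpGet);
                System.out.println(kbps + " kbps\tResponse: " + httpGet.getStatusLine()
                        + "\tLatency: " + (System.currentTimeMillis() - before) / 1000.0 + " sec.");
            } finally {
                httpGet.releaseConnection();
                Protocol.unregisterProtocol("http"); // restore the normal socket factory
            }
        }
    }
}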