
Quick Start Grinder - Part II

Sample project

We will build a simple project that performs a Google search for "Grinder". You need some fundamental knowledge of HTTP requests: GET, POST, headers and so on.

See Parts I and III of the series for the surrounding context.

Project Structure

Choose a directory to set up your sources in; let us say it is c:\projects\grindersample. Now create the sub-directories the later sections rely on inside it, at least bin (for the start scripts) and log (for Grinder's output).
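The layout above can be sketched in a few lines of plain Python. The bin and log names are taken from the later sections of this post (bin\startConsole.cmd, and the log sub-directory Grinder writes to); a relative path is used here so the sketch runs anywhere, where the post itself uses c:\projects\grindersample.

```python
# Create the sample project layout. The post uses c:\projects\grindersample;
# a relative path is used here so the sketch runs on any machine.
# "bin" and "log" are the sub-directories the later sections refer to.
import os

project_dir = os.path.join("projects", "grindersample")
for sub in ("bin", "log"):
    os.makedirs(os.path.join(project_dir, sub), exist_ok=True)
```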

Sample Script

# vim: ts=2:et:sw=2:ai:sm:nolist:foldmethod=indent:

# standard Grinder imports
from net.grinder.script import Test
from net.grinder.script.Grinder import grinder
from net.grinder.plugin.http import HTTPPluginControl, HTTPRequest
from HTTPClient import Codecs, NVPair

#add any specific modules you want to import; like regex
#~ import re

#### define other global variables you may need inside your script
#this is the main site we are going to test. The URL should point only
#to the server:port part. It should not have path within the server.
#reason : we will make a request object to this and then reuse that object
#         for subsequent calls.
url_test = "http://www.google.com"
#if you are behind a proxy, fill this in with the address and port. If you
#are not, leave it as an empty tuple
proxy_server = ("", 80) 
#proxy_server = ()

#in general, you shouldn't be coding your web sites around differences in
#headers, so we will use just one header set. Note that this will fail if
#your server-side code uses values such as Referer for logical
#decision making.
common_header= ( NVPair('Accept', '*/*'), NVPair('Cache-Control', 'no-cache'), )

#couple of variables that will be of use to identify which agent and 
#process are running, within the script
agentID = int(grinder.properties["grinder.agentID"])
processID = int(grinder.processName.split("-").pop())

#utility function shortcuts. Use this instead of print or system.out.println
log = grinder.logger.output
error = grinder.logger.error

connectionDefaults = HTTPPluginControl.getConnectionDefaults()
httpUtilities = HTTPPluginControl.getHTTPUtilities()
if len(proxy_server) >= 2 and proxy_server[0]:
  connectionDefaults.setProxyServer(proxy_server[0], proxy_server[1])
  log("** Set proxy server")

# we will just use default stuff
connectionDefaults.defaultHeaders = (
  NVPair('User-Agent', 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.04506.648; InfoPath.1)'),
  NVPair('Accept-Encoding', 'gzip, deflate'),
  NVPair('Accept-Language', 'en-us'),
  )

#this is the request that we will reuse for all our testing
rq_test = HTTPRequest(url=url_test, headers=common_header)

class TestRunner:
  """A TestRunner instance is created for each worker thread."""

  def __call__(self):
    """This method is called for every run performed by the worker thread."""
    log("agentID %s, processID %s, threadID %s" % (agentID, processID, grinder.threadNumber))
    #this is the first test - getting the search form
    self.doGetSearchPage()
    #this is the second one - actually searching for data
    self.doSearch('grinder')

  def doGetSearchPage(self):
    """Fetch the search form from Google"""

    #google page is the root level page
    log("**Get the search form")
    result = rq_test.GET("/")

    #now, there are a bunch of images, stylesheets etc. in this page.
    #Browser will get those normally, so if your intention is to test
    #the whole thing, we should get all that too. For now, we will just
    #get the logo.
    result = rq_test.GET("/intl/en_com/images/logo_plain.png")
    log("**Got the search form")

  def doSearch(self, search_for=''):
    """Send the search request to Google and get the resultant page"""

    if search_for.strip() == '':
      return #let us not bother Google unnecessarily

    #Google has one form that searches for a given string using GET
    #request is like search?hl=en&source=hp&q=GRINDER&btnG=Google+Search&meta=&aq=f&oq=
    log("**Sending request for %s" % search_for)
    result = rq_test.GET("/search?hl=en&source=hp&q=%s&btnG=Google+Search&meta=&aq=f&oq=" % search_for)
    #now also we need to get stuff like adsense, images etc.
    #we will skip that because we will assume that all we are interested
    #is in figuring out how to test the search component and not the
    #paraphernalia around it.
    log("**Got results %s" % search_for)

def instrumentMethod(test, method_name, c=TestRunner):
  """Instrument a method with the given Test."""
  unadorned = getattr(c, method_name)
  import new
  method = new.instancemethod(test.wrap(unadorned), None, c)
  setattr(c, method_name, method)

# Replace each method with an instrumented version.
# The first parameter is a test id. I usually let it run from 100 to 1000
# That gives you enough space to insert new tests in between
instrumentMethod(Test(100, 'Get Google Search Page'), 'doGetSearchPage')
instrumentMethod(Test(150, 'Do Search'), 'doSearch')
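One thing to note: doSearch drops search_for straight into the query string, which only works for single words like the Grinder example above. Multi-word terms need URL encoding first. Inside the Grinder script you would reach for the imported Codecs class; the same idea in plain Python (quote_plus here is an illustrative stand-in, not what the script above uses) looks like this:

```python
# Build the Google search path with the query term URL-encoded.
# quote_plus turns spaces into '+' and escapes reserved characters,
# as a browser does when submitting the search form.
from urllib.parse import quote_plus

def build_search_path(search_for):
    return ("/search?hl=en&source=hp&q=%s&btnG=Google+Search&meta=&aq=f&oq="
            % quote_plus(search_for))
```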

Run time properties

We need to prepare a grinder.properties file in the project folder. Here is how mine looks.

#name of the script that executes the test
grinder.script = <your script file>.py

# we will run it as a single process
grinder.processes = 1
# and a single thread within the process
grinder.threads = 1
#run the test once and exit
grinder.runs = 1

#agentID, if you are running this from different agents
grinder.agentID = 0
#if you run into memory issues, tweak the java options
grinder.jvm.arguments = -Dpython.home=c:/tools/jython -Dpython.cachedir=c:/tools/grinder/cachedir -Xms32m -Xmx32m

Running It

Open two console windows (Windows - Run - cmd) and navigate to the project directory in each.

In one, run bin\startConsole.cmd. Wait a few seconds, then run bin\startAgent.cmd in the other. After some time, the Grinder console will open up.

In the console, go to Distribute - Set Directory and choose your project directory. Now, when you click on the Script tab, you should see your files in an Explorer-like tree. Double-click the script and the properties file to open them. Now do a Distribute - Distribute Files to deploy the files to the Grinder agent - you can tell whether this worked by looking at the text that just scrolled by in the second console window.

Note: whenever you change any of the sources or properties, you need to do the Distribute again.

Now, click on the play icon and watch the script execute. If things are not working, you might want to stop data collection and start it again before the next run.

In the Graph tab, you can see how long each test took to run. Similarly, the output of your log and error calls can be seen under the log sub-directory. For each run, there are also separate out_* and error_* files per agent process.

Additionally, you see files named data*.log. These are simple CSV files that you can pull into a spreadsheet to analyze and chart how your tests performed.
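Since the data files are plain CSV, a few lines of Python get you per-test averages without a spreadsheet. A sketch, assuming a header row along the lines of Grinder 3's default; the column names here are an assumption, so check the first line of your own data_* file:

```python
# Compute the mean "Test time" per test id from a Grinder data log.
# The sample below is made up, in the shape of a Grinder 3 data file;
# the column names are assumptions - verify them against your own log.
import csv
import io

sample = """Thread, Run, Test, Start time (ms since Epoch), Test time, Errors
0, 0, 100, 1262304000000, 412, 0
0, 0, 150, 1262304001000, 230, 0
0, 1, 100, 1262304002000, 388, 0
"""

def mean_test_times(csv_text):
    """Return the mean 'Test time' (ms) keyed by test id."""
    totals = {}
    for row in csv.DictReader(io.StringIO(csv_text), skipinitialspace=True):
        total, count = totals.get(row["Test"], (0, 0))
        totals[row["Test"]] = (total + int(row["Test time"]), count + 1)
    return {tid: total / count for tid, (total, count) in totals.items()}
```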