Category Archives: Performance

Nice Reading: GitHub’s CSS Performance

[Slide deck: GitHub’s CSS Performance, title slide]

Another must-read for the performance-aware programmer and tester: a nice slide deck about CSS performance issues at GitHub, including solutions.

A talk on some CSS performance problems that were solved at GitHub. The talk was given at the CSS Dev Conference 2012 in Honolulu, HI. I recorded the presentation from my laptop and posted it here: https://vimeo.com/54990931

Enjoy and keep in mind that performance matters.

How fast are we going now?

Steve Souders summarizes his HTML5 Developer Conference keynote in a nice blog post.

I enjoy evangelizing web performance because I enjoy things that are fast (and efficient). Apparently, I’m not the only one. …

… Vendors are pitching a faster web. Consumers are expecting a faster web. Businesses succeed with a faster web. But is the Web getting faster? Let’s take a look.

So hop over and keep reading: How fast are we going now?

Blog Performance – We Kicked it up a Notch

Usually, we just measure the performance of our customers’ applications and talk about it, but from time to time we have to set an example ourselves.

In the last couple of weeks, we increasingly felt that our blog wasn’t loading fast enough to deliver a satisfying experience. You know that when you can feel it, it might already be too late. Additionally, SEO is about content and performance, and our blog is an important marketing tool for us.

That’s why we went on a quest for improving the performance of our company’s WordPress-based blog. Our motto: “Don’t just complain about the lack of performance, do something about it!”

Step 0 – Measuring

Measuring is believing, and so we started with this WebPageTest result showing our initial blog performance. As you can see, the initial performance is bad, a lot of content is not properly cached, and rendering started only after 2.6 seconds. Time for some serious tuning.

Step 1 – Reading

Tuning requires you to know what to tune. Thus, we read the famous Best Practices for Speeding Up Your Web Site by Yahoo. A similar article by Google can be found here. If you deal with web site performance in any way, you should read this. We consider it mandatory for performance and web testers.

Continue reading Blog Performance – We Kicked it up a Notch

The Art of Reading Performance Test Charts

Powerful load and performance test tools don’t just randomly retrieve pages of your website with zillions of users at the same time; they also cover realistic scenarios that simulate real-world users. It’s a given that they can deliver lots of useful information and plot interesting charts. To take full advantage of these benefits, however, you need to be able to interpret this information and draw the right conclusions.

It is this need for the correct interpretation of test results, the mapping of everything you see against actual application behavior, that makes performance and load testing a non-trivial task. It requires a lot of experience to decide on the right actions, make the right assumptions, or simply come up with a reasonable explanation of why something happened one way and not another.

In today’s article, we’d like to present a couple of charts displaying typical response time patterns and discuss what they could indicate.

Disclaimer: Of course, the reasons for a certain behavior vary a lot, depending on your application and testing. However, as there’s no fixed manual for the interpretation of load testing charts, we want to provide a couple of basic guidelines to help you get better at interpreting them yourself and make the most of your test results. Feel free to comment on whether or not you agree with the ideas and explanations we come up with.

The Warm-up

These charts might indicate a system with a cold cache, for instance, when the system has just been started and the caches aren’t filled yet.

The basic characteristics of such behavior are high response times in the beginning, followed by gradually lower response times until eventually a certain degree of runtime stability is reached. This time frame is often referred to as the system’s warm-up period. Throughout this period, a couple of things can happen under the surface. If you know the system under test well, you’ll probably come up with the following: database and file system caches are filled, proxies learn about the data and store them, the system under test scales up because it sees traffic, page snippets are cached, reducing the computing overhead… you name it.

Also keep in mind that it might be the testing process itself that causes such a response time profile. If the system is perfectly warmed up when you hit it, the traffic you send might be too uniform in the beginning. In that case, randomization kicks in so that the traffic eventually distributes better over time. Furthermore, take into consideration that your load generation software and hardware are possibly not warmed up either.

The Caching

These charts depict either a typical cache-clean pattern or a job pattern. In the case of a cache clean, system-internal caches expire every half hour. If that’s not the case, the charts may indicate a running background job draining power from the database or consuming lots of system bandwidth.

Both charts display the same test, executed over different time periods. While two spikes could signify a random event (despite the fact that the temporal distance of 30 minutes is suspicious), the longer test run seems to prove our first assumption: something is going on every half hour.

In any case, make sure that such behavior is not produced by the test machines themselves, for example because they’re busy writing or backing up data.

The Spiking

This is what we call a forest of spikes: many spikes that don’t seem to follow a comprehensible pattern. Longer response times just occur occasionally, often caused by requests that access certain data or URLs. To unravel that mystery, you have to dig into the results in more detail, find the calls behind the spikes, and derive a pattern from the information you find. Often you’ll come across similar URLs, request parameters, or maybe response codes. Don’t forget any application logs you might have access to, such as web server, error, information, or debug logs. In a perfect world, your application under test offers the necessary tools to get to the bottom of this problem.

XLT lets you easily find this information. All test result data is accessible as documented CSV files that are easy to read. Feel free to work with this information and go beyond the scope of the reports available.
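
For instance, a small Java sketch like the following can scan such a file for outliers. The column layout is an assumption for illustration (name, start time, runtime in milliseconds); check the CSV documentation of your XLT version for the actual format.

import java.io.BufferedReader;
import java.io.FileReader;

public class SpikeFinder
{
    public static void main(String[] args) throws Exception
    {
        final long threshold = 5000; // flag requests slower than 5 s

        try (BufferedReader in = new BufferedReader(new FileReader("timers.csv")))
        {
            String line;
            while ((line = in.readLine()) != null)
            {
                // assumed layout: name,startTime,runtime
                String[] cols = line.split(",");
                if (cols.length < 3 || !cols[2].trim().matches("\\d+"))
                {
                    continue; // skip header or malformed lines
                }

                long runtime = Long.parseLong(cols[2].trim());
                if (runtime > threshold)
                {
                    System.out.println(cols[0] + " at " + cols[1] + " took " + runtime + " ms");
                }
            }
        }
    }
}

Grouping the flagged lines by name or URL is then often enough to reveal the pattern behind the spikes.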

The worst outcome here is a non-identifiable pattern and no information on the server side as to what might have happened. In such a case, you have to repeat the test and narrow down your test setup later on to exclude as many variables as possible to find the cause. This is actually also a good time to ask for development or tech-ops support.

The 3rd-party Calls

The first chart is typical for issues with 3rd parties, especially in the field of e-commerce. We’re not talking about direct calls to 3rd parties here, such as analytics vendors or recommendation engines, but about calls from one server to another. Thus, the response time we see is the combined response time of two systems. Of course, it’s good to know the areas where 3rd-party calls typically happen, but you have to know the application under test anyway to test it efficiently. So when the final order steps start to act weird, you can easily narrow down the potential causes.

The second chart looks more like the cache clean or expiration problem described above, but since you know the application, you also know that this area doesn’t use the typical caching logic but is highly dynamic instead. This means that the errors occurring every 50 minutes point in a different direction: as we know that 3rd parties are attached and called during shipping, we can conclude that the 3rd party failed on us.

Verdict

Knowing typical response time patterns helps you pin down a problem so that you can give hints to the developers or further shape the path of testing. If you can read charts and draw the right conclusions, or at least know which questions to address, you’ll be ahead of the crowd. Be aware that knowledge of the system under test is very important – producing and measuring a certain load doesn’t make much sense when you’re not able to actually interpret and explain what you’ve measured. Always remember: 42 is not a valid answer for everything. :)

Statistics, Facts and Figures: Web Performance on Black Friday and Cyber Monday

According to onvab.com, Cyber Monday 2012 was the biggest buying and spending day ever!

Whether you are a fan of facts and figures or find such data rather boring – statistics published on dzone.com are really interesting: http://css.dzone.com/articles/black-friday-cyber-monday-2012

Short Summary

  • Most of the shops seemed well prepared. Congratulations!
  • Big players like Amazon and Apple came in with good results and obviously did their homework.
  • Barnes & Noble should review their site really carefully: Up to 262 requests per page and about 10 seconds until a page was fully loaded – this upsets even patient customers.

Now imagine you run a retail site that doesn’t handle the increased traffic on such an important day. Without doubt this will bring revenue down big time, which leads to an upset CEO and rolling heads. Not to mention all the nasty comments on Twitter and Facebook.

But how to prepare for THE day?

Option 1

You could hire thousands of students (each of them owning a notebook, smartphone, or tablet) and let them shop on your site. If you want to repeat this scenario, you obviously have to pay them twice or three times or…

Option 2

You could automate common page flows using a good test automation tool and run load tests frequently, watch the trends, and really run your own BlackFriday before BlackFriday.

It’s up to you! Afraid we can’t help you with option 1…

Understand and choose a load test model

When running load tests, it is all about the concept of how you set up and run a test. We already talked about getting to the numbers; today we will talk about the load models you can use to run your tests.

One of the decisions you have to make is choosing the right load model. Sounds complicated, but in fact it isn’t. A load model basically describes which basic characteristic you can influence to reach a certain load and performance behavior.

Just one more thing before we talk about the models: the definitions used in the following paragraphs are not standardized across the industry, so each vendor uses them slightly differently.

Transactions: “A transaction is an execution of exactly one test case or test scenario. In order to perform the scenario, the page flow is modeled in code. The test scenario is implemented as a test case, which itself executes a sequence of one or more actions.”

Action: “An action can be defined as one irreducible step within a test case. So an action interacts with the current page and – as a result – loads the next page. That page is associated with this action and becomes the current page for the next action in the test scenario. Generally, an action triggers one or more requests.”

Requests: “This level is equivalent to the HTTP request level used in web browsers or in any other application that relies on HTTP communication. You do not have to deal with requests directly because they are automatically generated by the underlying HtmlUnit framework when performing actions on HTML elements.”
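
To make the hierarchy of these three terms tangible, here is a purely conceptual Java sketch; it is not the actual XLT API, and all names are ours. One transaction executes the whole scenario, the scenario is a sequence of actions, and each action fires one or more requests under the hood.

import java.util.List;

// one irreducible step within a test case
interface Action
{
    void execute(); // interacts with the current page and loads the next one
}

// one execution of exactly one test scenario
class Transaction
{
    private final List<Action> actions; // the modeled page flow

    Transaction(List<Action> actions)
    {
        this.actions = actions;
    }

    void run()
    {
        for (Action action : actions)
        {
            // each action triggers one or more HTTP requests underneath
            action.execute();
        }
    }
}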

Let’s check out an example. Before running the next marketing campaign, your company’s online shop will have to be tested to make sure it won’t break under that much traffic. Two of your teammates are discussing the problem:

Head of IT: “The shop will have to handle 2000 concurrent users.”

Head of marketing: “With our campaign the shop will have to handle 500 orders per hour.”

Both requirements can lead to two different load models, depending on the exact goal that is set.

  1. User Count Model: This is the approach of the IT guy. Define a certain number of concurrent users the system will have to handle. At any given time during the test, the target system has to handle 2000 concurrent users, no more, no less. The number of transactions that can be achieved depends on the time the target system needs to respond.

  2. Arrival Rate Model: The money-driven marketing guy leans toward another criterion: the number of transactions (or here: orders) per hour. The scenario will be performed 500 times, equally distributed across a period of one hour. As many concurrent users as necessary to fulfill the given arrival rate are used.

Okay. Understood. Head of IT is happy with the user count model. He’s pretty sure the shop will be stable with 2000 concurrent users.

So what will happen if the response time of the system increases during the test period?

When using the user count model, simply fewer transactions will be finished. In comparison, the arrival rate model will also increase the number of concurrent users to make sure the given number of orders will be performed.

Can you see the difference? The arrival rate model is feedback-based and will react to response time changes during the test period. This way, the generated load is somewhat unpredictable – and that’s how the real world behaves.

Or, to use a more common analogy: people will usually still enter the supermarket despite the fact that there are long lines at the cashier. That’s what happens in your online shop when the response time increases and you use the arrival rate model: more users are coming in, because they do not know that there are already lines in the store.

The chances of picking the arrival rate model are getting higher, but the IT guy is just a little bit afraid of the aggressive behavior of such a test. Just imagine:

  • response time increases
  • more concurrent users are used
  • response time will increase even more due to heavier load
  • more concurrent users are used
  • recursion here

[Figure: Load Test Models, Arrival Rate Model]

But there is a solution. To avoid a complete breakdown, you can define an upper limit for the number of concurrent users used in the arrival rate model. This helps you restrict the total load on the system if you want to avoid a total overload as a result of the feedback loop. That is not reality, of course, but if users feel a certain pain in reality, they might hold back as well.
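
To make the feedback loop tangible, here is a minimal, self-contained Java simulation sketch. All numbers are invented, and the response time growth is a toy model; the point is only to show how the required user count follows the response time and how a cap limits it.

public class ArrivalRateSim
{
    public static void main(String[] args)
    {
        final double targetPerHour = 2000.0; // transactions per hour (arrival rate)
        final double thinkTime = 20.0;       // think time per transaction, seconds
        final int userCap = 30;              // upper limit on concurrent users

        double responseTime = 2.0;           // starts healthy, then degrades

        for (int minute = 0; minute < 10; minute++)
        {
            double txDuration = thinkTime + responseTime;

            // users needed = arrival rate * transaction duration (Little's law)
            int usersNeeded = (int) Math.ceil(targetPerHour * txDuration / 3600.0);
            int usersRunning = Math.min(usersNeeded, userCap);

            System.out.printf("minute %d: response time %.1f s -> %d users needed, %d running%n",
                              minute, responseTime, usersNeeded, usersRunning);

            responseTime *= 1.5; // toy feedback: the server slows down under load
        }
    }
}

Without the cap, the user count keeps climbing as responses get slower; with the cap, the load levels off before the system breaks down completely.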

OK, the arrival rate model is going to be used for the next load test. But aren’t there scenarios in which the user count model could be a good choice? Here are some guidelines on which model fits which purpose best.

The user count model is well-suited for:

  • a simple baseline test (single-user test) to assess the base performance of the system under almost no load,
  • a real load or performance test to assess the performance under a high but predictable, i.e., stable, load,
  • a test that should be easily repeatable and whose load factor is not influenced by the system under test.

The arrival rate model is best used if the load test should prove that a system is indeed able to handle a certain number of transactions per hour. Since this is the primary purpose of load and performance tests, the arrival rate load model is the best choice for most of your test tasks.

So get testing and give all models a try. Load tests often start with a fixed user count, and once this runs fine, you move over to the more challenging arrival rate model.

XLT 4.1.8 Update Release

XLT 4.1.8 has been released. It contains a couple of improvements and bug fixes. You can get the latest version here: https://lab.xceptance.de/releases/xlt/4.1.8/.

These are the two most important changes.

Optimized scanning of CSS rules

XLT simulates the download behavior of a browser as closely as possible. With com.xceptance.xlt.css.download.images set to onDemand, XLT carefully considers which image resources referred to in CSS files have to be downloaded. It checks all CSS rules to determine whether they apply to elements in the DOM tree and, if so, extracts the resource information for later download. This process was previously very expensive and could run for several seconds when a page was complex and a lot of CSS rules had to be matched. It has now been heavily optimized.

Configurable retry behavior for keep-alive connections

If persistent connections (keep-alive) are enabled, test cases might fail sporadically with an exception such as:
java.lang.RuntimeException: org.apache.http.NoHttpResponseException: The target server failed to respond

This exception occurs when a request is sent at the very moment the connection is being closed on the server side.

The underlying HttpClient retries requests if the connection was closed by the server. However, by default it retries only idempotent operations such as GET and HEAD, but not POST and PUT. In order to avoid these connection errors, the user can now configure whether non-idempotent operations are to be retried as well. Common browsers seem to behave the same way, most likely assuming that the server did not start processing the request if it closed the connection immediately without responding with a proper HTTP status code.

A new boolean property, com.xceptance.xlt.http.retry.nonIdempotentRequests, in config/default.properties controls whether or not operations such as PUT and POST are retried in case of certain networking problems. Note that idempotent operations are always retried.
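
For example, to also retry POST and PUT requests after such a dropped connection, the property would be set in config/default.properties like this (the value shown is an assumption; choose it deliberately, since retrying non-idempotent requests can cause duplicate submissions):

com.xceptance.xlt.http.retry.nonIdempotentRequests = true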

Get the right load mix out of a few numbers

When testing ecommerce applications on SaaS environments, you often do not get enough numbers from clients because they simply do not know them, or know only a few. One reason for that is that the client has not had any online presence before. Often the client also does not have detailed numbers because the previous hoster or the IT department holds them back or simply cannot get to them.

So what do you do when you do not know every detail about the current or future load pattern? Below we describe one approach that has been very successful so far and has always yielded satisfying results.

What we need

  • Visits per peak hour (example 10k)
  • Page views per peak hour (example 100k)
  • Orders per peak hour (example 200 orders)
  • Optionally we can use the conversion rate to get from visits to orders or vice versa.
  • Optionally we can take searches, “add to cart” operations, user registrations, and so on into account.

The mentioned scenarios are typical ecommerce scenarios and look like this. We will not talk about smaller scenarios such as address editing for a registered user.

  • TSingleClickVisitor: Enters the store only, does not move beyond the start page
  • TBrowsing: TSingleClickVisitor plus category and product browsing
  • TSearch: TSingleClickVisitor plus keyword search plus browsing of the result
  • TAdd2Cart: TBrowsing plus add to cart operations
  • TGuestCheckout: TAdd2Cart plus checkout without an order placement (anonymous user)
  • TGuestOrder: TAdd2Cart plus full checkout (anonymous user)
  • TRegisteredCheckout: TAdd2Cart plus checkout without an order placement (registered customer)
  • TRegisteredOrder: TAdd2Cart plus full checkout (registered customer)
  • TRegistration: Account creation

What we assume

Ecommerce sites follow similar patterns, and with a few exceptions, such as special promotions, certain behavioral patterns are nearly identical. For instance, about 50% of all checkouts are stopped before the order is placed, and about 20 to 50% of all created carts aren’t checked out at all.

What we calculate

Based on these assumptions, we put together a fairly simple but sufficiently accurate load mix. Of course, we could also analyze the current log files and try to come up with something more precise, but that would only be a snapshot. Traffic is very volatile, so we should be generous when setting up this mix.

Since we do not take daily averages as the base but the peaks, we will have a pretty comfortable buffer for our daily ecommerce life anyway.

Bottom-Up

Let’s say 200 orders are set as the goal. Splitting them 50/50 between registered and anonymous users, we get 100 visits of each type. All numbers are per hour, of course.

  • TGuestOrder = 100
  • TRegisteredOrder = 100

As the next step, we take our 50% checkout abandonment rate into account. We have 200 checkouts per hour that are stopped and 200 that run through and turn up as orders (as counted previously). So we need to add 200 visits. And because these visitors can run either with their preset account or without, we split them into 100 guest and 100 registered checkout attempts.

  • TGuestCheckout = 100
  • TRegisteredCheckout = 100
  • TGuestOrder = 100
  • TRegisteredOrder = 100

This gives us 400 visits per hour that go into the checkout. We now assume a low cart-to-checkout conversion rate, about 20% for instance, so we take 400 checkout visits * 5 and get 2,000 visits that involve cart usage. Since 20% of these already convert into checkouts, we have 2,000 minus 400 visits that use the cart only.

  • TAdd2Cart = 1,600
  • TGuestCheckout = 100
  • TRegisteredCheckout = 100
  • TGuestOrder = 100
  • TRegisteredOrder = 100

We also know that many users do not continue after hitting the home page or any landing page. Let’s add some of these users now.

  • TSingleClickVisitor = 1,000
  • TAdd2Cart = 1,600
  • TGuestCheckout = 100
  • TRegisteredCheckout = 100
  • TGuestOrder = 100
  • TRegisteredOrder = 100

But wait, what are we missing? Well, we have not registered any new accounts yet. Or did we? We did, because the registered checkout creates accounts if required and reuses them several times. But to get a more substantial customer growth, we simply add 200 visits that run registrations:

  • TRegistration = 200
  • TSingleClickVisitor = 1,000
  • TAdd2Cart = 1,600
  • TGuestCheckout = 100
  • TRegisteredCheckout = 100
  • TGuestOrder = 100
  • TRegisteredOrder = 100

What is left to do? Well, we do not have any “I am just looking around” visitors yet. We know that our total visit count is 10,000 and we have already assigned 3,200 of these to cart, checkout, and registration, so we have 6,800 visits left that we can now use for something else. Depending on the shop type (large store, small store, etc.), people tend to use search more or less. To put enough stress on search and refinements, we simply assume that 50% of all people like to search. Thus the missing 6,800 visits will be 3,400 catalog-browsing visits and 3,400 visits that use search before browsing the search result.

The total mix is:

  • TBrowsing = 3,400
  • TSearch = 3,400
  • TRegistration = 200
  • TSingleClickVisitor = 1,000
  • TAdd2Cart = 1,600
  • TGuestCheckout = 100
  • TRegisteredCheckout = 100
  • TGuestOrder = 100
  • TRegisteredOrder = 100
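
If you want to reproduce the arithmetic, here is a minimal Java sketch of the bottom-up calculation; the class and variable names are ours, and the rates are the assumptions discussed above.

public class LoadMixCalculator
{
    public static void main(String[] args)
    {
        final int visitsPerHour = 10_000;
        final int ordersPerHour = 200;
        final double checkoutAbandonmentRate = 0.5; // 50% of checkouts are stopped
        final double cartToCheckoutRate = 0.2;      // 20% of carts convert to checkouts
        final int registrations = 200;              // extra account creations
        final int singleClickVisitors = 1_000;      // do not move beyond the start page

        // orders, split 50/50 between guest and registered users
        int guestOrders = ordersPerHour / 2;
        int registeredOrders = ordersPerHour - guestOrders;

        // as many checkouts are abandoned as succeed (50% abandonment), split 50/50 again
        int abandoned = (int) Math.round(ordersPerHour * checkoutAbandonmentRate / (1 - checkoutAbandonmentRate));
        int guestCheckouts = abandoned / 2;
        int registeredCheckouts = abandoned - guestCheckouts;

        // scale the 400 checkout visits up by the cart-to-checkout rate
        int checkoutVisits = guestOrders + registeredOrders + guestCheckouts + registeredCheckouts;
        int cartVisits = (int) Math.round(checkoutVisits / cartToCheckoutRate); // 2,000
        int add2Cart = cartVisits - checkoutVisits;                             // 1,600

        // whatever is left of the 10,000 visits browses or searches, split 50/50
        int remaining = visitsPerHour - cartVisits - registrations - singleClickVisitors;
        int browsing = remaining / 2;
        int searching = remaining - browsing;

        System.out.println("TBrowsing           = " + browsing);
        System.out.println("TSearch             = " + searching);
        System.out.println("TRegistration       = " + registrations);
        System.out.println("TSingleClickVisitor = " + singleClickVisitors);
        System.out.println("TAdd2Cart           = " + add2Cart);
        System.out.println("TGuestCheckout      = " + guestCheckouts);
        System.out.println("TRegisteredCheckout = " + registeredCheckouts);
        System.out.println("TGuestOrder         = " + guestOrders);
        System.out.println("TRegisteredOrder    = " + registeredOrders);
    }
}

Changing any of the input numbers immediately gives you a new mix, which is handy when the client revises the forecast.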

Wait… where are my concurrent users? This is simple: “concurrent users” is an inaccurate way of describing traffic, so we have not used that number yet. Why is that?

To get to the bottom of that, we simply check how long a visit takes. Depending on the shop, an average visit might take 2 to 4 minutes; successful shopping might take 15 minutes. If we expect about 10 page views per visit, and a page view takes 1 second to load and 20 seconds to read (already a really, really high number for an average), a visit takes 10 * 1 second + 9 * 20 seconds = 190 seconds.

Let’s go with the 190 seconds for an average visit. If we could serve just one visitor at a time, we could serve 60 minutes (3,600 seconds) / 190 seconds per visit = 19 visitors per hour. But because we would like to serve 10,000 per hour, we have to deal with 10,000 / 19 = 526 visitors at the same time. This is the famous concurrent user number.

If we now double the think time, we have about 1,052 concurrent users/visitors. If we cut it down to 1 second of think time, we get a visit length of 19 seconds and therefore 10,000 visits / (3,600 seconds / 19) = 53 concurrent visitors.
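
The same arithmetic as a compact Java sketch (numbers from above; because the sketch does not round the intermediate visits-per-hour figure, its output differs slightly from the rounded numbers in the text):

public class ConcurrentUsers
{
    // first page view loads in 1 s; every further page view adds think time + 1 s load
    static double visitLengthSeconds(int pageViews, double thinkTime)
    {
        return pageViews * 1.0 + (pageViews - 1) * thinkTime;
    }

    public static void main(String[] args)
    {
        final int visitsPerHour = 10_000;

        for (double thinkTime : new double[] { 20.0, 40.0, 1.0 })
        {
            double visitLength = visitLengthSeconds(10, thinkTime);

            // concurrent users = visits per hour * visit length / 3,600 seconds
            double concurrent = visitsPerHour * visitLength / 3600.0;

            System.out.printf("think time %.0f s -> visit length %.0f s -> %.0f concurrent users%n",
                              thinkTime, visitLength, concurrent);
        }
    }
}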

So we already have three different “concurrent user” numbers and are still simulating the same traffic. This shows that the number of concurrent users is a pretty questionable way of describing traffic.

It does not matter which number we take, because most of the time the servers will see the same traffic. Because we run against a SaaS environment that serves many other customers at the same time and is sized to serve the peak traffic of all customers at once, we have plenty of comfortable room around us. This permits us to run with 53 concurrent visitors for most of the testing, which saves client hardware resources for the load generation, i.e., it saves money. We are basically only interested in the runtime of requests, not in whether the environment can handle the load, because it can.

The goal of this test is to demonstrate that the implementation on the SaaS platform is efficient, not that the SaaS platform itself is fast and stable, because this is guaranteed by design and contract. Testing this would require way more traffic and generate huge costs, because the environment would suddenly no longer be a shared one but exclusively used for this testing purpose.

When the entire test setup is finalized and all tests have turned out well, we are going to turn up the concurrent user count to 530 users and compare the results with the previous measurements, just to satisfy the traditional test expectations.

Does that work for you?

Hope that gives you an idea of how to come up with a nice user mix for testing without having much data in the first place. Comments welcome.

Nice reading: CSS3 vs. CSS

Nice article about the advantages of CSS3 in terms of coding as well as download speed: CSS3 vs. CSS: A Speed Benchmark.

I believe in the power, speed and “update-ability” of CSS3. Not having to load background images as structural enhancements (such as PNGs for rounded corners and gradients) can save time in production (i.e. billable hours) and loading (i.e. page speed). At our company, we’ve happily been using CSS3 on client websites for over a year now, and I find that implementing many of these properties right now is the most sensible way to build websites.

Until today, all of that was based on an assumption: that I can produce a pixel-perfect Web page with CSS3 quicker than I can with older image-based CSS methods, and that the CSS3 page will load faster, with a smaller overall file size and fewer HTTP requests. As a single use case experiment, I decided to design and code a Web page and add visual enhancements twice: once with CSS3, and a second time using background images sliced directly from the PSD. I timed myself each round that I added the enhancements, and when finished, I used Pingdom to measure the loading times.

More…

Enjoy reading.

Load Testing Web Applications – Do it on the DOM Level!

This article was first published in the June 2010 issue of the magazine Testing Experience.

HTTP-level tools record HTTP requests on the HTTP protocol level. They usually provide functionality for basic parsing of HTTP responses and building of new HTTP requests. Such tools may also offer parsing and extraction of HTML forms for easier access to form parameters, automatic replacement of session IDs by placeholders, automatic cookie handling, functions to parse and construct URLs, automatic image URL detection and image loading, and so on. Extraction of data and validation are often done with the help of string operations and regular expressions, operating on plain HTTP response data. Even if HTTP-level tools address many load testing needs, writing load test scripts using them can be difficult.

Challenges with HTTP-Level Scripting

Challenge 1: Continual Application Changes

Many of you probably know this situation: A load test needs to be prepared and executed during the final stage of software development. There is a certain time pressure, because the go-live date of the application is in the near future. Unfortunately, there are still some ongoing changes in the application, because software development and web page design are not completed yet.

Your only chance is to start scripting soon, but you find yourself struggling to keep up with application changes and adjusting the scripts. Some typical cases are described below.

  • The protocol changes for a subset of URLs, for example, from HTTP to HTTPS. This could happen because a server certificate becomes available and the registration and checkout process of a web shop, as well as the corresponding static image URLs, are switched to HTTPS.
  • The URL path changes due to renamed or added path components.
  • The URL query string changes by renaming, adding or removing URL parameters.
  • The host name changes for a subset of URLs. For example, additional host names may be introduced for a new group of servers that delivers static images or for the separation of content management URLs and application server URLs that deliver dynamically generated pages.
  • HTML form parameter names or values are changed or form parameters are added or removed.
  • Frames are introduced or the frame structure is changed.
  • JavaScript code is changed, which leads to new or different HTTP requests, to different AJAX calls, or to a new DOM (Document Object Model) structure.

In most of these cases, HTTP-level load test scripts need to be adjusted. There is even a high risk that testers do not notice certain application changes, and although the scripts do not report any errors, they do not behave like the real application. This may have side effects that are hard to track down.

Challenge 2: Dynamic Data

Even if the application under test is stable and does not undergo further changes, there can be serious scripting challenges due to dynamic form data. This means that form field names and values can change with each request. One motivation to use such mechanisms is to prevent the web browser from recalling filled-in form values when the same form is loaded again. Instead of “creditcard_number”, for example, the form field might have a generated name like “cc_number_9827387189479843”, where the numeric part is changed every time the page is requested. Modern web applications also use dynamic form fields for protection against cross-site scripting attacks or to carry security-related tokens.

Another problem can be data that is dynamically changing, because it is maintained and updated as part of the daily business. If, for example, the application under test is an online store that uses search-engine-friendly URLs containing catalog and product names, these URLs can change quite often. Even worse, sometimes the URLs contain search-friendly catalog and product names, while embedded HTML form fields use internal IDs, so that there is no longer an obvious relation between them.

Session IDs in URLs or in form fields may also need special handling in HTTP-level scripts. The use of placeholders for session IDs is well supported by most load test tools. However, special script code might be needed, if the application not only passes these IDs in an unmodified form, but also uses client-side operations on them or inserts them into other form values.

To handle the above-mentioned cases, HTTP-level scripts need manually coded, and thus unfortunately also error-prone, logic.
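
As a tiny illustration of what such manual logic looks like, here is a Java sketch that digs a dynamically generated form field name out of the raw HTML with a regular expression. The field name pattern is taken from the example above; the HTML snippet and everything else are invented.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DynamicFieldName
{
    public static void main(String[] args)
    {
        // raw response body as an HTTP-level tool sees it (shortened, invented)
        String html = "<form><input type=\"text\" name=\"cc_number_9827387189479843\"/></form>";

        // hand-written extraction logic for the generated field name
        Matcher m = Pattern.compile("name=\"(cc_number_\\d+)\"").matcher(html);
        if (m.find())
        {
            // this name must then be inserted into the next request being built
            System.out.println("field name: " + m.group(1));
        }
    }
}

Every form with dynamic fields needs a hand-maintained snippet like this, and each one silently breaks when the application changes.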

Challenge 3: Modeling Client-Side Activity

In modern web applications, JavaScript is often used to assemble URLs, to process data, or to trigger requests. The resulting requests may also be recorded by HTTP-level tools, but if their URLs or form data change dynamically, the logic that builds them needs to be reproduced in the test scripts.

Besides this, it can be necessary to model periodic AJAX calls, for example to automatically refresh the content of a ticker that shows the latest news or stock quotes. For a realistic load simulation, this also needs to be simulated by the load test scripts.

Challenge 4: Client-Side Web Browser Behavior

For correct and realistic load simulations, the load test tool needs to implement further web browser features. Here are a few examples:

  • Caching
  • CSS handling
  • HTTP redirect handling
  • Parallel and configurable image loading
  • Cookie handling

Many of these features are supported by load test tools, even if the tools act on the HTTP level, but not necessarily all of them are supported adequately. If, for example, the simulated think time between requests of a certain test case is varied, a low-level test script might always load the cacheable content in the same way – either it was recorded with an empty cache and the requests are fired, or the requests were not recorded and will never be issued.

DOM-Level Scripting

What is the difference between HTTP-level scripting tools and DOM-level scripting tools? The basic distinction between the levels is the degree to which the client application is simulated during the load test. This also affects the following characteristics:

  • Representation of data: DOM-level tools use a DOM tree instead of simple data structures.
  • Scripting API: The scripting API of DOM-level tools works on DOM elements instead of strings.
  • Amount and quality of recorded or hard-coded data: There is much less URL and form data stored with the scripts. Most of this data is handled dynamically.

DOM-level tools add another layer of functionality on top. Besides the handling of HTTP, these tools also parse the contained HTML and CSS responses to build a DOM tree from this information, similar to a real web browser. The higher-level API enables the script creator to access elements in the DOM tree using XPath expressions, or to perform actions or validations on certain DOM elements. Some tools even incorporate a JavaScript engine that is able to execute JavaScript code during the load test.
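
As a hedged sketch of what DOM-level scripting feels like, here is a minimal example using a recent version of HtmlUnit, the framework mentioned above as the layer underneath XLT. The URL and the XPath expression are invented, and the null check after the lookup is omitted for brevity.

import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlAnchor;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class DomLevelExample
{
    public static void main(String[] args) throws Exception
    {
        try (WebClient webClient = new WebClient())
        {
            HtmlPage page = webClient.getPage("https://shop.example.com/");

            // address the element structurally instead of hard-coding its URL
            HtmlAnchor link = page.getFirstByXPath("//div[@id='navigation']//a[1]");

            // clicking returns the next page, just like in a real browser
            HtmlPage next = link.click();
            System.out.println(next.getTitleText());
        }
    }
}

Note how the script stores no URL for the click at all; if the application changes the link target, the script keeps working, and if the element disappears, the failure is immediately visible.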

Advantages

DOM-level scripting has a number of advantages:

  • Load test scripts become much more stable against changes in the web application. Instead of storing hard-coded URLs or data, they operate dynamically on DOM elements like “the first URL below the element xyz” or “hit the button with id=xyz”. This is especially important as long as application development is still ongoing. As a consequence, you can start scripting earlier.
  • Scripting is easier and faster, in particular if advanced script functionality is desired.
  • Validation of result pages is also easier on the DOM level compared to low-level mechanisms like regular expressions. For example, checking a certain HTML structure or the content of an element, like “the third element in the list below the second H2” can be easily achieved by using an appropriate XPath to address the desired element.
  • Application changes like changed form parameter names normally do not break the scripts, if the form parameters are not touched by the script. But, if such a change does break the script because the script uses the parameter explicitly, the error is immediately visible since accessing the DOM tree element will fail. The same is true for almost all kinds of application changes described above. Results are more reliable, because there are fewer hidden differences between the scripts and the real application.
  • CSS is applied. Assume there is a CSS change such that a formerly visible UI element that can submit a URL becomes invisible now. A low-level script would not notice this change. It would still fire the old request and might also get a valid response from the server, in which case the mismatch between the script and the changed application could easily remain unnoticed. In contrast, a DOM-level script that tries to use this UI element would run into an error that is immediately visible to the tester.
  • If the tool supports it, JavaScript can be executed. This avoids complicated and error-prone re-modeling of JavaScript behavior in the test scripts. JavaScript support has become more and more important in recent years with the evolution of Web 2.0/AJAX applications.

Disadvantages

There is one disadvantage of DOM-level scripting. The additional functionality needs more CPU and main memory, for instance to create and handle the DOM tree. Resource usage increases even more if JavaScript support is activated.

Detailed numbers vary considerably with the specific application and structure of the load test scripts. Therefore, the following numbers should be treated with caution. Table 1 shows a rough comparison, derived from different load testing projects for large-scale web applications. The simulated client think times between a received response and the next request were relatively short. Otherwise, significantly more users might have been simulated per CPU.

Table 1: Rough comparison of simulated virtual users per CPU

Scripting Level                      Virtual Users per CPU
HTTP level                           100..200
DOM level                            10..30
DOM level + JavaScript execution     2..10

If you evaluate these numbers, please keep in mind that machines are becoming ever more powerful and that there are many flexible and easy-to-use on-demand cloud computing services today, so that resource usage should not prevent DOM-level scripting.

Conclusion

Avoid hard-coded or recorded URLs, parameter names, and parameter values as far as possible. Handle everything dynamically. This is what we have learned. One good way to achieve this is to do your scripting on the DOM level, not on the HTTP level. If working on the DOM level and/or executing JavaScript is not possible for some reason, you always have to make compromises and accept a number of disadvantages.

We have created and executed web-application load tests for many years now, in a considerable number of different projects. Since 2005, we have mainly used our own tool, Xceptance LoadTest (XLT), which is capable of working on different levels and supports fine-grained control over options like JavaScript execution. In our experience, the advantages of working on the DOM level, in many cases even with JavaScript execution enabled, by far outweigh the disadvantages. Working on the DOM level makes scripting much easier and faster, the scripts handle much of the dynamically changing data automatically, and they become much more stable against typical changes in the web application.

Ronny Vogel is technical manager and co-founder of Xceptance. His main areas of expertise are test management, functional testing, and load testing of web applications in the field of e-commerce and telecommunications. He holds a Master’s degree (Dipl.-Inf.) in computer science from Chemnitz University of Technology and has 16 years of experience in the field of software testing. Xceptance is a provider of consulting services and tools in the area of software quality assurance, with headquarters in Jena, Germany, and a branch office in Cambridge, Massachusetts, USA.