Archive for category 'Performance'

Link: The critical rendering path

Friday, 3. January 2014 16:25


If you care about web page speed, this is the information you should internalize first.

The most important concept in page speed is the critical rendering path. Understanding this concept enables something remarkable: making a large webpage with many resources load faster than a small webpage with few resources.

Since most webpages have many different components, it is not always possible to just remove everything to make a page load faster. If you have ever wondered “What else can I do to make my pages fast?” or “How does Google expect pages to load in one second?” then this concept is for you.

Continue reading: http://www.feedthebot.com/pagespeed/critical-render-path.html

Category: Performance | Comments (0)

Concurrent Users – The Art of Calculation

Friday, 26. July 2013 4:13

Most of you probably know the term Concurrent User. In the context of load and performance testing, this metric is often claimed to be the measure of all things, accompanied by astronomically high numbers we can’t really verify and that are sometimes simply used as a sales argument for overpriced software products.

Today’s article is meant to shed some light on the concurrent user metric and the misunderstandings and myths surrounding it. Since Xceptance focuses on the internet and e-commerce, illustrations and examples will mainly refer to webshops; keep in mind, though, that the topic isn’t restricted to the domain of e-commerce load testing. Feel free to comment below, whether affirmative or critical.

Let’s start with a couple of key terms to help you understand what we’re talking about:

  • Visit: In general, a visit occurs when you send a request to a server and, as a response, the website you requested is displayed. A visit starts with the first page view, ends with the last one, and consists of one or more page views.
  • Session: Technical term for a visit, basically the technical picture underlying it. Visit and session are often used synonymously.
  • Page view or page impression: A single complete page delivered in response to the request of a URL; in a world of Ajax, intermediate logical pages can be considered a page view as well. A page view can trigger further technical requests (HTML, CSS, JavaScript, images, etc.).
  • Request: Submission of a request to a server, in the case of web applications mostly via the HTTP/HTTPS protocol. The requested content may be HTML, CSS, JavaScript as well as images, videos, Flash, or Silverlight applications – HTTP can deliver almost anything.
  • Think time: Time period between two page views of a visit.
  • Scenario: The course of a visit in terms of a use case (for example, to search something, to order something, or both). Representation of test cases meant to be run as load tests.
  • Concurrent User: We don’t exactly know about them yet…

Load and Performance Test

A load test aims to reflect either present or anticipated load conditions. In either case, it’s impossible for a load test to cover all eventualities and be economical at the same time. There is a myriad of ways to explore a webshop, so you first decide on the most typical ones and then turn them into scenarios. Most of the time, we consider a scenario an isolated visit that repeats the steps of the test case and thus uses defined data (note that random data is also defined data).

Let’s assume three scenarios: a visitor that is just looking (Browsing), a visitor that puts products into the cart (Add2Cart), and a visitor that checks out as a guest and wants their ordered items to be shipped to an address (Order).

The users have to go through the following steps to cover the scenarios completely:

Browsing user

  1. Homepage
  2. Select a catalogue
  3. Select a subcatalogue
  4. Select a product

Add2Cart user

  1. Browsing 1.-4.
  2. Put a product into the cart

Order user

  1. Add2Cart 1.-2.
  2. Proceed to checkout
  3. Enter an address
  4. Select a payment method
  5. Select a delivery type
  6. Place the order

The first challenge is choosing the content for the single actions: should we always go for the same product or the same catalogue, should the number of items or the size of the cart vary, and so on. These three scenarios alone offer infinite possibilities for variation. But let’s stick with the basic steps and the simple Browsing scenario for now.

Concurrent Users

When testing against a server, a single run of Browsing would be a visit consisting of 4 page views and possibly further requests for static content. A second execution of the test with all data and connections (cookies, HTTP keep-alive, and browser cache) having been reset would result in another visit. If you now run these two visits simultaneously and independently of one another, you end up with two concurrent users. Note that “user” is actually not quite the right term, as we’re talking about concurrent visits here. We prefer the term visit in this context; the person performing it is the visitor.

Both of our visitors execute 4 page views each, resulting in a total of 8 page views. As each page view has a runtime on the server, let’s say 1 sec, one visit takes at least 4 sec. Therefore, if one user repeats their visits for one hour, he or she completes 3,600 seconds / (4 seconds per visit) = 900 visits per hour. Two concurrent visitors result in 1,800 visits in total, leading to 1,800 visits x (4 page views per visit) = 7,200 page views.

We just said “if one user repeats”. Of course, a real user would never repeat a visit that many times. Just look at the user here as the load test execution engine repeating the visit independently of other “users”.

Think times

Now, most users aren’t that fast, of course, which is why think times are usually included. The average think time currently amounts to something between 10 and 20 seconds, depending on the web presence. It used to be 40 seconds, but today’s users are more experienced and user guidance has improved a lot, so they can navigate through a website much faster. Let’s assume a think time of 15 sec for our example.

A visit now takes (4 page views x 1 sec each) + (3 think times x 15 sec each). It’s only 3 think times because there’s none after the last click, which terminates the visit. Accordingly, our visit duration is 49 sec. If we now have a visitor repeat that for an hour, we’ll end up with a user completing 3,600 sec / (49 sec per visit) = 73.5 visits per hour.

If we want to test 1,800 visits again, we need 1,800 visits / (73.5 visits per hour per user) = 24.5 users, so about 25. The number of page views stays the same, since 1 visit equals 4 page views and the number of visits is constant. These 25 users need to complete their visits simultaneously and in parallel, but still independently of one another.
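If you prefer code over prose, here is the same calculation as a minimal Java sketch (class and variable names are ours; the numbers are the example values from above):

public class ConcurrentUserMath
{
    public static void main(String[] args)
    {
        final double pageViews = 4;              // page views per visit
        final double responseTime = 1;           // sec of server runtime per page view
        final double thinkTime = 15;             // sec between two page views
        final double targetVisitsPerHour = 1800;

        // n page views mean n - 1 think times (no think time after the last click)
        final double visitDuration = pageViews * responseTime + (pageViews - 1) * thinkTime;

        final double visitsPerHourPerUser = 3600 / visitDuration;
        final double concurrentUsers = targetVisitsPerHour / visitsPerHourPerUser;

        System.out.printf("visit duration: %.0f sec%n", visitDuration);      // 49
        System.out.printf("visits/hour/user: %.1f%n", visitsPerHourPerUser); // 73.5
        System.out.printf("concurrent users: %.1f%n", concurrentUsers);      // 24.5 -> 25
    }
}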

Any Number of Concurrent Users is Possible

We now have 25 concurrent users that produce the exact same traffic simulation as 2 users without a think time. The exact same traffic? No, of course not – this is where extreme parallelism and the unpredictability of both testing and reality comes into play.

If the requirement is the simulation of 1,800 visits per hour and 7,200 page views per hour, we can now freely pick a think time and thereby arrive at any number of concurrent visits aka users between 2 and x. With respect to our simulation period of 1 hour, the server sees a new session (the beginning of a visit) every two seconds – 3,600 sec / 1,800 visits – as our visits are equally distributed.

You can also question the numbers by approaching the problem from another perspective: if 100 users are simultaneously active, they can simultaneously request 100 page views. In the worst case (remember that 1 page view takes 1 sec on the server side), this would amount to 100 x 3,600 = 360,000 page views per hour. Since a requirement of 100 concurrent users is usually never bound to a certain time period, you have to assume that these users could potentially click at any time.

From this point of view, you’ll soon realize that a number of concurrent users can basically mean anything: much traffic, little traffic, little load, much load. Only by knowing the test cases and additional numbers, such as visits and page views per time unit, can you a) define a number of concurrent users and b) cross-check each number against the others by calculation.

Oh, and needless to say that 42 is always a good number of concurrent users… ;-)

Why 300,000 Concurrent Users Aren’t 300,000 Visits

What we want to emphasize here is that a temporal dimension is absolutely necessary. A requirement of 300,000 concurrent users implies that they could all click simultaneously, which would produce 300,000 visits in one blow. Now, you may want to argue that they aren’t all coming at the same time. However, if the users aren’t simultaneously active, i.e. haven’t all started a visit, they aren’t concurrent users anymore – and then you don’t need to simulate them in the first place.

Provided an equal distribution and an average visit duration of 49 sec, 300,000 users per hour – a number that business-wise is mostly meant as visits – results in the following: a user completes 3,600 sec / (49 sec visit duration) = 73.5 visits per hour, so you end up with 300,000 / 73.5 = 4,081 concurrent visits, aka real concurrent users, at any given second. These 4,081 concurrent visits produce 4 page views each within 49 sec (the visit duration), that is, 16,324 page views per 49 sec, or 333 page views per sec (see next paragraph).

In terms of page views without think times, this means: 300,000 visits are 1,200,000 page views (for our example above). Thus, you need to complete 1,200,000 page views / 3,600 seconds = 333 page views per second. Without any think time, you would therefore need only 333 users for the simulation.

Regarding the final result, the simulation of 4,081 users with 15 sec think time therefore equals the simulation of 333 users without think time. On the server side, both result in the identical number of visits per time period, the identical number of page views, etc.
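There is a compact way to express this relationship, known as Little’s Law: the number of concurrent visits equals the visit arrival rate multiplied by the visit duration. A small Java sketch using the example numbers (the small deviation from 4,081 stems from rounding):

public class LittlesLaw
{
    public static void main(String[] args)
    {
        final double visitsPerHour = 300000;
        final double arrivalRate = visitsPerHour / 3600; // ~83.3 new visits per second

        // concurrent visits = arrival rate x visit duration
        System.out.printf("with 15 sec think time (49 sec visits): %.0f%n", arrivalRate * 49); // ~4,083
        System.out.printf("without think time (4 sec visits): %.0f%n", arrivalRate * 4);       // ~333
    }
}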

That’s impossible…

You may raise some objections here, and they are actually valid: in reality, the think time would never be exactly 15 sec, and the response time would never always be 1 sec. This is where randomness comes into play. With respect to our example, let’s assume the think time varies between 10 and 20 sec; the arithmetic mean would still be 15 sec.

What happens now results in the following calculation: in the worst case, all visits take only 4 sec + 3 x 10 sec = 34 sec. Within these 34 sec, our server now has to deliver as many visits and page views as it previously delivered within 49 sec. Thus, our 4,081 concurrent test users wouldn’t simulate 300,000 visits per hour but 3,600 / 34 x 4,081 = 432,105 visits per hour.

That means you need to define the target numbers you want to support, or measure what the server is currently able to deliver. As soon as you allow your x visits to vary in duration, you end up with a higher maximum number of visits that you need to support but actually didn’t set out to test.

Note that our sole focus here is the load and performance test. You want to know whether you can cope with traffic x, where you assume x to be a constant worst case that applies over a longer period of time. If you want to measure the server side beyond the maximum “good” case, you’re no longer aiming at performance but at overload behavior; then you focus on stability and a predictable way of “declining”. All tests that are normally run first – which is absolutely correct – are tests that want to identify or verify the good case.

But still…

Of course, it can make sense to test with 4,081 users instead of 333 even though the server side sees the same number of visits and page views per time period: 4,081 users can be concurrent for a very short time and claim, for example, 4,081 web server threads or sockets, while 333 users will never reach this number. Even if you keep the think time constant for all 4,081 users, the traffic won’t stay as synchronous as you planned it to be in the beginning.

In the worst case, you can’t test at all now, because each test run leads to a different result. By restricting yourself to 333 users with no or only minimal think time, you initially restrict the “movement” of the system in order to measure it. If the system delivers what it should, the test may expand in its width, i.e. both the think times and the number of concurrent users go up.

Steady Load Vs. Constant Arrival Rate

To resolve this dilemma a) without having to consider the server side while b) still being able to measure accurately, you can choose between two typical load profiles (see the configuration sketch after the list):

  • Steady Load: Runs a fixed number of users that wait for the server, for instance, when it has long response times. This way you might not reach the desired number of visits, because the users depend on the server’s response behavior. This profile is suitable for controlled measurements.
  • Constant Arrival Rate: Users arrive as new visitors regardless of what is happening on the server side. When the server is too slow, new users will still keep coming in. If the server can handle the load, the system runs stably and you just need your user number x (according to our calculation, 4,081, for example). If there are problems on the server side, the user number automatically increases to x + n (for example, to a total of 10,000 users). Ideally, you only need 4,081 users, but when the server behaves unexpectedly, up to 10,000 users will be activated. This way you can also test the overload behavior at the same time.
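With XLT, for instance, these two profiles map to load test configuration properties. The following is only a sketch of what that could look like – treat the exact property keys as our assumption and verify them against the XLT manual:

## Option A – Steady Load: a fixed number of concurrent users
com.xceptance.xlt.loadtests.TOrder.users = 333

## Option B – Constant Arrival Rate: the arrival rate drives the load,
## the user count only serves as the upper limit
com.xceptance.xlt.loadtests.TOrder.users = 10000
com.xceptance.xlt.loadtests.TOrder.arrivalRate = 300000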

Finally…

We hope you were able to follow along and that the mess of numbers didn’t get too bad. At times, the concurrent user topic gets downright absurd… Feel free to comment; any remark is appreciated.

Category: Performance, XLT | Comments (0)

Nice Reading: GitHub’s CSS Performance

Friday, 7. June 2013 12:46

GitHub CSS Performance, Title Slide

Another must-read for the performance-aware programmer and tester: a nice slide deck about CSS performance issues at GitHub that includes solutions as well.

A talk on some problems solved related to CSS Performance at GitHub. The talk was given at CSS Dev Conference in Honolulu, HI 2012. I recorded the presentation from my laptop and posted it here https://vimeo.com/54990931

Enjoy and keep in mind that performance matters.

Category: Interesting Reading, Open Source, Performance | Comments (0)

How fast are we going now?

Sunday, 2. June 2013 21:13

Steve Souders summarizes his HTML5 Developer Conference keynote in a nice blog post.

I enjoy evangelizing web performance because I enjoy things that are fast (and efficient). Apparently, I’m not the only one. …

… Vendors are pitching a faster web. Consumers are expecting a faster web. Businesses succeed with a faster web. But is the Web getting faster? Let’s take a look.

So hop over and keep reading: How fast are we going now?

Category: Performance | Comments (0)

Blog Performance – We Kicked it up a Notch

Monday, 13. May 2013 9:58

Usually, we just measure the performance of our customers’ applications and talk about it, but from time to time we have to set an example ourselves.

In the last couple of weeks, we increasingly felt that our blog wasn’t loading fast enough to deliver a satisfying experience. You know, when you can feel it, it might already be too late. Additionally, SEO is about content and performance, and our blog is an important marketing tool for us.

That’s why we went on a quest for improving the performance of our company’s WordPress-based blog. Our motto: “Don’t just complain about the lack of performance, do something about it!”

Step 0 – Measuring

Our initial blog performance

Measuring is believing, and so we started with this WebPageTest result. As you can see, the initial performance is bad, a lot of content is not properly cached, and rendering started after 2.6 sec. Time for some serious tuning.

Step 1 – Reading

Tuning requires you to know what to tune. Thus, we read the famous Best Practices for Speeding Up Your Web Site by Yahoo. A similar article by Google can be found here. If you deal with web site performance in any way, you should read this. We consider it mandatory reading for performance engineers and web testers.

[...]

Category: Performance, Software | Comments (0)

The Art of Reading Performance Test Charts

Monday, 22. April 2013 9:36

Powerful load and performance test tools don’t just retrieve pages of your website randomly with zillions of users at the same time; they also cover realistic scenarios that simulate real-world users. Naturally, they deliver lots of useful information and plot interesting charts. To fully take advantage of this, however, you need to be able to interpret the information and draw the right conclusions.

It is this need for the correct interpretation of test results – the mapping of everything you see onto actual application behavior – that makes performance and load testing a non-trivial task. It requires a lot of experience to decide on the right actions, make the right assumptions, or simply come up with a reasonable explanation of why something happened one way and not another.

In today’s article, we’d like to present a couple of charts displaying typical response time patterns and discuss what they could indicate.

Disclaimer: Of course, the reasons for a certain behavior vary a lot, depending on your application and testing. However, as there’s no fixed manual for the interpretation of load testing charts, we want to provide you with a couple of basic guidelines to help you get better in interpreting them yourself and make the most out of your test results. Feel free to comment whether or not you agree with the ideas and explanations we come up with.

The Warm-up

These charts might indicate a system with a cold cache, for instance, when the system has just been started and the caches aren’t filled yet.

The basic characteristics of such a behavior are high response times in the beginning, followed by gradually decreasing response times until eventually a certain degree of runtime stability is reached. This time frame is often referred to as the system’s warm-up period. Throughout this period, a couple of things can happen under the surface. If you know the system under test well, you’ll probably come up with the following: database and file system caches are filled, proxies learn about the data and store them, the system under test scales up because it sees traffic, page snippets are cached so the computing overhead decreases… you name it.

Also keep in mind that it might be the testing process itself that causes such a response time profile. If the system is perfectly warmed up when you hit it, the traffic you send might simply be too uniform in the beginning; over time, randomization kicks in and the traffic distributes better. Furthermore, take into consideration that your load software and hardware might not be warmed up either.

The Caching

These charts depict either a typical cache clean or a job pattern. In the case of a cache clean, system-internal caches expire, for instance, every half hour. If that’s not the case, the charts may indicate a background job draining power from the database or consuming lots of system bandwidth.

Both charts display the same test, executed over different time periods. While two spikes could signify a random event (despite the suspicious temporal distance of 30 minutes), the longer test run seems to confirm our first assumption: something is going on every half hour.

In any case, make sure that such a behavior is not produced by the test machines themselves, for example, because they’re busy writing or backing up data.

The Spiking

This is what we call a forest of spikes: many spikes that don’t seem to follow a comprehensible pattern; longer runtimes just occur occasionally, often caused by requests accessing certain data or URLs that produce long runtimes. To solve that mystery, you have to dig into the results in more detail, find the calls behind the spikes, and derive a pattern from the information you find. Often you’ll come across similar URLs, request parameters, or maybe response codes. Don’t forget any application logs you might have access to, such as web server, error, information, or debug logs. In a perfect world, your application under test offers the necessary tools to get to the bottom of this problem.

XLT lets you easily find this information. All test result data is accessible as CSV files that are quickly readable and documented. Feel free to work with this information and go beyond the scope of the available reports.
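As an illustration, a spike hunt over such CSV data could look like the following Java sketch. Note that the column layout (runtime in the second column, URL in the third) is purely our assumption for this example – check the documentation of your result files for the actual format:

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Map;
import java.util.TreeMap;

public class SpikeFinder
{
    public static void main(String[] args) throws IOException
    {
        final long threshold = 5000; // ms; everything slower counts as a spike
        final Map<String, Integer> spikesPerUrl = new TreeMap<>();

        try (BufferedReader in = Files.newBufferedReader(Paths.get(args[0])))
        {
            String line;
            while ((line = in.readLine()) != null)
            {
                final String[] cols = line.split(",");
                if (cols.length < 3)
                {
                    continue; // skip malformed lines
                }
                try
                {
                    final long runtime = Long.parseLong(cols[1]); // assumed runtime column
                    if (runtime > threshold)
                    {
                        spikesPerUrl.merge(cols[2], 1, Integer::sum); // assumed URL column
                    }
                }
                catch (NumberFormatException ignored)
                {
                    // header line, skip
                }
            }
        }

        // if the same URLs pile up here, you have found your pattern
        spikesPerUrl.forEach((url, count) -> System.out.println(count + "x " + url));
    }
}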

The worst outcome here is a non-identifiable pattern and no information on the server side as to what might have happened. In such a case, you have to repeat the test and then narrow down your test setup to exclude as many variables as possible and find the cause. This is also a good time to ask for development or tech-ops support.

The 3rd-party Calls

The first chart is typical for issues with 3rd parties, especially in the field of e-commerce. We’re not talking about direct calls to 3rd parties here, such as analytics vendors or recommendation engines, but about calls from one server to another. Thus, the response time we see is the combined response time of two systems. Of course, it’s good to know where 3rd-party calls typically happen, but you have to know the application under test anyway to test it efficiently. So when the final order steps start to act weird, you can easily narrow down the potential reasons.

The second chart looks more like the cache clean or expiration problem described above, but since you know the application, you also know that this area doesn’t use the typical caching logic and is highly dynamic instead. This means that the errors occurring every 50 minutes point in a different direction: as we know that 3rd parties are attached and called during shipping, we can conclude that the 3rd party failed on us.

Verdict

Knowing typical response time patterns helps you narrow down a problem so that you can give hints to development or further shape the path of testing. If you can read charts and derive the right conclusions, or at least know which questions to ask, you’ll be ahead of the crowd. Be aware that knowledge of the system under test is very important – producing and measuring a certain load doesn’t make much sense if you’re not able to actually interpret and explain what you’ve measured. Always remember: 42 is not a valid answer for everything. :)

Category: Performance, Testing, XLT | Comments (0)

Statistics, Facts and Figures: Web Performance on Black Friday and Cyber Monday

Wednesday, 5. December 2012 7:51

According to onvab.com, Cyber Monday 2012 was the biggest buying and spending day ever!

Whether you’re a fan of facts and figures or find such data rather boring – the statistics published on dzone.com are really interesting: http://css.dzone.com/articles/black-friday-cyber-monday-2012

Short Summary

  • Most of the shops seemed well prepared. Congratulations!
  • Big players like Amazon and Apple came in with good results and obviously did their homework.
  • Barnes & Noble should review their site really carefully: Up to 262 requests per page and about 10 seconds until a page was fully loaded – this upsets even patient customers.

Now imagine you run a retail site that doesn’t handle the increased traffic on such an important day. Without doubt this will bring revenue down big time, which leads to an upset CEO and rolling heads. Not to mention all the nasty comments on Twitter and Facebook.

But how to prepare for THE day?

Option 1

You could hire thousands of students (each of them owning a notebook, smartphone, or tablet) and let them shop on your site. If you want to repeat this scenario, you obviously have to pay them twice or three times or…

Option 2

You could automate common page flows using a good test automation tool and run load tests frequently, watch the trends, and really run your own BlackFriday before BlackFriday.

It’s up to you! Afraid we can’t help you with option 1…

Category: Performance | Comments (0)

Understand and choose a load test model

Tuesday, 27. November 2012 15:58

When running load tests, it’s all about the concept of how you set up and run a test. We already talked about getting to the numbers; today we will talk about the load models you can use to run your tests.

One of the decisions you have to make is choosing the right load model. Sounds complicated – but in fact it isn’t. A load model basically describes which basic characteristic you can influence to reach a certain load and performance behavior.

Just one more thing before we talk about the models: the definitions used in the following paragraphs are not consistent across the industry, so each vendor uses them slightly differently.

Transactions: “A transaction is an execution of exactly one test case or test scenario. In order to perform the scenario, the page flow is modeled in code. The test scenario is implemented as a test case, which itself executes a sequence of one or more actions.”

Action: “An action can be defined as one irreducible step within a test case. So an action interacts with the current page and – as a result – loads the next page. That page is associated with this action and becomes the current page for the next action in the test scenario. Generally, an action triggers one or more requests.”

Requests: “This level is equivalent to the HTTP request level used in web browsers or in any other application that relies on HTTP communication. You do not have to deal with requests directly because they are automatically generated by the underlying HtmlUnit framework when performing actions on HTML elements.”

Let’s check out an example. Before running the next marketing campaign, your company’s online shop has to be tested to make sure it won’t break under that much traffic. Two of your teammates are discussing the problem:

Head of IT: “The shop will have to handle 2000 concurrent users.”

Head of marketing: “With our campaign the shop will have to handle 500 orders per hour.”

Both requirements can lead to two different load models, depending on the exact goal that is set (a schematic sketch follows the list below).

  1. User Count Model: This is the approach of the IT guy: define a certain number of concurrent users the system has to handle. At any given time during the test, the target system has to handle 2000 concurrent users, no more, no less. The number of transactions that can be achieved depends on the time the target system needs to respond.

  2. Arrival Rate Model: The money-driven marketing guy tends toward another criterion: the number of transactions (here: orders) per hour. The scenario will be performed 500 times, equally distributed across a period of one hour. As many concurrent users as necessary to fulfill the given arrival rate are used.
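To make the contrast concrete, here is a schematic Java sketch of both models. This is not a real load generator, just the control logic, and all names are ours:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class LoadModels
{
    // User Count Model: 2000 workers, each starting its next transaction only
    // after the previous one has finished - throughput follows the response time
    static void userCountModel(Runnable scenario)
    {
        final ExecutorService pool = Executors.newFixedThreadPool(2000);
        for (int i = 0; i < 2000; i++)
        {
            pool.submit(() ->
            {
                while (true)
                {
                    scenario.run();
                }
            });
        }
    }

    // Arrival Rate Model: a new transaction starts every 7.2 sec (500 per hour),
    // no matter whether the earlier ones have finished - load follows the schedule
    static void arrivalRateModel(Runnable scenario)
    {
        final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
        timer.scheduleAtFixedRate(() -> new Thread(scenario).start(),
                0, 7200, TimeUnit.MILLISECONDS); // 3,600 sec / 500 orders = 7.2 sec
    }
}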

Okay, understood. The head of IT is happy with the user count model. He’s pretty sure the shop will be stable with 2000 concurrent users.

So what will happen if the response time of the system increases during the test period?

When using the user count model, simply fewer transactions will be finished. The arrival rate model, in comparison, will also increase the number of concurrent users to make sure the given number of orders is performed.

Can you see the difference? The arrival rate model is feedback-based and reacts to response time changes during the test period. This way, the generated load is somewhat unpredictable – and that’s how the real world behaves.

Or to use a more common analogy: people will usually still enter the supermarket despite the long lines at the cashiers. That’s what happens in your online shop when the response time increases and you use the arrival rate model: more users are coming in, because they don’t know that there are already lines in the store.

The chances of picking the arrival rate model are getting higher, but the IT guy is just a little bit afraid of the aggressive behavior of such a test. Just imagine:

  • response time increases
  • more concurrent users are used
  • response time will increase even more due to heavier load
  • more concurrent users are used
  • recursion here

Load Test Models: Arrival Rate Model

But there is a solution. To avoid a complete breakdown, you can define an upper limit for the number of concurrent users in the arrival rate model. This helps you restrict the total load on the system if you want to avoid a total overload as a result of the feedback loop. That is not reality, of course, but if users feel a certain pain in reality, they might hold back as well.
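In XLT-style properties, such a cap could look like this; we assume here that the configured user count acts as the upper bound once an arrival rate is set, so verify this semantics against your tool’s documentation:

## the arrival rate drives the load; the user count caps the feedback loop (assumed semantics)
com.xceptance.xlt.loadtests.TOrder.arrivalRate = 500
com.xceptance.xlt.loadtests.TOrder.users = 2000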

OK, the arrival rate model is going to be used for the next load test. But are there scenarios in which the user count model is the better choice? Here are some guidelines on which model fits which purpose best.

The user count model is well-suited for:

  • a simple baseline test (single-user test) to assess the base performance of the system under almost no load,
  • a real load or performance test to assess the performance under a high, but predictable aka stable load,
  • a test that should be easily repeatable, with a load factor that is not influenced by the system under test.

The arrival rate model is best used if the load test should prove that a system is indeed able to handle a certain number of transactions per hour. Since this is the primary purpose of load and performance tests, the arrival rate load model is the best choice for most of your test tasks.

So get testing and give both models a try. Load tests often start with a fixed user count, and once this runs fine, you move over to the more challenging arrival rate model.

Category: Performance, Testing | Comments (0)

XLT 4.1.8 Update Release

Saturday, 18. February 2012 0:15

XLT 4.1.8 has been released. It contains a couple of improvements and bug fixes. You can get the latest version here: https://lab.xceptance.de/releases/xlt/4.1.8/.

These are the two most important changes.

Optimized CSS rule scanning

XLT simulates the download behavior of a browser as closely as possible. With com.xceptance.xlt.css.download.images set to onDemand, XLT carefully considers the download of image resources referred to in CSS files. It checks whether the CSS rules apply to the elements in the DOM tree and, if so, extracts the resource information for later download. This process was previously very expensive and could run for several seconds when a page was complex and a lot of CSS rules had to be matched. It has now been heavily optimized.

Configurable retry behavior for keep-alive connections

If persistent connections (keep-alive) are enabled, test cases might fail sporadically with an exception such as:
java.lang.RuntimeException: org.apache.http.NoHttpResponseException: The target server failed to respond

This exception occurs when the request is sent at the very moment the connection is being closed on the server side.

The underlying HttpClient retries requests if the connection was closed by the server. However, by default it retries only idempotent operations such as GET and HEAD, but not POST and PUT. To avoid these connection errors, the user can now configure whether non-idempotent operations are to be retried as well. Common browsers seem to behave the same way and most likely assume that the server did not start to process the request if it closed the connection immediately without responding with a proper HTTP status code.

A new boolean property com.xceptance.xlt.http.retry.nonIdempotentRequests in config/default.properties controls whether or not operations such as PUT and POST are retried in case of certain networking problems. Note that idempotent operations are always retried.
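In properties syntax, that is (file and property name as given above):

## config/default.properties
## also retry POST/PUT after the server closed a keep-alive connection
## without a response; idempotent operations (GET, HEAD) are always retried
com.xceptance.xlt.http.retry.nonIdempotentRequests = true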

Category: Performance, Software, Testing, XLT | Comments (0)

Get the right load mix out of a few numbers

Tuesday, 7. June 2011 13:49

When testing ecommerce applications on SaaS environments, you often do not get enough numbers from clients because they simply do not know them, or know only a few. One reason is that the client has not had any online presence before. Often the client also does not have detailed numbers because the previous hoster or the IT department holds them back or simply cannot get to them.

So what do you do when you do not know every detail about the current or future load pattern? Below we describe one approach that has been very successful so far and has always yielded satisfying results.

What we need

  • Visits per peak hour (example 10k)
  • Page views per peak hour (example 100k)
  • Orders per peak hour (example 200 orders)
  • Optionally we can use the conversion rate to get from visits to orders or vice versa.
  • Optionally we can take searches, “add to cart” operations, user registrations, and so on into account.

The scenarios mentioned below are typical ecommerce scenarios. We will not talk about smaller scenarios such as address editing for a registered user.

  • TSingleClickVisitor: Enters the store only, does not move beyond the start page
  • TBrowsing: TSingleClickVisitor plus category and product browsing
  • TSearch: TSingleClickVisitor plus keyword search plus browsing of the results
  • TAdd2Cart: TBrowsing plus add to cart operations
  • TGuestCheckout: TAdd2Cart plus checkout without an order placement (anonymous user)
  • TGuestOrder: TAdd2Cart plus full checkout (anonymous user)
  • TRegisteredCheckout: TAdd2Cart plus checkout without an order placement (registered customer)
  • TRegisteredOrder: TAdd2Cart plus full checkout (registered customer)
  • TRegistration: Account creation

What we assume

Ecommerce sites follow similar patterns, and with a few exceptions, such as special promotions, certain behavioral patterns are nearly identical. For instance, about 50% of all checkouts are abandoned before the order is placed. About 20 to 50% of all created carts aren’t checked out at all.

What we calculate

Based on these assumptions, we can put together a fairly simple but sufficiently accurate load mix. Of course, we could also analyze the current log files and try to come up with something more precise, but that would only be a snapshot. Traffic is very volatile, so we should be generous when setting up this mix.

Since we take the peaks rather than daily averages as our base, we will have a pretty comfortable buffer for our daily ecommerce life anyway.

Bottom-Up

Let’s say 200 orders per hour are set as the goal. Splitting them 50/50 between registered and anonymous users, we get 100 visits of each type. All numbers are per hour, of course.

  • TGuestOrder = 100
  • TRegisteredOrder = 100

As a next step, we take our 50% checkout abandonment rate into account. We have 200 checkouts per hour that are abandoned and 200 that run through and turn up as orders (as counted previously). So we need to add 200 visits. And because these visitors can run either with a preset account or without one, we split them up into 100 guest and 100 registered checkout attempts.

  • TGuestCheckout = 100
  • TRegisteredCheckout = 100
  • TGuestOrder = 100
  • TRegisteredOrder = 100

This gives us 400 visits per hour that go into the checkout. We now assume a low cart-to-checkout conversion rate, about 20% for instance, so we take 400 checkout visits x 5 and get 2,000 visits that involve cart usage. Since 400 of these have already been converted into checkouts, 2,000 minus 400 visits use the cart only.

  • TAdd2Cart = 1,600
  • TGuestCheckout = 100
  • TRegisteredCheckout = 100
  • TGuestOrder = 100
  • TRegisteredOrder = 100

We also know that many users do not continue after hitting the home page or any landing page. Let’s add some of these users now.

  • TSingleClickVisitor = 1,000
  • TAdd2Cart = 1,600
  • TGuestCheckout = 100
  • TRegisteredCheckout = 100
  • TGuestOrder = 100
  • TRegisteredOrder = 100

But wait, what are we missing? Well, we have not registered any new accounts yet. Or have we? We have, because the registered checkout creates accounts if required and reuses them several times. But to get more substantial customer growth, we simply add 200 visits that run registrations:

  • TRegistration = 200
  • TSingleClickVisitor = 1,000
  • TAdd2Cart = 1,600
  • TGuestCheckout = 100
  • TRegisteredCheckout = 100
  • TGuestOrder = 100
  • TRegisteredOrder = 100

What is left to do? Well, we do not have any “I am just looking around” visitors yet. We know that our total visit count is 10,000 and we have already assigned 3,200 of these to cart, checkout, registration, and single-click visits, so we have 6,800 visits left for something else. Depending on the shop type (large store, small store, etc.), people tend to use search more or less. To put enough stress on search and refinements, we simply assume that 50% of all people like to search. Thus, the remaining 6,800 visits will be 3,400 catalog-browsing visits and 3,400 visits that use search before browsing the search results.

The total mix is (a small calculation sketch follows the list):

  • TBrowsing = 3,400
  • TSearch = 3,400
  • TRegistration = 200
  • TSingleClickVisitor = 1,000
  • TAdd2Cart = 1,600
  • TGuestCheckout = 100
  • TRegisteredCheckout = 100
  • TGuestOrder = 100
  • TRegisteredOrder = 100
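For reproducibility, here is the whole bottom-up derivation as a small Java sketch; all rates are the example assumptions from above (50/50 guest vs. registered orders, 50% checkout abandonment, 20% cart-to-checkout conversion, 50% of the remaining visitors searching):

public class LoadMix
{
    public static void main(String[] args)
    {
        final int visitsPerHour = 10000;
        final int ordersPerHour = 200;

        final int guestOrder = ordersPerHour / 2;        // 100, 50/50 split
        final int registeredOrder = ordersPerHour / 2;   // 100

        // 50% checkout abandonment: one abandoned checkout per completed order
        final int guestCheckout = guestOrder;            // 100
        final int registeredCheckout = registeredOrder;  // 100

        final int checkoutVisits = guestOrder + registeredOrder
                + guestCheckout + registeredCheckout;    // 400

        // 20% cart-to-checkout conversion: 5 carts per checkout, minus the checkouts themselves
        final int add2Cart = checkoutVisits * 5 - checkoutVisits; // 1,600

        final int registration = 200;
        final int singleClickVisitor = 1000;

        // whatever is left is split 50/50 between browsing and searching
        final int remaining = visitsPerHour - checkoutVisits - add2Cart
                - registration - singleClickVisitor;     // 6,800
        final int browsing = remaining / 2;              // 3,400
        final int search = remaining / 2;                // 3,400

        System.out.printf("TBrowsing=%d, TSearch=%d, TAdd2Cart=%d, total=%d%n",
                browsing, search, add2Cart,
                browsing + search + registration + singleClickVisitor
                        + add2Cart + checkoutVisits);    // total = 10,000
    }
}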

Wait… where are my concurrent users? Simple: “concurrent users” is an inaccurate way of describing traffic, so we have not used that number yet. Why is that?

To get to the bottom of that, we simply check how long a visit takes. Depending on the shop, an average visit might take 2 to 4 minutes; a successful shopping trip might take 15 minutes. If we expect about 10 page views per visit, with each page view taking 1 second to load and 20 seconds to read (already a really high number for an average), a visit takes 10 x 1 second + 9 x 20 seconds = 190 seconds.

Let’s go with 190 seconds for an average visit. If we could serve just one visitor at a time, we could serve 3,600 seconds / (190 seconds per visit) = 19 visitors per hour. But because we would like to serve 10,000 per hour, we have to deal with 10,000 / 19 = 526 visitors at the same time. This is the famous concurrent user number.

If we now double the think time, we roughly double that number to about 1,050 concurrent users/visitors. If we cut the think time down to 1 second, we get a visit length of 19 seconds and therefore 10,000 visits / (3,600 seconds / 19) = 53 concurrent visitors.

So we already have three different “concurrent user” numbers while still simulating the same traffic. This shows that the number of concurrent users is a pretty questionable way of describing traffic.
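The underlying function is the same in all three cases: visits per hour divided by the number of visits one user can complete per hour. A tiny Java sketch (the 526 above comes from first rounding down to 19 visits per hour per user; the doubled think time gives about 1,028 when calculated exactly):

public class ConcurrentVisitors
{
    static double concurrent(double visitsPerHour, double visitDurationInSeconds)
    {
        return visitsPerHour / (3600 / visitDurationInSeconds);
    }

    public static void main(String[] args)
    {
        // 10 page views: 10 x 1 sec load time plus 9 think times in between
        System.out.printf("20 sec think time: %.0f%n", concurrent(10000, 10 + 9 * 20)); // ~528
        System.out.printf("40 sec think time: %.0f%n", concurrent(10000, 10 + 9 * 40)); // ~1,028
        System.out.printf(" 1 sec think time: %.0f%n", concurrent(10000, 10 + 9 * 1));  // ~53
    }
}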

It does not matter which number we take, because most of the time the servers will see the same traffic. Because we run against a SaaS environment that serves many other customers at the same time and is sized to serve the peak traffic of all customers combined, we have plenty of comfortable room around us. This permits us to run with 53 concurrent visitors for most of the testing, which saves client hardware resources for load generation, i.e. it saves money. We are basically only interested in the runtime of requests, not in whether the environment can handle the load, because it can.

The goal of this test is to demonstrate that the implementation on the SaaS platform is efficient, not that the SaaS platform itself is fast and stable, because the latter is guaranteed by design and contract. Testing that would require way more traffic and generate huge costs, because the environment would suddenly no longer be a shared one but exclusively used for this testing purpose.

When the entire test is finalized and all tests have turned out well, we are going to turn the concurrent user count up to 530 users and compare the results with the previous measurements – just to satisfy traditional test expectations.

Does that work for you?

Hope that gives you an idea of how to come up with a nice user mix for testing without having much data in the first place. Comments welcome.

Category: Performance, Testing, XLT | Comments (4)