Category Archives: Automation

The Delicate Art of Load Test Scripting

TL;DR: We are often asked why we need so much time to recheck load test scripts. So, here is our explanation in ten sentences or fewer.

“Why is the script broken? We haven’t changed anything!” A load test script can go wrong in two ways. It breaks clearly, hitting you with an exception or assertion failure and giving you explicit errors. Or it is merely incorrect: it appears to pass but is subtly flawed, leading to misleading results.

“But we haven’t changed anything in the UI!” Load test scripts don’t magically adjust. And you can’t treat load testing as UI automation on steroids: UI tests are resource-heavy because modern browsers are, which makes them inefficient and costly for simulating many users. Load tests instead use lightweight, lower-level simulations designed for true scale.

“So, where did this change occur?” Scripts break due to changes in HTML/CSS, JSON, required data, or application flow, while they become incorrect because of optional data changes, wrong requests/order, or outdated data.

The secret to lower script maintenance lies in communicating changes with your performance testers early and in validating all data. The less vague an API is, the easier it is to keep the scripts up to date. This way, you’ll ensure your load tests are a true reflection of reality, not just a green checkmark!

Introduction

It’s a question that echoes through many development teams: ‘Why do we need to touch the load test scripts? We haven’t changed a thing on the UI!’ This common query stems from a perfectly logical assumption – if the user interface looks the same, surely everything behind it is too, right? Unfortunately, in the complex world of modern applications, what you see isn’t always what you get, and a static UI can hide a whirlwind of activity that directly impacts your performance tests.

Let’s talk first about what can really mess up your load testing scripts. It’s super important to know the difference between a script that’s completely broken and one that’s just plain incorrect.

  • Broken scripts: These are the ones that just won’t run through at all. If it’s a JUnit test, for example, you’ll see a big fat failure message. It’s dead in the water.
  • Incorrect scripts: These are trickier. They run all the way through and show a “green” result, but they haven’t done what they were supposed to. They’re silently misleading you, and they are the main reason why load test script validation can’t just rely on the pass/fail outcome (see the sketch below).
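
To make the difference concrete, here is a minimal sketch in plain Java. The endpoint and the itemCount field are made up for illustration, and any real load test tool wraps this differently. The explicit check turns drift into a loud failure – leave it out and the script stays green no matter what the server answered:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class AddToCartProbe
    {
        public static void main(String[] args) throws Exception
        {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest addToCart = HttpRequest.newBuilder()
                .uri(URI.create("https://shop.example.com/api/cart")) // hypothetical endpoint
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"sku\":\"ABC-123\",\"qty\":1}"))
                .build();

            HttpResponse<String> response = client.send(addToCart, HttpResponse.BodyHandlers.ofString());

            // Validating the outcome makes the script "break" loudly when the API changes.
            if (response.statusCode() != 200 || !response.body().contains("\"itemCount\":1"))
            {
                throw new AssertionError("Add to cart did not work: " + response.body());
            }

            // An "incorrect" script would stop after send(): it stays green even if the
            // server returned an error page, because nothing ever checks the response.
        }
    }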

Test Automation vs. Performance Tests: Not the Same Thing!

Because people often mix these up and assume that a load test script is as easily maintainable as a test automation script, here are the key differences.

  • Test automation is usually about checking if something works. Think of it as replacing manual testing or extending its reach so you can get feedback faster. These tests are often UI-based and interact with a real browser or an app.
  • Performance/Load Tests, on the other hand, are about simulating how users behave, but at a much lower level than the UI. This lets you directly control things like the type of calls, data, and even filtering. You also get rid of a direct dependency on a real browser.

Now, you might think, “Why not just scale up my UI automation tests?” Good question! But here’s why that typically doesn’t work.

Modern browsers are hardware hogs. They need multiple CPU cores, 512 MB or more of memory, and often a GPU just to run smoothly. Trying to run huge tests with actual browsers chews up too many resources, making it expensive and unreliable to test higher traffic.

Illustration: Browsers at Scale vs Load Test Simulation

Plus, running a UI test at scale would be a massive waste. You’d be rendering the same UI a million times over without learning anything new about performance. Browsers are also a pain to control remotely, especially when you need to filter or tweak requests to hit (or avoid) certain resources. That filtering can lead to all sorts of problems because of JavaScript dependencies or rendering quirks when third-party calls are skipped during a load test. And frankly, browsers are terrible at telling you when they’re truly ‘ready’ because so much is happening asynchronously. Modern websites are almost never silent, always doing something in the background, making that “ready” state even harder to pin down.

So, to sum it up: performance test scripts are not test automation scripts. The latter are typically UI-based, work at a higher level, and aren’t as sensitive to the nitty-gritty, low-level changes like more or fewer requests, or little parameter tweaks. Performance scripts live in that nitty-gritty world.

What Makes a Script Break?

These are the things that will make your script crash and burn:

  • HTML Changes: Your CSS selectors or XPath expressions suddenly don’t work because the HTML changed.
  • Required Data Changes: The data you have to submit has changed, and your script isn’t sending the correct stuff.
  • Flow Changes: The application’s flow changed (like a checkout process becoming shorter or longer), and your script can’t follow it any longer.
  • Invalid Test Data: You’re using test data that’s just plain wrong, and the system can’t handle it at all.

What Makes a Script Incorrect?

These are the sneaky ones that run but give you bad intel:

  • Optional Data Changes: The data you can submit has changed. The script runs, but it’s not truly reflecting real user behavior.
  • Extra/Missing Requests: Your script is sending requests it shouldn’t, or missing requests it should be sending.
  • Wrong Order of Calls: The calls are happening, but not in the sequence they would in a real user journey.
  • Invalid Test Data: The data is technically valid, but no longer represents the state of the system. Because there is not enough validation in the system under test, things don’t break but rather go unnoticed.

Of course, these are all just examples. While a JSON format change might break scripts for you, it might just lead to incorrect testing for someone else.

How to Keep Your Scripts from Going Haywire

To avoid this constant headache between your scripts and the actual application, you need to bake script maintenance right into your development process. The biggest help here is communication. If performance testers know about feature changes, they can figure out what needs to be done. This means less scrambling to review every script all the time.

Here’s how to tackle those changes:

  • Talk About Changes: Especially when the front-end will see new or removed requests, logic updates, or changes to the data being collected or sent. Keep performance testers in the loop!
  • Disable Old Functionality: When something’s removed from the front-end, disable it on the back-end as well, so it can no longer work. This will make your scripts fail if they try to hit old endpoints, which is a good thing – it forces you to update the scripts.
  • Verify All Required Data: Always verify all the data needed for an action; don’t just leave things optional. This doesn’t just help your performance tests; it also boosts functional quality and security. See the sketch below.
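
As a sketch of the application side of that last point, here is a strict validator in plain Java (the field names are made up; a real application would use its framework’s validation layer). Rejecting missing and unknown fields alike means an outdated script breaks loudly instead of passing silently:

    import java.util.Map;
    import java.util.Set;

    public class CheckoutRequestValidator
    {
        private static final Set<String> REQUIRED = Set.of("addressId", "shippingMethod", "paymentToken");

        /** Throws if a required field is missing or an unknown field sneaks in. */
        public static void validate(Map<String, String> fields)
        {
            for (String name : REQUIRED)
            {
                String value = fields.get(name);
                if (value == null || value.isBlank())
                {
                    throw new IllegalArgumentException("Missing required field: " + name);
                }
            }
            for (String name : fields.keySet())
            {
                // An unknown field usually means the client - or the test script - is outdated.
                if (!REQUIRED.contains(name))
                {
                    throw new IllegalArgumentException("Unexpected field: " + name);
                }
            }
        }
    }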

Put plainly: every UI change to functionality that is already covered by performance test scripts should result in a broken script. This requires the performance tester to validate things carefully and the application engineer to set boundaries in the application that communicate clearly what is required and what is not wanted.

And yes, it is still possible that you cannot set these clear boundaries, because the application itself might not know about certain states, or because the APIs are not yours and you depend on their excessive flexibility.

Conclusion

Load testing scripts are different from normal automation, with unique requirements for creation and maintenance. Recognizing the fundamental differences between “broken” and “incorrect” scripts, and between performance testing and UI automation, is vital for achieving accurate and reliable performance insights. By integrating script maintenance into the development process through proactive communication and robust practices, teams can ensure their performance tests remain effective, reflecting the true state of their application under load.

So, there you have it. Load testing scripts are a different beast, and understanding their quirks is key to getting good performance insights. Keep these points in mind, and you’ll be much better equipped to handle them.

P.S. There is an option to run sensible but still resource-intensive load tests with XLT: it supports load testing with real browsers at scale. This is perfect for a blend of test automation and load testing. Of course, you are likely not going for hundreds of users, but rather a small set for a regression and sanity check. First line of defense, so to speak.

P.P.S. Depending on the framework and concepts you use, going headless with your application probably means a significant increase in scripting effort.

Automated Accessibility Testing with Google Lighthouse and Neodymium

TL;DR: Accessibility is a crucial aspect of web development that ensures everyone, regardless of their abilities, can access and use your website. Web Content Accessibility Guidelines (WCAG) provide a standard for web accessibility, and adhering to them is not only ethically important but also increasingly becoming a legal requirement.

One valuable tool for checking WCAG compliance is Google Lighthouse. It can help identify accessibility issues during development. And when integrated into your test automation project with tools like Neodymium, it can significantly streamline your accessibility testing and monitoring process.

What is WCAG and Why is it Important?

WCAG, or Web Content Accessibility Guidelines, are a set of standards developed to make web content more accessible to people with disabilities. These guidelines cover a wide range of disabilities, including visual, auditory, motor, and cognitive impairments. By following WCAG, you ensure that your website is usable by a larger audience. Approximately 15% of the world’s population has some form of disability.

WCAG compliance is also gaining traction legally, with governments in the US and EU enforcing accessibility standards. Non-compliance can lead to lawsuits – and, of course, to lost revenue and fewer visitors.

The Role of Test Automation

Automating WCAG checks using tools like Lighthouse can help find and fix accessibility flaws early in the development cycle, saving time and resources. It also reduces the burden on manual testers by automatically generating accessibility reports.

Most importantly, test automation maintains the achieved level of accessibility of the application through regression testing.

If you have an existing or planned test automation project, it is crucial to incorporate WCAG compliance into your testing strategy. This is why we have integrated Lighthouse into Neodymium, allowing you to easily extend your test cases with accessibility checks.
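
Neodymium ships its own integration, but as a rough illustration of the underlying idea, a test can simply drive the Lighthouse CLI (assuming it is installed via npm install -g lighthouse) and fail the build on a low accessibility score. The URL and the 0.9 threshold are placeholders:

    import java.nio.file.Files;
    import java.nio.file.Path;

    public class LighthouseAccessibilityCheck
    {
        public static void main(String[] args) throws Exception
        {
            Path report = Path.of("lighthouse-a11y.json");

            // Run Lighthouse against the page under test, accessibility audits only.
            Process lighthouse = new ProcessBuilder(
                "lighthouse", "https://www.example.com/",
                "--only-categories=accessibility",
                "--output=json",
                "--output-path=" + report,
                "--chrome-flags=--headless")
                .inheritIO()
                .start();
            if (lighthouse.waitFor() != 0)
            {
                throw new IllegalStateException("Lighthouse run failed");
            }

            // The JSON report carries a 0..1 score under categories.accessibility.score.
            // Crude string extraction for brevity; a real test would use a JSON parser.
            String json = Files.readString(report);
            int categories = json.indexOf("\"categories\"");
            int scoreKey = json.indexOf("\"score\":", categories);
            double score = Double.parseDouble(json.substring(scoreKey + 8).split("[,}]")[0].trim());

            if (score < 0.9)
            {
                throw new AssertionError("Accessibility score too low: " + score);
            }
        }
    }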

Continue reading Automated Accessibility Testing with Google Lighthouse and Neodymium

Neodymium 5.1 – Get more done faster

TL;DR: Neodymium 5.1 has been released with a host of impressive new features. These include full-page screenshots in reports, enhanced control over JSON assertions, accessibility testing via Google Lighthouse, and simplified session handling and URL validation.

The newest enhancements primarily focus on these key areas to boost testing efficiency and user experience.

Enhanced Reporting: We’ve significantly improved the quality and usability of test reports. You now get more informative and insightful results with features like full-page screenshots, enhanced JSON assertions for easier data comparison, and a streamlined report structure for improved readability.

Accessibility Testing: Recognizing the growing importance of web accessibility, we’ve integrated Google Lighthouse to automate accessibility checks. This allows you to easily identify and address accessibility issues within your web applications, ensuring compliance with WCAG standards.

Streamlined Workflows: We’ve introduced several features to streamline your testing processes. These include improved session handling for cleaner test environments, enhanced configuration options for greater flexibility, and robust URL validation to prevent unintended access to sensitive systems and ensure test stability.

These improvements are all about making your testing smoother and more dependable. You’ll get better results and learn more from them, so you can build even better software! So, let’s get into the details, shall we?

Continue reading Neodymium 5.1 – Get more done faster

A new magnetic force: Neodymium 5.0.0

TL;DR: We proudly announce the release of Neodymium 5.0.0. Neo is a Java-based test library for web automation that utilizes existing libraries (Selenide, WebDriver, Allure, JUnit, Maven) and concepts (localization, test multiplication, page objects) and adds missing components such as test data handling, starter templates, multi-device handling, and other small but useful everyday helpers.

Our R&D team has been busy brewing! Today we finally get back on the major release train and present the new Neodymium 5.0.0. It comes with a lot of new convenience features that give you more control and possibilities in your test automation: better browser control, improved configuration options, and a bunch of new annotations such as @DataItem, @WorkInProgress, and @RandomBrowser help you set up and configure your test automation to your specific needs. And even though a picture says more than a thousand words, a single screenshot often can’t show why the automation journey broke. To help in such cases, we introduced video recording, so you can see exactly what happened during the whole user journey.

See below for what you can find inside the big box of updated code:

Continue reading A new magnetic force: Neodymium 5.0.0

Neodymium – An Open Source Framework for Web Testing

TL;DR: Neodymium is a Java-based test library for web automation that utilizes existing libraries (Selenide, WebDriver, Allure, JUnit, Maven) and concepts (localization, test multiplication, page objects) and adds missing components such as test data handling, starter templates, multi-device handling, and other small but useful everyday helpers.

Motivation

As a company focused on quality assurance and testing, Xceptance always needs test automation software, especially end-to-end automation software. Several years ago we built a Firefox add-on that was designed to create and run browser automation. The tool was primarily used by people who didn’t necessarily have a strong background in software development. Today, the landscape is a bit different: Mozilla cut the cord on the APIs we were using and standard programming languages have largely taken over test automation because they are more flexible and less proprietary. These changes convinced us it was time to implement an idea we had already hatched, namely our own Open Source test automation project: Neodymium. It is written in and utilizes the Java platform, it is MIT licensed, and of course you will find it on GitHub: https://github.com/Xceptance/neodymium-library

Basis

There are many libraries out there to aid web automation in Java, so developers are faced with the task of choosing ones they like and somehow making them work together. On top of that, there are tasks that require some custom code to work properly. We identified the overall tooling problem mostly as a hurdle in getting started and setting up a project. Finally, there are always things missing such as test data handling, concurrency, and common patterns which you don’t want to have to develop yourself. We chose JUnit, Selenide, WebDriver, Maven, and Allure for the base tooling.

Selenide provides an easy-to-use API to control Selenium WebDriver. Allure offers good mechanics to generate useful reports based on the assertions and actions you perform throughout your test cases. Maven is used to set up the build and execution environment for our framework and all the test projects. We decided to use JUnit as the test runner since it is the de facto standard in the Java world, but we enhanced the capabilities of JUnit to do even more. At its heart, Neodymium is a JUnit runner that wraps default JUnit behavior and adds significant useful functionality to it.

Multiple Browsers

You want to be able to run the same tests for different resolutions and/or browsers to simulate the browsers most common among your users. Additionally, you need to be able to implement small differences within your test execution to address variants such as responsive designs or progressive web apps. So we added a way to run web browsers with different configurations and retrieve the current device type and resolution from within the test.

Neodymium provides a Java annotation that can be added to your test case in order to run different browser setups. Neodymium is very flexible in configuring browsers, allowing you to fully leverage Chrome’s device emulation offerings.
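
A sketch of what this looks like in a test class. The profile names are assumptions that would be defined in the project’s browser configuration, and import paths may differ between Neodymium versions – see the wiki for the authoritative setup:

    import org.junit.Test;
    import org.junit.runner.RunWith;

    import com.codeborne.selenide.Selenide;
    import com.xceptance.neodymium.NeodymiumRunner;
    import com.xceptance.neodymium.module.statement.browser.multibrowser.Browser;

    @RunWith(NeodymiumRunner.class)
    @Browser("Chrome_1024x768")   // assumed desktop profile
    @Browser("Chrome_iPhone")     // assumed emulated mobile profile; the test runs once per profile
    public class HomepageTest
    {
        @Test
        public void openHomepage()
        {
            Selenide.open("https://www.example.com/");
            // assertions against the responsive layout would go here
        }
    }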

Test Data

Another common task is the execution of a test case with different data sets, such as testing address forms with all the relevant variations. The basic idea is to have test data and data sets in structured files next to your code, preferably as JSON, XML, property style, or simply CSV. Hence, we introduced an easy-to-use API to access the current data set and retrieve basic types from it. Furthermore, you can configure specific scenarios running only a subset or even no data set at all by adding proper annotations. To complete the picture, Neodymium supports test data on a global and package-level scope.
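
A sketch of the idea, with made-up field names; the exact data file naming and lookup conventions are documented in the wiki and may differ between versions:

    import org.junit.Test;
    import org.junit.runner.RunWith;

    import com.xceptance.neodymium.NeodymiumRunner;
    import com.xceptance.neodymium.util.DataUtils;

    // A JSON file next to the test class could hold data sets such as:
    // [ { "firstName": "Jane", "zipCode": "04229", "expectValid": true }, ... ]
    @RunWith(NeodymiumRunner.class)
    public class AddressFormTest
    {
        @Test
        public void fillAddressForm()
        {
            // The test is multiplied to run once per data set;
            // these calls return values from the current set.
            String firstName = DataUtils.asString("firstName");
            String zipCode = DataUtils.asString("zipCode");
            boolean expectValid = DataUtils.asBool("expectValid");

            // ... drive the form and assert depending on expectValid ...
        }
    }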

Localization

Another recurring topic in modern software projects is localization. Most websites that are in need of test automation also support several locales. We decided to provide an out-of-the-box solution.

Neodymium’s localization feature makes use of a central translation file written in YAML format. YAML helps to structure the translations. Additionally, we implemented a simple way to override specific translations for different locales. The localized text can be easily retrieved using Neodymium API methods that are globally available.
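
A sketch of the retrieval side; the YAML layout and the key are illustrative assumptions, so check the wiki for the exact file structure:

    import com.xceptance.neodymium.util.Neodymium;

    public class LocalizationExample
    {
        public void checkHomepageTitle()
        {
            // localization.yaml (illustrative):
            // default:
            //   homepage:
            //     title: Welcome
            // de_DE:
            //   homepage:
            //     title: Willkommen
            //
            // Returns the translation for the currently configured locale,
            // falling back to the default section when no override exists.
            String title = Neodymium.localizedText("homepage.title");
        }
    }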

Development Support

As it is essential to understand what your test is doing, we added a feature that enables you to slow down the test execution and highlight elements that match the current selector. Since you can chain selectors using Selenide, any chain of elements is also represented by the highlighting. With this feature activated, a developer can track down the cause of test failures much more easily. In addition, we provide information on how to set up logging in your project should you need that. Finally, we decided to use the Page Object pattern to organize the website-related code to reduce the maintenance effort and increase reusability.   

Reporting

Allure is a widely used framework to generate reports. When using Neodymium with Selenide, your automation code also contributes report information. Your test classes and methods are listed as well as detailed Selenide automation commands. In case of errors, additional details such as screenshots and source code of the page in question are available. Neodymium also provides means to structure code blocks for reporting purposes.
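
Neodymium wires much of this up for you, but as a rough sketch of the underlying mechanics with plain Selenide and Allure (the selector names are made up):

    import static com.codeborne.selenide.Condition.visible;
    import static com.codeborne.selenide.Selenide.$;

    import com.codeborne.selenide.logevents.SelenideLogger;
    import io.qameta.allure.Allure;
    import io.qameta.allure.selenide.AllureSelenide;

    public class ReportingExample
    {
        public static void initReporting()
        {
            // Register once so every Selenide command shows up as a report step.
            SelenideLogger.addListener("AllureSelenide", new AllureSelenide());
        }

        public static void searchForProduct()
        {
            // Group low-level commands into a named block to structure the report.
            Allure.step("Search for a product", () -> {
                $("#search-field").setValue("shirt").pressEnter();
                $(".search-results").shouldBe(visible);
            });
        }
    }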

Continuous Integration

Implementing continuous integration principles delivers more reliable software by increasing efficiency, and automation is nothing without a continuous integration environment. Yet in almost every development cycle you will eventually end up needing varied settings due to differences in your setup, which can get complicated. Neodymium provides support for extra configuration files during development to override the standard production settings as needed. Furthermore, the framework supports overriding properties that change the configuration of your test execution by setting environment variables or simply passing Java arguments.

Because automation is supposed to run quickly, Neodymium provides support for parallel test execution and also demonstrates that setup as part of the sample test suite.

Documentation and Templates

Does Neodymium address some of your test automation challenges? Does it sound like a good entry point for your test automation?

Neodymium is hosted on GitHub (https://github.com/Xceptance/neodymium-library), where the accompanying project wiki (https://github.com/Xceptance/neodymium-library/wiki) provides extensive documentation to help you get started and answer your questions.

You might also want to take a look at the comprehensive example projects using Neodymium with Cucumber (https://github.com/Xceptance/neodymium-cucumber-example) or plain Java (https://github.com/Xceptance/neodymium-example). We’ve even provided a template project (https://github.com/Xceptance/neodymium-template) to get you started automating in no time.

License

Neodymium is licensed under the MIT License.

Who Are We

We are Xceptance, a software testing company with strong commerce knowledge and projects with customers from all around the world. Besides Neodymium, we have developed Xceptance Load Test (XLT), a load and performance test tool that is available free of charge and offers an extensive range of features to make the tester’s and developer’s life easier.

If you are looking for test automation that also covers the performance side of life, take a look at XLT. You can write and run load tests with real browsers including access to data from the Web Performance Timing API. In case browsers are too heavy, XLT has other modes of load testing to offer as well.

We offer professional support for Neodymium as well as implementation and training services.

Firefox 57 Changes and XLT

Firefox 57 is going to deliver a fundamental change that will affect XLT Script Developer.

As you might know, Mozilla decided to completely remove the support for XUL/XPCOM-based extensions (aka legacy extensions) in favor of extensions built upon the WebExtension API. This cut will take place on November 14, 2017, with the release of Firefox 57. Additionally, Mozilla will refuse to sign updates of legacy extensions at some point in the near future, although the exact date is not determined yet. See the Mozilla Add-ons Blog for an up-to-date timeline.

The consequence of this breaking change is that XLT Script Developer cannot be installed in Firefox 57 and higher. Also, an already installed XLT Script Developer cannot be used any longer.

We at Xceptance have been aware of this development. Over the course of the last year we have been busy evaluating several alternative options. As you might recall, we also conducted a survey back in the spring to collect your feedback.

Based on our findings and your input and after long discussions, we came to the conclusion that the feature set and comfort that had been offered by XLT Script Developer and its way of writing web automation cannot be recreated using the alternative Firefox APIs.

Therefore, we regret to announce that Script Developer is discontinued. Consequently, we don’t recommend starting new test projects based on Script Developer and XML script test cases. To be able to continue to use the features of XLT and the advantages it offers, we suggest you use XLT’s Java-based API.

If you are able to use Firefox 56 or Firefox 52/ESR, maybe in parallel to an updated version of Firefox, you can continue to work with XLT Script Developer as well as execute all your test cases as you have been.

We are aware that this decision might disappoint many of you and may leave open questions. For more information on what shaped this decision and what your options are for maintaining your existing XML script test suites in the future or migrating them to another base, please see the Q&A section below.

Q&A

Why did Mozilla decide to abandon legacy extensions?

By design, legacy extensions have not only full access to your local file system but also to the entire browser and thus to all the pages you visit. This has been causing privacy and security issues and hence Mozilla decided to abandon the API to address these problems.

Why isn’t XLT Script Developer being ported to the WebExtension API?

The possibilities offered by the WebExtension API are very limited. One such limitation, and the most important one, is that access to the local file system involves a user interaction for each and every file to save or read. This would make Script Developer simply impossible to use.

Further restrictions apply. Most of them are related to accessing and manipulating session and browsing data such as cookies or cache. In the end, Script Developer would only be able to support a very reduced feature set.

And last but not least, the outcome of our customer survey revealed needs which cannot be met by such a modest visual, non-programming approach to writing tests. See the next question for more details on the feedback we received.

Why won’t XLT Script Developer be ported to another platform?

The outcome of our survey was that users wanted a tighter integration with Java/IDEs, more commands, more ways to customize things, better flow control, and more flexibility with test data. At the end of the day, this is a full programming approach and turning the XLT Script Developer into another programming environment couldn’t address all these points.

We also believe that the concept of XML script test cases, with their limited capabilities, is no longer appropriate for modern testing needs. Therefore we have decided to give it up in favor of the opportunities a real programming language provides.

Can I continue to use XLT Script Developer with Firefox 56?

Yes, but at your own risk. Remember that your browser cannot be updated and therefore will not receive any security fixes. Furthermore, it may not be possible to install any Script Developer updates as this Firefox version accepts signed extensions only. And Mozilla might stop signing legacy extensions any day. See below for a better solution.

Can I continue to use XLT Script Developer with Firefox 52/ESR?

Yes, absolutely. That is the recommended way to go, at least for the next months. First, Firefox 52/ESR (Extended Support Release) will be updated with security fixes until May 2018. If you continue to use Script Developer after that date, though, you do so at your own risk. Make sure that you use this browser version for scripting only.

Furthermore, Firefox 52/ESR can be tweaked to install legacy extensions even if they are not signed by Mozilla. This way, you are still able to install Script Developer updates in case Xceptance provides some in the future.

Will the XLT framework still be able to replay XML script test cases?

Yes, XLT will continue to interpret and execute your existing XML script test cases as JUnit tests via their wrapper classes.

How to migrate existing projects?

Export all your existing XML script test cases to Java (Scripting API). From this point, you maintain your test cases in your favorite Java IDE instead of the XLT Script Developer. Since the concepts and commands are the same as in Script Developer, you will be on top of things quickly.

Instead of running your XML test cases in Script Developer, you would now run your Java-based tests as JUnit tests using your preferred WebDriver either from your IDE or using a continuous integration system.

Since your code base is plain Java now, you are free to add all the things that you might have missed in the past.
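
To make that concrete, here is a rough, framework-agnostic illustration of what a migrated test boils down to: plain JUnit plus WebDriver. The page and element names are hypothetical; the actual export uses XLT’s Scripting API, which keeps the familiar Script Developer commands:

    import org.junit.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class TSearch
    {
        @Test
        public void search()
        {
            WebDriver driver = new FirefoxDriver();
            try
            {
                driver.get("https://www.example.com/");
                driver.findElement(By.id("search")).sendKeys("shirt");
                driver.findElement(By.id("search-button")).click();

                // Checks formerly expressed as assertText commands become plain Java assertions.
                if (!driver.getTitle().contains("Search Results"))
                {
                    throw new AssertionError("Unexpected page title: " + driver.getTitle());
                }
            }
            finally
            {
                driver.quit();
            }
        }
    }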

Note that XLT 4.10.0 will ship with some enhancements for Java-based test cases. For instance, Script Developer will provide an alternative export template that produces more compact code. Additionally, writing test cases directly in Java will be more comfortable as well. Stay tuned for the upcoming release and find all the details in the release notes.

Make sure you subscribe to our low-volume XLT release and news mailing list.

How long will you release XLT Script Developer updates?

TBD. Future updates of Script Developer will be bug-fix releases only (shipped as unsigned extensions once Mozilla stops signing legacy extensions). Don’t expect any new features.

Are there any other options?

Yes. There are forks of Firefox that promise to continue supporting legacy extensions while being kept up-to-date at the same time. For instance, Script Developer installs and runs nicely in Waterfox. However, we cannot predict how long this will actually work.

Multi-Browser Support for Test Automation with XLT

Summary

In today’s post we will discuss the steps necessary to enhance an XLT-based test suite with multi-browser support. We will show how to tag your test cases to conveniently run them in different environments and execute the test suite in a local or remote fashion.

Introduction

Xceptance maintains an MIT-licensed test suite on GitHub which demonstrates functional testing for large-scale projects. With the suite we’ve put an emphasis on clear structure, naming, and test case organization. While it targets Demandware’s SiteGenesis storefront at heart, the underlying concepts and mechanisms are valuable for everyone building test suites for comparable web applications with XLT. Besides being a template for test automation and best practices in test suite design, it can be a ready-made starting point for your very own projects. We regularly utilize it and want to encourage you to explore, employ, and contribute.

A regular challenge in testing ecommerce applications is the variety of browsers and platforms available today. As you probably know, XLT, the test automation and load testing framework from Xceptance, is based on Selenium browser automation and the WebDriver API, so supporting multiple browsers comes naturally. This blog post will demonstrate how XLT streamlines different testing environments directly in your test suite. You will learn how to execute your tests locally and remotely with the help of Sauce Labs and similar automated testing platforms. Along the way you will pick up some details about XLT as well as Script Developer, and quickly find yourself equipped with a ready-to-use multi-browser test suite example.
Continue reading Multi-Browser Support for Test Automation with XLT

Special Characters in Script Developer

Recently we received a support request regarding special characters in Script Developer. Perhaps other XLT users stumble across a similar requirement, so it’s a good idea to make the discussion available to the public.

First of all, some bad news: Up to now, Script Developer does not have explicit support for special characters, such as Line Feed (\n), Horizontal Tab (\t), Backspace (\b) or similar. For example, typing multiple lines of text – each line delimited by a newline character – into an element on your page is not possible just like that. Upon loading your script, XLT Script Developer normalizes all white-space characters contained in the target or value field of any command.

Of course, we don’t want to leave you out in the rain but provide a feasible solution.
Continue reading Special Characters in Script Developer

Conditional Expressions in XLT

Motivation

Did you ever have to create multiple versions of your test cases to accommodate small differences in your test objects? Did you find yourself looking for a trade-off between good testing practice and minimal project complexity? The following blog post reflects on this challenge and introduces you to a potential solution: conditional expressions.

Introduction

Xceptance introduced XLT 4.6, its test automation and load testing tool, in February 2016, and with it we brought you conditionals. Today we want to shed some light on this new feature, the discussion that came along with it, and why we finally decided to introduce it. This blog post will equip you with everything required to employ conditional expressions in your test scripts.

In computer programming, a condition or conditional expression performs an action depending on whether a given statement (the condition) evaluates to true or false. This lets the programmer execute a part of the program only if certain circumstances are met. Now don’t worry: you do not need to become a full-fledged programmer to create your test cases with XLT Script Developer. But you should not pass up the possibilities this new feature offers.

The Challenge

In testing, you typically want your flow of execution to be linear, deterministic, and transparent. The individual execution steps of your test case should be well-defined and yield the same results in a constant environment. If one execution step fails – e.g., an assertion does not check out – the whole test case breaks and evaluates to failed. Run, rinse, and repeat.

On the contrary, often enough your real-world test cases need to cater to various scenarios. Think multi-region support of your page, for example. Area-specific content and functionality can quickly bring you into a catch-22 situation: follow good practice in test case design, but deal with complexity and organizational nightmares in your test suite. Tiny differences in your test objects force you to keep multiple versions of your (already lengthy) test cases. Farewell, maintainability!
Continue reading Conditional Expressions in XLT