Tag archive of » Software «

Our Top Software Testing Trends 2013 – Continued

Monday, 27. May 2013 11:25

Top Software Testing Trends 2013
Image © Juja Schneider

Early in 2013, we compiled a personal list of relevant software testing trends. Having received great feedback on this, we would like to add a couple of other interesting aspects today.

We believe that the domain of software testing is not and should not be subject to short-lived trends and fashions, since new approaches always need to prove worthwhile before becoming established practice. So when we talk about trends here, we refer to issues that didn’t just come up recently but have been around and widely discussed for a while now. The subjects we introduce below are ones we feel our customers care about and that are relevant in our own daily work. Please keep in mind that the list is thus once again a very personal one, deriving from our experience in web and e-commerce as well as small and midsize test projects.

Cloud Testing

The cloud enhances testing in that it offers new tools that help broaden the view, cover more, set up faster, and deal with data and traffic volumes never seen before. We now have access to cheap hardware, we can set up test environments quickly and hibernate them, emulate a larger number of configurations, easily clone setups, or add machines.

Cloud testing is especially suited for scenarios such as:

  • load testing where the load drivers are in the cloud and dynamically scaled
  • load testing where your system under test is in the cloud, probably because the live system is or will be cloud-based as well
  • functional testing where you have plenty of systems available to try configurations or give every tester their own hardware to avoid the famous all-on-one-system test problem
  • several systems running automation around the clock, just to test stability, for instance
  • preserving copies of data or installations because it might be necessary to compare to older versions from time to time
  • having different operating systems available and switching quickly between them

Of course, the cloud also has disadvantages. It can turn into a mess because more systems mean more management, and more management can mean more errors, more mistakes, and more resources just to keep everything running.

Crowd Testing

The end user is the one that matters. If you create an application for a large number of end users – why have it tested by a single person or a small test office? Crowd testing delivers results from the people who are actually going to work with your application.

You are creating a mobile application and can’t cope with the diversity of devices and operating systems? You feel that problems don’t show up in simulators but only in the real world? Crowd testing can be the perfect solution for such problems.

Keep in mind that crowd testing is not the answer to everything, though. Confidential and sensitive projects won’t work that way. Additionally, you will have to find a good way to get valuable feedback from strangers. So crowd testing might only be a small component of your overall testing strategy.

Web Security Testing

Hijacked accounts, hacked applications, stolen user data: it’s in the headlines again and again. Customers are scared, and sensitive data ends up in the hands of competitors or becomes a powerful tool for criminals – you really want to avoid that.

Security testing will become increasingly important in the data-driven world. Information and data are money you don’t want to lose. Thus, dealing with authentication, session management, cross-site scripting, etc. should be among your priorities if you’re providing a web application.

Keep in mind that security is something that cannot be tested into an application. Security has to be a design decision.
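
To make this a bit more concrete, here is a minimal sketch of a reflected cross-site scripting smoke test in plain Java and JUnit 4. The URL and the query parameter are made-up examples, and a single check like this is only a tiny piece of a real security test, not a replacement for a proper audit.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Scanner;

import org.junit.Assert;
import org.junit.Test;

public class ReflectedXssSmokeTest
{
    // Hypothetical search URL and parameter - adjust to your application.
    private static final String SEARCH_URL = "http://localhost:8080/search?q=";
    private static final String PAYLOAD = "<script>alert('xss')</script>";

    @Test
    public void searchDoesNotEchoUnescapedMarkup() throws Exception
    {
        final URL url = new URL(SEARCH_URL + URLEncoder.encode(PAYLOAD, "UTF-8"));
        final HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setConnectTimeout(5000);
        connection.setReadTimeout(10000);

        final InputStream in = connection.getInputStream();
        final Scanner scanner = new Scanner(in, "UTF-8").useDelimiter("\\A");
        final String body = scanner.hasNext() ? scanner.next() : "";
        scanner.close();

        // If the raw payload comes back verbatim, user input is reflected unescaped.
        Assert.assertFalse("Response echoes the unescaped script payload", body.contains(PAYLOAD));
    }
}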

Data-Driven Testing

Data-driven testing is all about variation of data. Get rid of the small, fixed set of test data; make it wide, make it random, make it flexible.

In test automation, data-driven testing has been a relevant topic for quite a while. Keeping your test data separate from the actual test scripts leads to more flexibility and easier maintenance of the scripts. Data can be put into databases, spreadsheets, data pools, etc.

Have you ever had to update all of your test scripts just to test with a different set of data? Then you might be interested in data-driven testing.
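
As a small illustration, here is a sketch using the JUnit 4 Parameterized runner. The data rows and the tax-rate check are invented; in a real project the rows would come from a CSV file, spreadsheet, or database instead of being hard-coded, so the scripts stay untouched when the data changes.

import java.util.Arrays;
import java.util.Collection;

import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class TaxRateDataDrivenTest
{
    @Parameters
    public static Collection<Object[]> data()
    {
        // Invented example data - in practice, read these rows from a file or database.
        return Arrays.asList(new Object[][] { { "DE", 19 }, { "GB", 20 }, { "US", 0 } });
    }

    private final String country;
    private final int expectedRate;

    public TaxRateDataDrivenTest(final String country, final int expectedRate)
    {
        this.country = country;
        this.expectedRate = expectedRate;
    }

    @Test
    public void taxRateMatchesCountry()
    {
        Assert.assertEquals(expectedRate, rateFor(country));
    }

    // Stand-in for the real system under test.
    private static int rateFor(final String country)
    {
        if ("DE".equals(country)) { return 19; }
        if ("GB".equals(country)) { return 20; }
        return 0;
    }
}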

Reliability, Stability, HA… Testing

In a world where everything has to be available 24/7, where your customers are anywhere and not just in one country, keeping your applications running reliably is a key to success. Reliability covers a lot of topics here: stability, performance, high availability, backups… you name it.

So testing is not just about user-facing functionality, it’s about the system behind it as well. You can’t take it for granted that it will just run. What about a failing database? What about a failing application server? What about full disks? What about failing hardware of any kind? What about faulty software that requires a rollback of some sort?

You might think this is all DevOps… think again: if you haven’t tested it, it probably won’t work. Question everything, but ask the right questions. Additionally, the lines between departments are fading, so you have to know how the deployment works, what the hardware setup is, and so on.
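
A rough sketch of what such a test could look like, with all names invented: a hypothetical helper stops the database, and the test checks that the application answers with a controlled error page instead of hanging or exposing a stack trace.

import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Assert;
import org.junit.Test;

public class DatabaseOutageTest
{
    // Hypothetical URL of the application under test.
    private static final String HOME_URL = "http://localhost:8080/";

    @Test
    public void applicationDegradesGracefullyWithoutDatabase() throws Exception
    {
        Infrastructure.stop("database");
        try
        {
            final HttpURLConnection connection = (HttpURLConnection) new URL(HOME_URL).openConnection();
            connection.setConnectTimeout(5000);
            connection.setReadTimeout(10000);

            // Expect a controlled answer (e.g. 503 plus a friendly error page), not a hang.
            Assert.assertEquals(503, connection.getResponseCode());
        }
        finally
        {
            Infrastructure.start("database");
        }
    }

    // Hypothetical stand-in: in a real setup this would stop and start the service,
    // for instance via SSH or a cloud API.
    static final class Infrastructure
    {
        static void stop(final String service) { }
        static void start(final String service) { }
    }
}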

Deployment Testing

Of course, this can easily be subsumed under reliability testing, but it is often viewed separately because it’s a difficult and error-prone area. How do you deploy new software while keeping downtimes minimal? How do you deal with deployment failures? How can you quickly roll back to older versions?
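
As an example of the first question, here is a minimal sketch of a downtime probe that polls the application once per second during a deployment and reports how many requests failed afterwards. The URL, interval, and duration are assumptions.

import java.net.HttpURLConnection;
import java.net.URL;

public class DeploymentDowntimeProbe
{
    public static void main(final String[] args) throws Exception
    {
        // Assumed status URL and a 10-minute probing window.
        final URL url = new URL("http://localhost:8080/status");
        final long end = System.currentTimeMillis() + 10 * 60 * 1000;
        int probes = 0;
        int failures = 0;

        while (System.currentTimeMillis() < end)
        {
            probes++;
            try
            {
                final HttpURLConnection connection = (HttpURLConnection) url.openConnection();
                connection.setConnectTimeout(2000);
                connection.setReadTimeout(5000);
                if (connection.getResponseCode() != 200)
                {
                    failures++;
                }
            }
            catch (final Exception e)
            {
                // Connection refused, timeout, etc. count as downtime.
                failures++;
            }
            Thread.sleep(1000);
        }

        System.out.printf("%d of %d probes failed during the deployment window%n", failures, probes);
    }
}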

This all has to be answered and tested. So once again, the lines between the test department, engineering, and operations or DevOps are fading. This might be the reason why big players such as Google did away with the classical three-tier approach in software development: development, test, operations. It just doesn’t work anymore.

Feel free to add additional ideas!

Topic: Software, Testing, Testing Culture | Comments (1) | Autor:

The Art of Reading Performance Test Charts

Monday, 22. April 2013 9:36

Powerful load and performance test tools don’t just retrieve pages of your website randomly with zillions of simultaneous users; they also cover realistic scenarios simulating real-world user behavior. It’s a given that they can deliver lots of useful information and plot interesting charts. To fully take advantage of these benefits, however, you need to be able to interpret this information and draw the right conclusions.

It is this need for the correct interpretation of test results, the mapping of everything you see against actual application behavior, that makes performance and load testing a non-trivial task. It takes a lot of experience to decide on the right actions, make the right assumptions, or simply come up with a reasonable explanation of why something happened this way and not the other.

In today’s article, we’d like to present a couple of charts displaying typical response time patterns and discuss what they could indicate.

Disclaimer: Of course, the reasons for a certain behavior vary a lot, depending on your application and testing. However, as there’s no fixed manual for the interpretation of load testing charts, we want to provide you with a couple of basic guidelines to help you get better at interpreting them yourself and make the most of your test results. Feel free to comment on whether or not you agree with the ideas and explanations we come up with.

The Warm-up

These charts might indicate a system with a cold cache, for instance, when the system has just been started and the caches aren’t filled yet.

The basic characteristics of such a behavior are high response times in the beginning, followed by gradually lower response times until eventually a certain degree of runtime stability is reached. This time frame is often referred to as the system’s warm-up period. Throughout this period, a couple of things can happen under the surface. If you know the system under test well, you’ll probably come up with the following: database and file system caches are filled, proxies learn about the data and store them, the system under test scales up because it sees traffic, page snippets are cached so that the computing overhead decreases… you name it.

Also keep in mind that it might be the testing process itself that causes such a response time profile. If the system is perfectly warmed up and you hit it, the traffic you send might be too uniform in the beginning. In that case, randomization kicks in so that the traffic eventually distributes better over time. Furthermore, take into consideration that your load generation software and hardware are possibly not warmed up either.
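
A tiny, made-up example of why the warm-up period should be excluded when you aggregate numbers: the same handful of samples produces a very different 90th percentile with and without the first three minutes.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class WarmupFilterExample
{
    public static void main(final String[] args)
    {
        // Made-up samples: { seconds since test start, response time in ms }
        final long[][] samples = {
            { 5, 2400 }, { 20, 1900 }, { 60, 1100 }, { 120, 600 },
            { 200, 310 }, { 300, 290 }, { 400, 305 }, { 500, 295 }
        };
        final long warmupSeconds = 180; // assumed warm-up period

        System.out.println("P90 including warm-up: " + percentile(responseTimes(samples, 0), 0.90));
        System.out.println("P90 excluding warm-up: " + percentile(responseTimes(samples, warmupSeconds), 0.90));
    }

    private static List<Long> responseTimes(final long[][] samples, final long skipFirstSeconds)
    {
        final List<Long> result = new ArrayList<Long>();
        for (final long[] sample : samples)
        {
            if (sample[0] >= skipFirstSeconds)
            {
                result.add(sample[1]);
            }
        }
        return result;
    }

    private static long percentile(final List<Long> values, final double p)
    {
        Collections.sort(values);
        final int index = Math.max((int) Math.ceil(p * values.size()) - 1, 0);
        return values.get(index);
    }
}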

The Caching

These charts depict either a typical cache-clean pattern or a job pattern. In the case of a cache clean, system-internal caches expire every half hour. If that’s not the case, the charts may indicate a running background job draining power from the database or consuming lots of system bandwidth.

Both charts display the same test, executed for different lengths of time. While two spikes could signify a random event (even though the temporal distance of 30 minutes is suspicious), the longer test run seems to confirm our first assumption: something is going on every half hour.

In any case, make sure that such behavior is not produced by the test machines themselves, for example, because they’re busy writing or backing up data.

The Spiking

This is what we call a forest of spikes: many spikes that don’t seem to follow a comprehensible pattern; longer runtimes just occur occasionally, often caused by requests accessing certain data or URLs that produce long runtimes. To solve that mystery, you have to dig into the results in more detail, find the calls behind the spikes, and derive a pattern from the information you find. Often you’ll come across similar URLs, request parameters, or maybe response codes. Don’t forget any application logs you might have access to, such as web server, error, information, or debug logs. In a perfect world, your application under test offers the necessary tools to get to the bottom of this problem.

XLT lets you easily find this information. All test result data are accessible as CSV files that are quickly readable and documented. Feel free to work with this information and go beyond the scope of the reports available.
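
For example, a short helper like the following groups slow requests by URL so that a pattern becomes visible. The file name, column layout, and threshold are assumptions – check the documentation of your result files and adjust accordingly.

import java.io.BufferedReader;
import java.io.FileReader;
import java.util.HashMap;
import java.util.Map;

public class SlowRequestGrouper
{
    public static void main(final String[] args) throws Exception
    {
        // Assumed threshold and file name; the column layout below is also an assumption.
        final long thresholdMs = 5000;
        final Map<String, Integer> slowRequestsPerUrl = new HashMap<String, Integer>();

        final BufferedReader reader = new BufferedReader(new FileReader("timers.csv"));
        try
        {
            String line;
            while ((line = reader.readLine()) != null)
            {
                final String[] columns = line.split(",");
                if (columns.length < 6)
                {
                    continue; // skip header or malformed lines
                }
                try
                {
                    // Assumed layout: column 4 = runtime in ms, column 5 = URL.
                    if (Long.parseLong(columns[4]) >= thresholdMs)
                    {
                        final Integer count = slowRequestsPerUrl.get(columns[5]);
                        slowRequestsPerUrl.put(columns[5], count == null ? 1 : count + 1);
                    }
                }
                catch (final NumberFormatException e)
                {
                    // not a data line, ignore
                }
            }
        }
        finally
        {
            reader.close();
        }

        for (final Map.Entry<String, Integer> entry : slowRequestsPerUrl.entrySet())
        {
            System.out.println(entry.getValue() + "\t" + entry.getKey());
        }
    }
}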

The worst outcome here is a non-identifiable pattern and no information on the server side as to what might have happened. In such a case, you have to repeat the test and narrow down your test setup to exclude as many variables as possible and find the cause. This is also a good time to ask for development or tech-ops support.

The 3rd-party Calls

The first chart is typical for issues with 3rd parties, especially in the field of e-commerce. We’re not talking about direct calls to 3rd parties here, such as analytics vendors or recommendation engines, but about calls from one server to another. Thus, the response time we see is the combined response time of two systems. Of course, it’s good to know the areas where 3rd-party calls typically happen, but you have to know the application under test anyway to test it efficiently. So when the final order steps start to act weird, you can easily narrow down the potential reasons.

The second chart looks more like the cache clean or expiration problem described above, but since you know the application, you also know that this area doesn’t use the typical caching logic but is highly dynamic instead. This means that the errors occurring every 50 minutes point in a different direction: since we know that 3rd parties are attached and called during shipping, we can conclude that the 3rd party failed on us.

Verdict

Knowing typical response time patterns helps you pin down a problem so that you can give hints to development or further shape the path of testing. If you can read charts and draw the right conclusions, or at least know which questions to address, you’ll be ahead of the crowd. Be aware that knowledge of the system under test is very important – producing and measuring a certain load doesn’t make much sense if you’re not able to actually interpret and explain what you’ve measured. Always remember: 42 is not a valid answer for everything. :)

Topic: Performance, Testing, XLT | Comments (0) | Autor:

XLT 4.0.1 released

Saturday, 5. February 2011 5:10

Today, we released XLT 4.0.1. This is a minor update to XLT 4.0 that fixes five defects. Additionally, it provides some documentation enhancements. You can download the release from here: https://lab.xceptance.de/releases/xlt/4.0.1/. At the same location, you will find the documentation as well as the release notes.

Topic: XLT | Comments (0) | Autor:

Xceptance LoadTest 4.0 is available

Thursday, 13. January 2011 18:17

We just released Xceptance LoadTest 4.0. This release of our load test software comes with some really nice feature enhancements to make your regression testing easier, and we stick to our general software approach: one tool for regression and load testing, one set of scripts for both purposes.

Script Developer

As an alternative to writing test cases in Java, you can now use the XLT Script Developer to create script test cases. Script test cases are based on a simple syntax and a reduced set of operations, which makes them a perfect fit for non-programmers. Only the Script Developer, which is an extension to Firefox, is necessary to create, edit, and manage basic script test cases.

To create a new script test case, the test designer simply uses the application under test. All interactions with the application are recorded in the background and stored to an XML script file as a sequence of script commands. While recording, assertion commands to validate the web pages may be inserted manually. From the Script Developer, script test cases can be replayed in Firefox at any time to quickly check whether the test case still runs successfully.

Existing script test cases can be modified later on, for example, to add new or delete obsolete commands. Common command sequences, which could be reused in other test cases as well, can be refactored to parameterizable script modules. Finally, any recorded value can be extracted out of the script into a test data file to separate test data from script code.

Script files can also be run outside of the browser, via the XLT framework, which simulates a headless browser. This mode is suitable for unattended test case execution during functional or load tests. When saving scripts, the Script Developer also creates JUnit test case classes as “wrappers” around script test cases, which serve as a bridge between the XLT framework and the script world. This way, from the framework’s point of view, script test cases are in no way different from test cases written in Java.

More Data to Query

For improved test accuracy, you can now query the request and response data and run assertions on them. This permits checks on the communication level because not all requests are reflected in the DOM tree.

Improved EC2 Handling

AWS (Amazon Web Services) added the ability to tag EC2 resources to simplify the administration of your cloud infrastructure. As a form of metadata, tags can be used to create user-friendly names and improve coordination between multiple users. The XLT EC2 administration tool ec2_admin features an additional menu that lets you select your EC2 resources based on the tag name.

Better Automation

To improve test automation, we added the ability to pass properties on the mastercontroller command line. Additionally, the test definition file for the test suite can now be redefined on the command line as well.

Faster Work Flow

When a test goes wrong or logging is turned up, the data to download from all agents can be pretty big. To get results faster or more selectively, you can now decide how much data you want to download.

JDK Compatibility

Beginning with v4.0, XLT requires a Java 6 virtual machine or above to run. Java 5 is no longer supported. The reason is the end-of-life announcement for JDK 5.

Misc

We refreshed HtmlUnit and updated it to version 2.8, Ruby got updated to 1.5.1, and WebDriver is now v2.0a6. The event API got simplified and is now easier to use.

Where to get it

More information about the release, the quick start guide, and the manual can be found in the release area. Of course, the full download of XLT 4.0 is available there, too.

We are looking forward to your feedback, comments, and of course… Happy testing!

Topic: Performance, Software, Testing, XLT | Comments (0) | Autor:

Some nice reading about HBase

Tuesday, 16. March 2010 21:35

If you want to stay in touch with cutting-edge technology in terms of database scalability, high-traffic sites, and large storage volumes, you should read these two articles on the new hstack.org blog.

Cosmin Lehene wrote two excellent articles on Adobe’s experiences with HBase: Why we’re using HBase: Part 1 and Why we’re using HBase: Part 2. Adobe needed a generic, real-time, structured data storage and processing system that could handle any data volume, with access times under 50ms, with no downtime and no data loss. The article goes into great detail about their experiences with HBase and their evaluation process, providing a “well reasoned impartial use case from a commercial user”. It talks about failure handling, availability, write performance, read performance, random reads, sequential scans, and consistency.

(via High Scalability)

Topic: Java, Software Development | Comments (0) | Autor:

Errors with Success

Wednesday, 3. March 2010 14:10

Today, another entry in the series “Successful Errors” or “Nonsensical Dialogs”. Found in Nautilus on Ubuntu 9.10.

Topic: Bugs in the Wild, Linux, Software | Comments (0) | Autor:

Google is more than software

Tuesday, 10. November 2009 23:36

Today I found a nice article at CNET: Google shifts software value to operations, away from IP. The article nicely explains how Google defines itself and why Google can simply give away large parts of its software for free:

Google is what Google does with the software, and not the software itself.

Give it a read and form your own opinion.

Topic: Software | Comments (0) | Autor:

Skype was apparently programmed in Delphi

Thursday, 16. July 2009 15:24

Skype seems to have been written in Delphi. Yes, the development tool that was made by Borland. Back then… a long, long time ago.

Interesting…

Topic: Software, Software Development | Comments (2) | Autor:

The QA tool vendor landscape is changing

Wednesday, 6. May 2009 18:47

The landscape of QA tool vendors is changing once again. Borland and Compuware are both being acquired by Micro Focus. As a result, many well-known tools will probably sink into insignificance. Borland itself had acquired Segue not too long ago.

This development might show that expensive tools that are cumbersome and can only be deployed and sold with great effort are simply no longer in fashion.

The era of the big Swiss-army-knife vendors is over; people are buying knives and screwdrivers separately again.

More on this at Heise Online.

Topic: Testing Culture | Comments (0) | Autor:

JUnit 4.5

Friday, 19. September 2008 9:19

JUnit 4.5 was released today. There are no features on the list that would make us switch immediately, but it’s always good to know.

JUnit 4.5 focuses on features that make life easier for JUnit extensions, including new public extension points for inserting behavior into the standard JUnit 4 class runner.

Topic: Software | Comments (0) | Autor: