Category Archives: Testing Culture

Should Load Tests Validate Functionality?

My answer to this question is a very strong “yes”. You might want to limit the scope of the overall validation, but checking response codes alone is a clear fail in my opinion. Likewise, verifying the result by looking for a single phrase or word is not enough. A minimal sketch of what a more thorough check might look like follows below.

Reasons and Examples

  • Modern web implementations often incorrectly return application status pages with response code 200.
  • How do you ensure that you got the entire page back and not only the first 75%?
  • Imagine an e-commerce search that breaks under load and instead of saying “I found 200 matches”, it returns a page saying “no matches found, did you mean …”. The latter is still a valid page but your load test will not discover the flaw.
  • Continue reading Should Load Tests Validate Functionality?
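
To make this concrete, here is a minimal sketch of what such a validation could look like. It is not tied to any specific load testing tool; it simply uses Java’s built-in HttpClient, and the URL, the “no matches found” phrase, and the result-count pattern are assumptions standing in for whatever your application actually returns.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch: validate more than the response code.
// The URL and the expected markers are placeholders, not a real system.
public class SearchPageCheck {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://shop.example.com/search?q=shirt"))
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        String body = response.body();

        // 1. The response code alone is not enough, but it is still the first gate.
        if (response.statusCode() != 200) {
            throw new AssertionError("Unexpected status code: " + response.statusCode());
        }

        // 2. Did we get the entire page back, or only the first 75%?
        if (!body.contains("</html>")) {
            throw new AssertionError("Page appears to be truncated");
        }

        // 3. Functional check: the search must actually return results,
        //    not a valid-looking "no matches found" page.
        if (body.contains("no matches found")) {
            throw new AssertionError("Search returned an empty result page under load");
        }
        if (!body.matches("(?s).*\\b\\d+ matches\\b.*")) {
            throw new AssertionError("Result count marker not found on the page");
        }

        System.out.println("Search page validated");
    }
}

In a real load test these checks would of course live inside your load testing tool of choice rather than in a standalone main method, but the principle is the same: validate the status, the completeness of the response, and the actual functionality.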

SQE training week in Boston

At Xceptance we think that “you live and you learn”, so we were eager to read about an upcoming training week by SQE in Boston (March 24-28). The various tracks had promising headlines like “How to Break Software: Robustness Testing Unleashed” or “Testing Under Pressure”, but we decided to pick our favorites and chose “Exploring Usability Testing” and “Mobile Application Testing”.

The usability training was scheduled for just one day, the mobile training for two. We knew that it would be tricky to fit an overview of usability testing in just one day of training, and it turned out the instructor was well aware of these concerns.
Continue reading SQE training week in Boston

Why do women make good software testers?

The answer to this question is fairly simple: women make good software testers if they’re good at testing. And so do men. There would be no reason to go into this any further if it weren’t for the occasional blog posts that pop up here and there, carrying headlines like ‘Are women better testers than men?’ or ‘Is software testing women’s work?’ (if you don’t believe that, go ahead and ask Google!). We are aware that this is a tricky topic, and bringing it up usually seems to imply that sexism and discrimination are just around the corner. But instead of jumping right in and listing possible skills that may or may not make women better testers (who hasn’t heard of ‘multi-tasking’, ‘emotional intelligence’ or ‘an eye for detail’?), we’d rather tell you about our own experiences.

At Xceptance we just think our team is great the way it is. There are women and there are men, and everyone contributes to the company’s success in their own way. Our employees do a great job listening to customer requests, they passionately discuss appropriate testing strategies and they sometimes make phone calls while simultaneously updating browser versions on testing devices and formatting the latest bug report. Some of our employees love snowboarding in their free time while others like cooking and crafts, and we can assure you that there is nothing gender-related about all that. So if there are more women than men in software testing, that’s great! But it doesn’t mean anything besides those bare statistics.

Of course now you could ask: if gender doesn’t matter, why make a blog post of it? Admittedly, that is a good question. But with all those articles out there about women in software testing and the accompanying stereotypes we just felt like we’d have to take a stand and outline our own opinion. We’re not big fans of the ‘you say it best when you say nothing at all’ attitude, so we figured we might as well go for it. And with all of the above being said, there’s just one thing we have to confess: every year on March 8th all of our female employees receive a chocolate treat in honor of International Women’s Day.

Photo by kevinshine under CC-BY-2.0.

Another Six Things You Should Never Say to a Software Tester

Inspired by the UTest article “6 Things You Should Never Say to a Software Tester”, we came up with six additional comments a software tester does not really like to hear:

But it works on my development machine!

We don’t doubt that, really. But a sandbox is called a sandbox for a reason. It’s a playground, a simulation of reality, with its own rules and regulations. Kids fight about sand castles first before moving on to arguing about real estate in the grown-up world, so please throw away your shovel for a second and join us in the real world!

You broke it.

“No, I didn’t!” – “Yes, you did!” – “NO, I DID NOT!” and so on… That was me and my friend in kindergarten, at the age of 4. The toy was in pieces and neither of us could (or would) recall whose fault it was. And it didn’t really matter at that point. The more important question was how to hide the broken thing from the teacher to avoid any unpleasant consequences. Once we had figured that out, we were done with the argument and started working together. A good example of how you can learn from past mistakes.

Your test report says you executed all of the test cases, so there won’t be any new bugs from now on, right?

“You’ve eaten all the pasta, that means you’ll never be hungry again, right?” – If you think this is an absurd example, read the third statement again. See the analogy?

You’re testing e-commerce sites? Oh, so you’re basically shopping online all day long?

Yes. And no. I don’t need 100 luxury bathrobes embroidered with profanities, and I don’t want them either. I also don’t have a second home in Hawaii, and I know that entering incomplete billing information won’t take me to the next step in checkout. And yet I simulated all these scenarios while testing. Testing an e-commerce site is different from ordinary online shopping, especially since the term “ordinary online shopping” has yet to be defined. Am I the grandmother who just got an iPad for Christmas and is now looking for bargains? Am I the hipster who is annoyed by all the promotions that are thrown at him while he just wants to get his shopping done? Is the website prepared for the most challenging checkout attempts? It is our job to find that out, and it has little to do with “online shopping”. I did once try to pay for my purchase at the farmer’s market with a test credit card, but that’s another story…

I see your point, but it’s not a requirement.

So it’s not a requirement that the password field should be cleared after submitting the form? Instead of treating the documentation as a bible, we’d appreciate the use of common sense more often. We know there’s the tight schedule and the even tighter budget, and that those things don’t allow for many extras. But, believe us, a password that floats around unattended on a page is not an extra; it can hit you hard later on.

This is no longer a bug, please see the updated requirements.

This is even worse if it comes along with the previous statement. Turning a defect into a requirement should not be an acceptable solution. If it happens anyway and the alleged bug turns into a butterfly – that is, if the requirements change – please! let! the! testers! know! We don’t like to spend our time hunting fake bugs, so we need to know about any updates as soon as possible. And yes, this also includes design changes!

What else comes to your mind?

This is Just a Data Issue


Image © Juja Schneider

Many projects have their closed or “won’t fix” bugs commented with remarks such as:

  • “Content-related issue”,
  • “Won’t happen on live system”, or
  • “This is just a data issue”.

The development of new features and the maintenance of data often happen in different environments. For testers, it is thus difficult to decide on the cause of a certain issue: is a required text missing because the pages aren’t implemented correctly, or is it simply not there because the corresponding product data aren’t available in this particular test system? Very often testers work on development systems without access to the latest data. The following is a typical scenario:

  1. Testers report a bug.
  2. Development resolves the bug as content-related and marks it “won’t fix”.
  3. Testers report further bugs of that sort.
  4. Development gets irritated: “We’ve told them that these are content issues!”
  5. The relationship between testers and developers might really go downhill from there, seriously threatening the project’s success: the credibility of the test team is damaged, developers don’t take reported bugs seriously anymore, testers get overly cautious and stop reporting content-related problems…

Inevitably, potential bugs don’t get reported and reported bugs never get retested before the project goes live. This is even more true in poorly managed projects because no one really feels responsible for solving the problem.

Continue reading This is Just a Data Issue

Successful Software Testing – Communication is Everything


Image © Juja Schneider

Continue reading this article if one of the following statements applies to your software testing projects:

  • Comments about incompetent developers and nit-picking testers are normal.
  • What test data should be used and where it can be found is always unclear, regardless of how often it has been explained before.
  • The same questions are raised again and again by different people.
  • Features known to be incomplete are tested.
  • People report the same defect over and over again.

All these points may indicate that communication in your project is poor. As in real life, communication both within the test team and between the test and development departments is the key to successful projects. After all, everybody has the same goal: creating high-quality software. Here are some useful tips that will help you get the most out of your teams:

Communication between Testers and Developers

  1. Get in touch
    Especially in virtual teams, the bug tracking system is often the only point of interaction between developers and testers; yet it is precisely the written word that causes misunderstandings and leads to pointless extra work. Regular calls and actually talking to each other can work miracles here.
  2. Clarify expectations
    What information exactly do developers need when you report a bug? Which test areas have the highest priority right now? To avoid extra work, try to settle on some basic guidelines right from the beginning, and make sure to communicate them and adjust them whenever necessary.
  3. Create a culture of constructive criticism
    That’s probably the most important point of all. There is one goal and everybody is working on achieving it. Thus, each detected and eliminated bug is a step towards a better piece of software and should be seen as a great team achievement.

Communication within Test Teams

  1. Get to know each other
    This might sound easy, but it’s surprising how little information testers have about each other, even if they work for the same company. This gets even worse if virtual teams are involved! Sharing information about skills and experience will result in much more efficient work.
  2. Talk
    Another simple but often neglected point. Testing in a team will lead to great success if you share your knowledge and your findings. Group chats work well for some teams, while regular, well-run meetings work better for others.
  3. Share your test data
    Have you ever found yourself in a situation where testing a specific feature took a lot of time just because you had to ask around for usable test data? This certainly happens every once in a while, and it’s not a problem if you are the only one who has to do so. However, it becomes an issue as soon as everyone else in the test team has to request the required data as well. So find a way to share your test data with all team members – it’s worth it.
  4. Instantly share news
    There is a new release? There is a temporary problem on one of your test systems? Make sure everybody knows about it.

Last but not least, a major point for everyone involved:

Speak the same language
There are bound to be misunderstandings if everybody talks about the same things in different words. The core of successful team communication is a mutual agreement on the terminology used within your project. Try to write it down, even if it causes extra work. It will help new people get into the subject.

A variety of tools makes communication much easier nowadays and doesn’t require much setup: use shared online documents, create mailing lists, and take advantage of chat systems.

To cut a long story short: cultivate a constructive work environment, make sure to always communicate, and share your knowledge. It’s that easy!

Test Case Design – The Unfinished Discussion

This is a topic whole generations of software testers have debated. Test case design is a very subjective matter, depending hugely on the preferences of the test engineer and the requirements of the project. Although exploratory techniques are increasingly popular in the testing world, scripted manual testing is still used in many projects.


Image © Juja Schneider

When you are in charge of designing test cases, you have to decide which level of detail to apply. This level of detail ranges from a vague description of a given scenario to a list of every single click a tester should perform. You can cover several actions in one test step or split them up into separate steps. The more detail you provide, the bigger your test suite will be in the end.

But which approach is the right one? This answer might not really help you, but: It depends!

Let’s talk about some advantages of detailed test cases.

  • The test cases are very detailed.
  • You can ensure that less obvious features and scenarios are covered.
  • Even less experienced testers can execute them easily, as a precise sequence of steps is specified. It is often stated that this makes them a candidate for outsourcing.
  • Testers who are not familiar with the target application will get a detailed overview of the system’s features.
  • Complex and critical key features can be tested very thoroughly based on the requirements and specifications.
  • There is no room for interpretation as to whether a test case has passed or failed.

Sounds good? It is… often. However, we wouldn’t have reopened the discussion if this were the whole answer. So what are the disadvantages?

  • The test cases are very detailed. Yes, this sentence is supposed to be here.
  • You have to invest much more time and effort into creating the test cases.
  • Maintaining your test cases becomes a much bigger effort. Every time a small detail changes, you have to adjust the test cases.
  • You may miss out on the creative testing approaches of skilled testers. Extending test cases while testing is unlikely to happen.
  • Re-using test cases in future projects is less likely, as the test cases are more project-specific.
  • Testers may develop a kind of tunnel vision. You may know this from your own experience: if you are forced to follow a specific path to reach your target, you may lose sight of details along the way.
  • The test results depend heavily on the person who created the test cases. If he/she didn’t do a good job, the executing testers won’t make up for this during testing. If the test case designer missed important steps, the testers will miss them too.
  • Providing detailed steps without room for alternative paths will magically merge a group of testers into one single tester. While they will precisely follow the steps when detailed test cases are provided, they might use completely different approaches when they have the freedom to look around.
  • If the expected results are precisely defined, nobody will question what he or she sees on the screen, so the tests themselves never get tested.
  • Or, taken to an extreme: detailed, i.e. step-by-step, test cases will turn your testers into human test automation engines, with the advantage of additional pairs of eyes. If you are lucky, your testers will report things they see that are not stated in the tests, such as broken layouts.

So what is right and what is wrong? Some guidelines would help, wouldn’t they? As there won’t be THE ultimate solution, here are a couple of hints on how to find the right approach for your project.

  1. Keep the subject of your project in mind
    If you test a medical device, you probably have to have very detailed test cases to satisfy the regulations and document everything as well as possible, because lives are at stake. If you test just a DVD player, you might go with 20 relatively free and 5 detailed test cases instead.
  2. Consider the skills of your testers
    Highly skilled testers won’t need detailed instructions for all test areas. You may want to define test cases in a more pragmatic way.
  3. Think of your future user base
    If this is an extremely large group where all levels of expertise exist, you want to reflect that in your testing without driving up the time needed to test. If your future user base is extremely experienced and has to go by the book anyway, such as pilots, you can reflect that in your testing as well.
  4. Specify the goal and not the way
    Try to avoid step-by-step descriptions. Instead of sending someone down a menu by saying “click here, here, here, and there”, just state: “Navigate to X using the application menu.”
  5. Challenge your application
    Later, real-world users will make mistakes you cannot anticipate, so add that jitter to your testing. Give your team room for creativity and for taking individual paths.

Regardless of whether your project requires detailed test cases: try to harness the power of randomness and give your testers enough freedom to question anything at any time, while keeping control of what has to be done.
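
If parts of your testing are automated, or if you simply want to vary the order and data of manual test runs, here is a small sketch of what adding such jitter could look like. It is only an illustration: the navigation paths and search terms are made-up examples, and the whole thing is plain Java rather than any particular test framework.

import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Illustrative sketch: introduce controlled randomness into a test run.
// The paths and search terms below are invented examples.
public class RandomizedRun {

    public static void main(String[] args) {
        // Remember the seed so a failing combination can be reproduced during retesting.
        long seed = System.currentTimeMillis();
        Random random = new Random(seed);
        System.out.println("Test run seed: " + seed);

        List<String> navigationPaths = Arrays.asList(
                "home -> category -> product",
                "home -> search -> product",
                "home -> promotion banner -> product");
        List<String> searchTerms = Arrays.asList("shirt", "shoes", "s#!rt", "", "   ");

        // Shuffle the order so the same paths are not walked the same way every time.
        Collections.shuffle(navigationPaths, random);

        for (String path : navigationPaths) {
            String term = searchTerms.get(random.nextInt(searchTerms.size()));
            System.out.println("Exercise path '" + path + "' with search term '" + term + "'");
            // ... hand this combination to a tester or to an automated script ...
        }
    }
}

Because the seed is printed, the run stays reproducible: if a randomly chosen combination uncovers a bug, the same sequence can be generated again for the retest.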

The Bipolar Life Of A Software Tester – Continued

Eric Jacobsen from testthisblog.com started this little rant about his bipolar life as a tester. You should read it; it is very entertaining. It describes precisely what we feel from time to time. So we felt encouraged to continue.

Image © Juja Schneider

Cool, developers have marked most of my bugs as resolved. Maybe we will be able to launch the project in time!

No, wait…

I will be busy doing retests most of the day. This sucks. I won’t be able to continue my scheduled test cases for today. Test management won’t be too happy with me. I hate doing all these retests.

I’m so proud, because I have completed all my test cases in record time and found so many bugs! I’m a testing ninja! I might be able to go home early.

No, wait…

What’s that? Test management has assigned a whole bunch of new test cases to me? Is this my reward for working quickly? Life is not fair.

But I found this really big bug minutes ago. Wohooo! No regular user will be able to work with that feature. It’s a usability nightmare! Hope they will fix this soon. No way they can go live with this one. …I’m a representative of “a regular user”, right? I won’t even look at the specification. This cannot be right!

No, wait…

I took a look at the specification. It is expected to work like that. The design agency sold this as a “visionary approach”. What to do now?

Poor developers! I feel honest sympathy for them. All these bugs I submit really cause a lot of work.

No, wait…

Why can’t they build it right from the beginning? I have so much more work to do, just because they deliver a buggy system.

No, wait…

Would I even have a job if all developers created perfectly running software? I should be happy that they are a little sloppy sometimes.

Wait a moment, is this a bug? The weather forecast mentioned “light snow in the afternoon”. I would rather call it “heavy-snowish” – and it is pretty late too.

No, wait…

Maybe this is a “Works as designed”?

P.S.: Feel free to continue.

Our Top 6 Software Testing Trends 2013

Happy New Year to everybody! Time to think about 2013 and the work ahead of us.

The e-commerce market is growing and becoming more competitive every day. This means the customer experience is going to play an even bigger role in 2013. Online shops are expected to be stylish and beautifully designed – but customers are also getting more demanding in terms of performance and usability on multiple devices.

The following topics are, from our point of view, the most important issues that will keep us busy in 2013.
Continue reading Our Top 6 Software Testing Trends 2013