It’s not possible to manually test your whole product every time you release it. Well, maybe it’s possible, but it’s certainly not practical, especially when you have a large and rapidly growing product.
Before we had our automated tests in place, we relied solely on our manual testers to spot any regressions caused by releases. That was not only proving more and more difficult as our product grew, but was also putting a massive strain on the manual QA team.
Now we use a combination of different methods of automated testing to cover as many bases as possible. These include:
– unit tests
– Selenium-based front-end tests (a minimal sketch of one of these follows below)
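To give a flavour of that Selenium layer, here’s a minimal sketch of the kind of front-end check we’re talking about. The URL, element ID, and test name are placeholders for illustration, not our actual app:

```python
# Minimal Selenium front-end check: load a page and confirm a key element renders.
# The URL and element ID below are placeholders, not our real application.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


def test_login_page_loads():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")  # placeholder URL

        # Wait for the login form to appear rather than sleeping blindly.
        form = WebDriverWait(driver, 10).until(
            EC.presence_of_element_located((By.ID, "login-form"))  # placeholder ID
        )
        assert form.is_displayed()
    finally:
        driver.quit()


if __name__ == "__main__":
    test_login_page_loads()
    print("login page check passed")
```

In practice checks like this run as part of a larger suite rather than as standalone scripts, but the shape is the same: drive the browser, wait for the UI, assert on what the user would actually see.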
Over time, automated testing has become a very powerful tool for us. At the push of a button we are able to test thousands of permissions and run multiple suites of front-end tests to ensure there are no regressions across a large portion of the product.
However, I think something that can easily be missed when developing a test automation strategy is good reporting.
At a fundamental level, being a tester is as much about communication as it is about finding bugs. This comes naturally to any manual tester: if you find a bug, you tell someone about it, and you tell them in as much detail as possible.
The same logic should apply to automated testing. The important thing to remember is that running tests isn’t quality assurance in itself: you have to act on the results to achieve quality, and you can’t act on them without good reporting.
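As an illustration of what “acting on the results” can mean, here’s a rough sketch of a reporting step that turns a JUnit-style XML report into a short, human-readable summary that could be posted to chat or email. The file name and output format are illustrative assumptions, not any particular tool’s interface:

```python
# Sketch: summarise a JUnit-style XML test report so failures are easy to act on.
# "report.xml" and the output format are placeholders for illustration.
import xml.etree.ElementTree as ET


def summarise_junit_report(path: str) -> str:
    root = ET.parse(path).getroot()
    # Reports may use a <testsuites> wrapper or a single <testsuite> root;
    # iter() handles both cases.
    total = failed = 0
    lines = []
    for suite in root.iter("testsuite"):
        for case in suite.iter("testcase"):
            total += 1
            failure = case.find("failure")
            if failure is not None:
                failed += 1
                lines.append(
                    f"FAIL {case.get('classname')}.{case.get('name')}: "
                    f"{(failure.get('message') or '').strip()}"
                )

    header = f"{total - failed}/{total} tests passed"
    return "\n".join([header] + lines)


if __name__ == "__main__":
    print(summarise_junit_report("report.xml"))
```

The exact destination matters less than the habit: every run should end with a summary a human will actually read, so failures get investigated rather than ignored.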
Another method of testing that we use, and one that’s proven very successful, is a tool called RainforestQA.
This tool allows you to write test scripts in plain English, and at the press of a button they get fired off to one of their 6,000 or so human testers. Clearly, this isn’t actually automated testing, although the way it ends up being used is very similar.
From our end, all we do is specify which tests we want to run and press a button. Twenty minutes later, we’re told whether we have any failures. From our perspective, what’s the difference? Importantly, we don’t need any technical knowledge to write these tests: no code whatsoever, just plain English. That makes the tests not only quick to write but also easy to maintain.
Overall, implementing an automated QA strategy has proven very valuable for us, and used in tandem with manual QA it has enabled us not only to increase our test coverage but also to improve the stability of our releases. That said, our strategy is always evolving, and we’re constantly looking for new ways to improve our QA.