On the Selenium-Users mailing list today, I got asked what recommendations I would give to a team just starting browser automation with Selenium on an agile project that had been running for 6 months. Not knowing anything about the project other than that, I had to limit my response to some very basic points that so far in my experience seem to apply well to many projects. (Note that this does not mean that they apply everywhere).
Here’s what I responded with – what else would you add?
Here are some recommendations that I’ve found to be generally applicable to automation projects:
- Figure out why you’re automating tests in general, and why you want to test through the browser. Reasons like “because it’s the cool thing to do” or “I’m looking to increase my skill set and make my resume better” are almost never the best ones from an organizational perspective. Reasons like “testing is taking too long” or “we have too many testers” may not be good ones either. “Accessing areas we can’t otherwise get to” or “freeing up our manual testers to do more brain-engaged testing” might be. Every project differs, though, so you really need to understand why you (or someone asking you to do automation) want to do it in the first place.
- Use Page Objects (Google the term to find information, both on the Selenium wiki and on various blogs). One of the benefits of using Page Objects is that it localizes all your interactions with actual pages to one place, which makes maintenance easier. You’re already starting behind if the project is six months in, and anything that makes maintenance easier is probably a good thing if you ever want to catch up.
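A minimal sketch of the Page Object idea. The page, locators, and field names here (`LoginPage`, “username”, and so on) are hypothetical, and `driver` stands in for any object exposing a WebDriver-style `find_element(by, value)` method; the point is that locators and page interactions live in exactly one class, so a UI change touches only that class.

```python
class LoginPage:
    """Encapsulates one page: its locators and actions live here and only here."""

    # Locators defined once; if the UI changes, only these lines change.
    USERNAME = ("id", "username")
    PASSWORD = ("id", "password")
    SUBMIT = ("css selector", "button[type=submit]")

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        """One task the page supports, expressed as a single method."""
        self.driver.find_element(*self.USERNAME).send_keys(user)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()
```

Tests then call `LoginPage(driver).log_in("alice", "s3cret")` instead of repeating locator lookups, so a renamed field is a one-line fix rather than a hunt through every test.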
- For new features, include the cost of automating the feature in the same story as building the feature. For existing features and infrastructure work, create separate stories on your backlog. Hopefully, you already have channels open where the team can work with the customer to get things the developers need (like refactoring or automated testing) but which don’t necessarily translate directly into new desired functionality for the customer (though they may enable new functionality later or provide other benefits that the customer also desires). If you do have them, use them for your automation, making sure the customer knows the benefits of doing automation (some of which should come from your reasons in point 1 above). If you don’t have those channels, then you need to develop them. Ultimately, the customer decides what they want to pay for, so you definitely want their agreement.
- If you’re following your agile methodology by the book, you probably have a team of generalists and are pairing. With generalists, there shouldn’t be a specific test automation person, and each person on the team should be expected to develop their own tests within the framework that evolves. Most actual projects I have seen haven’t been entirely this way, and there’s been some thought that it’s not even a worthwhile goal. If your team works this way, though, plan to pair with the other team members to get them started. If automation is built into the story cost, this should be easy. You can also request that people pair with you on the backlog items when they get purchased for a sprint. Even if you are the “automation person” and the team isn’t going to write automation as they work on stories, getting the team to pair with you is a good plan. It helps build relationships with the rest of the team, it helps the team be more aware of what you’re doing (which means that where the developer is making a fairly arbitrary decision, they might choose the option that is more automation friendly), and you may find out about APIs or similar things that you can use to make your testing more efficient.
- Don’t get hung up on doing all the testing through the browser. Some testing needs to be done that way; other testing can sometimes be accomplished just as well through lower layers of the application. If you understand your purpose and your tests, you should be able to analyze them and figure out where each is best executed.
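As a hypothetical illustration of testing below the browser: suppose the application has an admission-age rule. Exercising the rule directly as code is fast and precise; a browser test is then only needed to confirm the rule is actually wired up to the UI, not to re-check every boundary value through page clicks.

```python
def is_valid_admission_age(age):
    """Hypothetical business rule: patients aged 0 through 130 can be admitted.

    Boundary cases (negative ages, the upper limit) are cheap to check here;
    driving each one through the browser would add minutes per case for no
    extra confidence in the rule itself.
    """
    return 0 <= age <= 130
```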
- Treat your automation as a model of your problem domain and your application. In my current project, I actually have two models. One is focused entirely on the concepts that a doctor or nurse would use in describing what they need our system to do – they register a patient, they order tests and medications, they report results. This model has no knowledge of how things are implemented in our application. The second model focuses entirely on the implementation. It uses Page Objects to encapsulate each page, and each page knows only the tasks that can be performed on that page. My page objects generally aren’t even aware of each other (though there may be some of that for the navigation links). By isolating and decoupling these models, maintenance is reduced – if the implementation changes (which is more likely than the existing business logic changing), the business logic remains untouched.
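A sketch of that two-model split, under assumed names – `Clinician`, `RegistrationPage`, and the `driver` helpers are all illustrative, not the project’s actual classes. The conceptual layer speaks only domain language and is handed an implementation object; if a page is redesigned, only `RegistrationPage` changes.

```python
class RegistrationPage:
    """Implementation model: knows only what can be done on this page.
    `driver` is a stand-in with hypothetical type()/click() helpers."""

    def __init__(self, driver):
        self.driver = driver

    def fill_demographics(self, name):
        self.driver.type("patient-name", name)

    def submit(self):
        self.driver.click("register-button")


class Clinician:
    """Conceptual model: speaks the domain language (register, order, report)
    and knows nothing about how the application implements those tasks."""

    def __init__(self, registration_page):
        self._registration = registration_page  # implementation injected

    def register_patient(self, name):
        self._registration.fill_demographics(name)
        self._registration.submit()
```

A test written against `Clinician.register_patient(...)` survives a page redesign untouched, because the churn is absorbed entirely by the page object.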
- Plan to use the WebDriver interface (or Selenium RC if you need to use v1.x) rather than trying to record things in Selenium IDE. Use the recorder in the IDE to quickly find possible ways to locate a control, but don’t bother trying to start off with table-based recorded tests. You’ll find it increasingly difficult to keep up as you want to go beyond what’s possible through the Selenese commands.
- Treat your automation with the same respect and processes as your application code. Code review it, pair on it, write unit tests for it, put it in source control – everything. There are lots of potential pitfalls when developing code of any sort, and we as automators should use solutions that already exist where it makes sense to do so.
- Don’t blindly take your manual tests and automate them. While some manual tests may be suitable for automation, others aren’t – they may take a lot of code to accomplish something a manual tester could do easily, they may be test cases that don’t provide enough value to be worth automating, or there may be some other reason to keep them manual. Make sure you’re not wasting time automating things that don’t need it. You may also find that the automation tool lets you reach places your manual tests couldn’t, so there may be additional tests worth automating that aren’t in your manual suite at all.
- In places where you can do so without devaluing the test, introduce variation in your tests. Static automation that always does the exact same thing in the exact same way with the exact same data commonly tends to find defects when you create it and then never again (or rarely again). Sometimes this might be OK – if your sales people have a canned demo script they always use, and you create automation based on that to ensure that your sales people don’t get embarrassed in front of customers, you probably want the test to follow the script exactly. If, however, you’re looking to actually test functionality, adding variance means that your tests have a greater chance of finding bugs when they try something new. Possible ways of adding variance include using random data, path variance, and randomly ordering your test execution. By randomizing test data, you are looking for things like incorrect assumptions – for example, in our app, it shouldn’t matter if a patient being admitted to the hospital is male or female, whether they’re 18 or 80 years old, or whether they know (and provide) their birthdate. We may decide that these things don’t need separate test cases, or we may not even think of them. If our expectation is wrong, however, we’d like to know that. It’s possible that someone coded in a dependency that is a bug, or that wasn’t communicated to the team, or whatever. By varying our test data so that we hit some of that variance naturally over the course of several test runs, we get a wider degree of coverage and might stumble across something like that. Of course, we might miss it too – if the bug only occurs for an 80-year-old woman who doesn’t give a birthdate, we might never pick that combination. If our test is hardcoded to use a 36-year-old male with a birthdate given, though, we’ll definitely never find it with our test. Adding variance to the paths you take uses a similar idea.
Say, for example, you needed to copy some text to the clipboard as a step in a test. There are lots of ways to copy text once it’s selected – you can press Ctrl-C, you can select “Copy” off the Edit menu, you can right-click on the selection and choose “Copy”, or maybe there’s a toolbar with a copy button in your application. If you’re not specifically testing the copy functionality, you probably don’t care which method is used, as long as the text gets to the clipboard. It’s common for automators to pick one way to accomplish something and use that – maybe it’s the method the automator uses themselves, maybe it’s the easiest to automate, maybe it’s the one specified in the first test case that needs that functionality. Whatever the reason, the tests are now more constrained. If you instead provide a way to vary this behavior each time you need it (either randomly, or with some intelligence involved to choose the least-used path each time, or something), you have the possibility of catching bugs where one path to achieve the task doesn’t work the same as the other paths. Finally, varying the order of your tests gives you the possibility of finding interaction bugs between tests. Providing this variance requires that each test be independent, and this kind of variance isn’t one I’ve looked at a lot yet, but if you can provide it, your tests may have more defect-finding power. For each of these variance techniques, though, there may be test cases where you specifically don’t want to vary things – it’s essential to your tests that a task is accomplished a certain way, or that the data used meets certain criteria, or whatever – this is another area where analyzing and understanding your tests is important. You don’t want to over-specify your tests and constrain them in ways that aren’t necessary, but you don’t want to under-specify them either and miss what you’re trying to test.
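The data and path variance above can be sketched in a few lines. The copy methods and patient fields here are hypothetical examples, not a real schema; the two ideas being shown are picking an equivalent path at random, and seeding (and logging) the random generator so any failing run can be reproduced exactly.

```python
import random

# Equivalent paths for one task; which one is used shouldn't matter (hypothetical).
COPY_METHODS = ["ctrl_c", "edit_menu", "context_menu", "toolbar_button"]

def make_rng(seed=None):
    """Create a seeded RNG, logging the seed so a run can be replayed exactly."""
    seed = seed if seed is not None else random.randrange(2**32)
    print(f"test data seed: {seed}")
    return random.Random(seed)

def random_patient(rng):
    """Randomized test data: vary the attributes the app shouldn't care about."""
    return {
        "sex": rng.choice(["male", "female"]),
        "age": rng.randint(18, 80),
        "birthdate_given": rng.choice([True, False]),
    }

def pick_copy_method(rng):
    """Path variance: choose one of the equivalent ways to perform the task."""
    return rng.choice(COPY_METHODS)
```

Because everything flows from one logged seed, the “80-year-old woman with no birthdate” case that finally trips a bug can be re-run deterministically instead of hoping the dice land the same way twice.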
- Design in a way to repeat your tests even with the variance. Even though adding variance to your tests increases their power, there will likely still be times when you do want to exactly repeat the tests (for verifying bugs, for example). Build in this repeatability from the beginning – in my case, my two models talk by the conceptual model sending Command objects to the implementation model. These Command objects contain the action to perform, the data needed to do that action, and the path taken to perform that action (which sub-models, pages, and functions were used). These commands then get serialized out where a runner can later deserialize them and rerun the tests with all the same data and actions.
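One way the Command idea could look – the field names here are illustrative, not the actual schema from my project. Each domain action is recorded with the data actually used and the path actually taken, then serialized to JSON so a runner can later replay the identical test.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class Command:
    """One recorded domain action (illustrative field names)."""
    action: str   # e.g. "register_patient"
    data: dict    # the (possibly randomly generated) inputs actually used
    path: list    # which sub-models/pages/functions were chosen this run

def save_commands(commands, stream):
    """Serialize a run's command log so it can be replayed later."""
    json.dump([asdict(c) for c in commands], stream)

def load_commands(stream):
    """Deserialize a saved run back into Command objects for a replay runner."""
    return [Command(**d) for d in json.load(stream)]
```

A replay runner then walks the loaded list and dispatches each `action` with its recorded `data` and `path`, turning a one-in-a-thousand random failure into a deterministic regression test.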
- In a similar vein, run your tests from the beginning on all the platforms you know you want to test against. Cover the browsers and operating systems you can predict you’ll need right from the start, so that as you develop, you’re trying the code on all of them and won’t run into major surprises later, when you absolutely need to run a test on a platform you haven’t tried before and discover XPath or behavior differences.
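A small sketch of running every test across the whole browser matrix from day one. The factory names are placeholders – in practice each entry would construct a real local or remote WebDriver for that browser/OS combination – and the runner shape is an assumption, not a prescribed framework.

```python
def run_everywhere(test, driver_factories):
    """Run one test against every configured browser, collecting per-browser results.

    `driver_factories` maps a label (e.g. "firefox-win") to a zero-argument
    callable that builds a driver for that platform.
    """
    results = {}
    for name, make_driver in driver_factories.items():
        driver = make_driver()
        try:
            test(driver)
            results[name] = "pass"
        except Exception as exc:          # record the platform-specific failure
            results[name] = f"fail: {exc}"
        finally:
            getattr(driver, "quit", lambda: None)()  # always release the browser
    return results
```

Because every test runs against every entry from the start, a locator that only breaks on one browser shows up the day it’s written, not the week before a release.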