We’ll resume the Art of Game Design series shortly (with a far smaller gap than between lenses #1 & 2), but first an interlude.
I mentioned in the last post that I had recently done some training for a client. Part of that training sprang from a conversation I had with the client’s QA Director about my testing philosophy. We were doing an assessment and planning the training, and since I knew elements of my philosophy would come out in both, I wanted to talk with him about those elements in a less formal context. The end result was his saying that he wished his team could hear what we talked about, and so the training included a handout that spelled out the elements I’ve identified so far. I wanted to capture that as a stake in the ground, so here it is. This will continue to evolve – I don’t think I’ve captured everything yet, and I’m not sure I’ve got the wording where I want it yet, but all in all I’m pretty happy with what’s here.
- Quality is subjective, varying from person to person. What one person values might be another person’s bug
- Quality is the whole team’s responsibility
- Software needs to solve the customer’s problems or it is useless
- Testing is a challenging intellectual process
- Thus, any testing activity that discourages thinking or questioning while performing it is potentially harmful. Test execution off of pre-written scripts is often an example of this.
- There are no best practices, just good practices for a particular context. Blindly applying a practice to a situation because it worked somewhere else can do more harm than good. Understanding the context any practice will be used in is crucial
- Testing is all about providing information to the team and stakeholders. If testing is failing to provide the information that is needed, it’s a waste of time and resources
- To provide that information, test reporting needs to clearly tell 3 stories: the story of the product quality, the story of the testing done and not done, and the story of the testing quality (why what was done was or wasn’t sufficient)
- To be successful, testers need to approach an app trying to prove that it has bugs rather than proving the app is correct. Otherwise, we may fall victim to confirmation bias and miss issues
- Test cases are only ever as good as their oracles
- Testers are not the gatekeepers of the project. They should have a voice, but not the sole voice, in release decisions
- The value of any action has 3 components: the benefit gained by completing the action, the costs incurred by performing the action, and the opportunity cost of forgoing the benefits of other actions that can no longer be performed. Choosing the right action at any point in time means striking a balance among these 3 elements
- Writing out fully detailed, explicitly defined test cases is rarely the best use of time. Variation in test execution is actually a good thing. Letting the tester who executes the test make decisions that don’t impact the test’s goal leads to better testing
- Test cases do, however, need to be explicit about the point of the test case and what it is trying to test
- Writing test cases at the start of a cycle of testing effort means creating those tests at the point in the cycle when we know the least about what we’re testing
- Communication is essential
- Variety in tests is critical
- Conventional software testing metrics are often misleading and can lead to undesired behavior. However, they’re what we have to work with at the moment, so until something better comes along, we need to use them to approximate the information we need. We just need to ensure we use them wisely and be conscious of the impacts and risks in doing so
- Some things can’t be automated. Some things can, but shouldn’t be. 100% test automation is not a desirable goal in most contexts
- Automating “stabilized” existing tests as regression tests is probably not the best use of automation for a project
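The point above that test cases are only ever as good as their oracles can be sketched with a small example. This is an illustration of my own, not from the handout; `apply_discount`, `weak_oracle`, and `stronger_oracle` are hypothetical names invented for the sketch:

```python
def apply_discount(price, percent):
    # Hypothetical function under test. It contains a deliberate bug:
    # it subtracts the percent as a flat amount rather than a percentage.
    return price - percent

def weak_oracle(result, price):
    # A weak oracle: it only checks that the discounted price didn't go up.
    # A test using this oracle "passes" even against the buggy function.
    return result <= price

def stronger_oracle(result, price, percent):
    # A stronger oracle: it independently computes the expected value,
    # so it catches the bug the weak oracle misses.
    return result == price - (price * percent / 100)

# 10% off 50.0 should be 45.0; the buggy function returns 40.0.
result = apply_discount(50.0, 10)
print(weak_oracle(result, 50.0))          # True  - the bug slips through
print(stronger_oracle(result, 50.0, 10))  # False - the bug is caught
```

The same test inputs and the same execution steps yield opposite verdicts depending only on the oracle, which is the sense in which a test case can never be better than the oracle it relies on.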
I believe… by Andy Tinkham is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.