Surprise is so basic that we can easily forget about it. Use this lens to remind yourself to fill your game with interesting surprises. Ask yourself these questions:
- What will surprise players when they play my game?
- Does the story in my game have surprises? Do the game rules? Does the artwork? The technology?
- Do your rules give players ways to surprise each other?
- Do your rules give players ways to surprise themselves?
Surprise is a double-edged sword when applied to the broader world of software testing. Users can be surprised in a positive way (as in "Oh wow, this software anticipated a need and has already met it!") or in a negative way (as in "This software can't do that task I want to do? What a piece of junk!"). As testers, we provide information to our teams and stakeholders about the quality of the system, and we do this by looking at the system through various lenses, just as Jesse Schell advocates for game design.

Recently, I did some training for a client where we spent a good chunk of the day talking about the use of lenses. We used the Test Heuristics Cheat Sheet that Elisabeth Hendrickson, James Lyndsay, and Dale Emery created, focusing on the heuristics section in particular. Each heuristic is another lens through which we can look at our software, asking ourselves questions like: what does the Goldilocks heuristic (too big, too small, just right) mean in the context of the functionality I'm testing? How would I apply the Count heuristic (none, one, many)? From there we moved on to Rikard Edgren's Software Quality Characteristics poster and talked about non-functional lenses.

This principle of surprise is yet another lens: what's going to surprise my users when they use this feature? Will it be a pleasant surprise for them or a negative one?
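To make the heuristic lenses above a little more concrete, here is a minimal sketch of turning the Count and Goldilocks heuristics into actual test inputs. The function `validate_quantity` is entirely hypothetical, invented for illustration; the point is how each lens suggests a family of inputs.

```python
# Sketch: the Count heuristic (none, one, many) and the Goldilocks
# heuristic (too small, just right, too big) applied as test inputs
# against a hypothetical validation function.

def validate_quantity(qty, max_per_order=10):
    """Hypothetical rule: accept an integer quantity from 1 to max_per_order."""
    return isinstance(qty, int) and 1 <= qty <= max_per_order

# Count heuristic: none, one, many
count_cases = {0: False, 1: True, 7: True}

# Goldilocks heuristic: too small, just right, too big
goldilocks_cases = {-1: False, 5: True, 11: False}

for qty, expected in {**count_cases, **goldilocks_cases}.items():
    actual = validate_quantity(qty)
    assert actual == expected, f"qty={qty}: expected {expected}, got {actual}"

print("all heuristic cases passed")
```

Each lens produces only a handful of cases on its own, but walking a feature through several lenses in turn quickly builds a test list you would not have brainstormed in one sitting.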
For me, a good example of surprise comes from my days working as a tester on the Microsoft Expression Blend product. I was working with the graphics importer team, and my job was testing the foundational code that read Photoshop files, parsed out all the data about the layers and graphical effects, and made that data available to a converter that brought the images into Blend. We had a lot of Photoshop files in our test suite, including some that were corrupted. Photoshop wouldn't open these corrupted files, but we took the approach of salvaging as much of the file as we could. We couldn't always get everything (some of our corruptions were due to truncating the file, from which there was no way to recover data), but we'd show what we could get. We never publicized this feature, nor do I know of any user ever trying it. Nonetheless, it's easy to imagine a user feeling a degree of panic when their file wouldn't open in Photoshop, trying Blend as a last hope to salvage their work, and being pleasantly surprised to find they could recover at least something. Ever since working on that feature, one of the lenses I carry with me in my testing is to look at a feature from the perspective of my users and see whether I can identify any potential surprises. The negative ones then lead to additional test cases and/or bug reports to reduce their impact.
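The "salvage what you can" approach can be sketched in a few lines. This is not the actual Blend importer; it is a toy layered format I made up (each layer is a 4-byte big-endian length prefix followed by that many payload bytes) to show the design choice: stop quietly at the point of corruption and return every layer that parsed cleanly, rather than rejecting the whole file.

```python
import struct
from io import BytesIO

def salvage_layers(data: bytes):
    """Return every complete layer payload; stop silently at truncation."""
    layers = []
    stream = BytesIO(data)
    while True:
        header = stream.read(4)
        if len(header) < 4:
            break  # end of file, or a truncated length prefix
        (length,) = struct.unpack(">I", header)
        payload = stream.read(length)
        if len(payload) < length:
            break  # truncated payload: discard the partial layer
        layers.append(payload)
    return layers

# Two complete layers, then a third whose header claims 5 bytes but only 2 remain:
good = struct.pack(">I", 3) + b"abc" + struct.pack(">I", 2) + b"xy"
truncated = good + struct.pack(">I", 5) + b"zz"
print(salvage_layers(truncated))  # recovers the two intact layers
```

Nothing about the toy format matters; what matters is that the parser treats "ran out of data" as a stopping condition instead of a fatal error, which is exactly what turns a panicked user's corrupted file into a pleasant surprise.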
What surprises your users about your application?