Last week, I gave a presentation on exploratory testing at the Twin Cities Quality Assurance Association. It was the same paper I gave at STAR East in May, updated with some things I learned from giving the presentation the first time. The paper and presentation are both available at the lab website.
One of the things I talked about was an idea that Kaner and Bach have discussed previously: that testing is a process of questioning the application you’re testing. Each question the application answers successfully provides more confidence in the quality of the application. The problem of testing then becomes one of choosing the right questions to ask. Context-free questions are one strategy for coming up with those questions. They are questions that can be used to focus a person and help them solve a problem more effectively. They help the problem solver explore why they need to solve the problem, whether there are similar problems in their experience that can help them with a portion of the problem they’re working on (or even the whole thing), and what the various solution alternatives are. Gause and Weinberg talk about context-free questions in their book Exploring Requirements: Quality Before Design. Michael Michalko talks about the Phoenix Checklist in his book ThinkerToys. Unfortunately, I can’t find an online listing of these questions. (2011 update, since people keep finding this post in particular: Michael Bolton made a blog post last year including a long list of questions. Curious readers should go there for more information.)
After the meeting, Pete Ter Maat sent me the following question (which he subsequently gave me permission to post here):
I understand the use of the Phoenix Checklist (which I’ve kept in my Palm Pilot for years) for solving a problem, such as “My office is disorganized.” The checklist is full of mentions of “the problem”, and in this case I just replace “the problem” with “the fact that my office is disorganized”.
But I’m wondering what you think of as “the problem” when you are applying the checklist to a testing situation.
Let’s say you’re testing validation rules that result in error messages when a user enters invalid parameters into a GUI. The user enters values for fields like “Lower Rate” and “Upper Rate.” There are validation rules that ensure the user gets an error message if he/she enters a lower rate that exceeds an upper rate, a lower rate that is 2X the blanking interval, a lower rate under 500 when “mode switch” is enabled, blah blah blah. You have a nice list of all these validation rules, and you have a GUI you can use to enter test values.
In the above example, what is “the problem” (or problems) that you plug into the Phoenix Checklist?
Here’s my answer to Pete:
I would define the problem as the charter of your testing session. So, in your example, your charter might be to “Find errors in the validation rules and their handling”. Putting it into the same form as your “the fact that my office is disorganized” example, you might have “the fact that you don’t know whether there are bugs in the validation rules and their handling” or “the fact that you don’t know where the bugs are in the rules/handling”. You could also get more specific and focus on individual rules; it might provide more insight to apply the questions to the set of rules as a whole and then to one or two of the key rules individually. If that worked well for the sample rules, you could then apply the questions to the other rules.
You could also put the problem statement in another form: “the fact that you don’t have sufficient confidence that the rules piece functions correctly”. That ties more directly to the idea of testing as asking questions of the application, with each question that is answered correctly giving you a higher degree of confidence. This might actually be a better approach than the one I detail in the first paragraph, since it is easier to quantify. It’s easier to say “I have enough confidence in the quality of the rules handling” than it is to say “I know there are no bugs in the rules engine” or “I know where all the bugs are.”
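To make the “questions” framing concrete, here’s a minimal sketch in Python. The parameter names, thresholds, and rule directions (lower_rate, upper_rate, blanking_interval, mode_switch) are my own guesses based on Pete’s description, not the actual product rules; the point is just that each rule becomes a question you pose to the application and then check the answer to.

```python
# A sketch only: the rule directions, thresholds, and parameter names below are
# guesses from Pete's description, not the real device rules.

def validation_errors(params):
    """Hypothetical stand-in for the rules engine under test."""
    errors = []
    if params["lower_rate"] > params["upper_rate"]:
        errors.append("lower rate exceeds upper rate")
    if params["lower_rate"] >= 2 * params["blanking_interval"]:
        errors.append("lower rate conflicts with blanking interval")
    if params["mode_switch"] and params["lower_rate"] < 500:
        errors.append("lower rate under 500 with mode switch enabled")
    return errors

# Each check below is one "question" asked of the application.  A correct answer
# raises no AssertionError and buys a little more confidence in the rules piece.
def ask_questions():
    # Does the application complain when the lower rate exceeds the upper rate?
    assert "lower rate exceeds upper rate" in validation_errors(
        {"lower_rate": 900, "upper_rate": 800, "blanking_interval": 600, "mode_switch": False})

    # Does it stay quiet when the parameters don't conflict?
    assert validation_errors(
        {"lower_rate": 600, "upper_rate": 800, "blanking_interval": 400, "mode_switch": False}) == []

    # Does the mode-switch rule fire only when the mode switch is enabled?
    assert validation_errors(
        {"lower_rate": 450, "upper_rate": 800, "blanking_interval": 400, "mode_switch": True})

if __name__ == "__main__":
    ask_questions()
    print("All questions answered correctly.")
```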
Does anyone have anything to add to this?
Andy, here’s one online reference for the “Phoenix Checklist”. I’m not sure it really is what you’re talking about in your entry, since I haven’t read “ThinkerToys”, but I did read “Exploring Requirements” and these sound like context-free questions.
http://www.bo.uiowa.edu/~TrainersNetwork/handouts/Handout_1.pdf
It’s missing at least one key question: “Whose problem is it?”
I find Pete’s example interesting, after reading Cooper and Reimann’s “About Face”. Error messages are at best annoying to end users and at worst (if you believe Cooper, and I think I do) a constant low-grade humiliation. Cooper makes a strong and convincing case that much effort should go into eliminating error messages. The application should either constrain input (e.g., with controls such as the “spinner”), or accept unbounded input without validation but prevent the “incorrect” input from doing damage and provide unobtrusive feedback about its possible consequences.
Also see an older blog entry of mine about error messages from the tester’s viewpoint (http://bossavit.com/thoughts/archives/000029.html).
Hi Laurent, to respond to your comments …
Yes, your link points to the Phoenix checklist Andy and I were discussing.
Thanks for pointing out _About Face_. I’m a fan of the book and in other situations have lobbied for error handling policies such as the ones he advocates.
A little more context may help here though. The application in question is software to program a pacemaker. A cardiologist is allowed to enter conflicting parameter values into the GUI. The fields and values are not constrained, because sometimes a handful of parameters interact and we don’t want the user to have to worry about the order in which he/she sets the values. At the same time, we want to warn the user of a “bad” input, because a life is at stake here.
Basically, when a field is changed to a “bad” value, we don’t know whether:
a) The cardiologist accidentally chose an incorrect value.
b) The field change is part of a larger plan. After the cardiologist changes a few more fields, the conflicts may be resolved.
In (a) we want to alert the cardiologist right away while the action is still fresh in his/her mind, whereas in (b) we don’t.
What is constrained is the “Send these values to the device” operation. That button is not enabled until the user sets all parameters to non-conflicting values.
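That interaction pattern (accept any value, warn about conflicts, but gate only the send operation) can be sketched in a few lines. This is not the actual programmer code; the field names and the single conflict rule are hypothetical stand-ins for the real parameters and rules.

```python
# Sketch of "warn on conflicts, gate only the send" -- the field names and the
# conflict rule are hypothetical stand-ins, not the real pacemaker parameters.

class ProgrammerSession:
    def __init__(self):
        self.params = {"lower_rate": 600, "upper_rate": 800}

    def set_param(self, name, value):
        # Accept any value: the user may be mid-way through a larger change
        # that resolves the conflict a few edits from now.
        self.params[name] = value
        return self.conflicts()            # surfaced as warnings, not hard errors

    def conflicts(self):
        found = []
        if self.params["lower_rate"] > self.params["upper_rate"]:
            found.append("lower rate exceeds upper rate")
        return found

    def send_enabled(self):
        # The only hard constraint: conflicting values never reach the device.
        return not self.conflicts()


session = ProgrammerSession()
print(session.set_param("lower_rate", 900))   # ['lower rate exceeds upper rate']
print(session.send_enabled())                 # False
print(session.set_param("upper_rate", 1000))  # []  (a later change resolved it)
print(session.send_enabled())                 # True
```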