I love NUnit. I am starting on a new project, so what better way to prototype some primordial ooze than to bang out some test cases?
It was all groovy until I realized that one of the elements I must prototype is a custom user interface control.
This did not pose much of an issue; I simply made a new set of test cases that displayed a form with the control in it.
Dude! That's not very automated.
True enough, but getting the job done takes precedence. These cases are isolated enough that they can be excluded from the non-visual automated test cases.
Let's just call these UI Tests, since they require user interaction to complete. Perhaps there's no need to label them, but I feel they require differentiation, because these test cases cannot run "in a vacuum" on your CI server.
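NUnit's [Category] attribute is one way to make that differentiation concrete. Here is a minimal sketch (the fixture name and TestHarnessForm are hypothetical, and the attribute usage assumes NUnit 2.x-style categories):

```csharp
using System.Windows.Forms;
using NUnit.Framework;

[TestFixture]
public class GridControlUITests
{
    // Tagged "UI" so the CI run can exclude interactive tests, e.g.:
    //   nunit-console MyTests.dll /exclude:UI
    [Test, Category("UI")]
    public void PresentsControlForManualValidation()
    {
        using (Form form = new TestHarnessForm()) // hypothetical harness form
        {
            form.ShowDialog(); // blocks until the user closes it; not CI-friendly
        }
    }
}
```

The CI server runs everything except the "UI" category; a developer at a workstation runs the whole suite.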
So why wouldn't I just start building a prototype application, and do it that way?
Another bonus is that by implementing debug "harnesses" for "typical" event-handler scenarios (e.g. tracking selections, verifying event-handler firing), one can actually debug a lot of behavior, and even use Assert to validate it.
This is awesome, because it allows you to really focus on both the client side (How do we want this code to look?) and the custom-control side (Are events firing in the appropriate sequence?). I maintain you can't concentrate on this amid all the noise of "bringing up" a prototype.
And for the memory-challenged (me!), you now no longer have a reason to "forget" to test any user interaction scenarios before committing.
For example, the custom control had to present a grid of items in one of eight possible orderings (RowMajor/ColumnMajor, LtR/RtL, TtB/BtT; two choices cubed equals eight). What better way to systematically run through all eight presentations? Want to do that manually? I didn't think so. Granted, the user must manually validate the orderings (for now), but the important part is the completeness.
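A harness can enumerate those eight combinations mechanically, so none get skipped. This is a sketch under assumed names (the tuple of flags stands in for whatever ordering properties the real control exposes):

```csharp
using System;
using System.Collections.Generic;

public static class OrderingHarness
{
    // Three binary choices: major axis, horizontal direction, vertical
    // direction. 2 * 2 * 2 = 8 orderings, enumerated exhaustively.
    public static IEnumerable<(bool rowMajor, bool leftToRight, bool topToBottom)> AllOrderings()
    {
        foreach (bool rowMajor in new[] { true, false })
            foreach (bool ltr in new[] { true, false })
                foreach (bool ttb in new[] { true, false })
                    yield return (rowMajor, ltr, ttb);
    }

    public static void Main()
    {
        int count = 0;
        foreach (var ordering in AllOrderings())
        {
            // In the real harness this would configure the control's
            // ordering properties and pause for the user to eyeball the grid.
            count++;
        }
        Console.WriteLine(count); // all eight presentations covered
    }
}
```

The user still validates each layout visually, but the loop guarantees completeness.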
For example, I was implementing ISelectionService on my custom control, so I created a debug harness to manipulate the selection in all the different ways supported in the control's implementation (there were 6).
Just to show I'm not totally heartless, here is some sample debug harness code:
// Inside the harness's cell-click handler: cea carries the clicked cell,
// _st is the SelectionTypes value being exercised, _list is a one-element buffer.
ISelectionService iss = sender as ISelectionService;
Assert.IsNotNull(iss, "sender as ISelectionService failed");
bool wasSelected = iss.GetComponentSelected(cea.Cell);
int oldCount = iss.SelectionCount;
_list[0] = cea.Cell;
iss.SetSelectedComponents(_list, _st);
switch (_st) {
    case SelectionTypes.Add:
        Assert.IsTrue(iss.GetComponentSelected(cea.Cell), "iss.GetComponentSelected() failed");
        Assert.AreEqual(oldCount + (wasSelected ? 0 : 1), iss.SelectionCount, "iss.SelectionCount failed");
        break;
    case SelectionTypes.Primary:
        Assert.IsTrue(iss.GetComponentSelected(cea.Cell), "iss.GetComponentSelected() failed");
        Assert.AreEqual(1, iss.SelectionCount, "iss.SelectionCount failed");
        break;
    case SelectionTypes.Toggle:
        Assert.AreNotEqual(wasSelected, iss.GetComponentSelected(cea.Cell), "Toggle failed");
        Assert.AreEqual(oldCount + (wasSelected ? -1 : 1), iss.SelectionCount, "iss.SelectionCount failed");
        break;
}

Pretty much what you would verify visually, but let's not forgo the automated checks that keep everyone honest.
There were additional debug harnesses for validating Enter/Leave/Click event firing, etc.
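The event-firing harnesses can follow the same pattern: record each event as it fires, then Assert on the sequence afterwards. A minimal sketch (the recorder class and event names are illustrative, not from the actual control):

```csharp
using System;
using System.Collections.Generic;

// Wire the control's event handlers to Record("Enter"), Record("Click"),
// Record("Leave"), etc., then assert the expected relative ordering.
public class EventOrderRecorder
{
    private readonly List<string> _fired = new List<string>();

    public void Record(string eventName)
    {
        _fired.Add(eventName);
    }

    // True if the expected names occur in this relative order
    // (other events may be interleaved between them).
    public bool FiredInOrder(params string[] expected)
    {
        int pos = -1;
        foreach (string name in expected)
        {
            int next = _fired.IndexOf(name, pos + 1);
            if (next < 0) return false;
            pos = next;
        }
        return true;
    }
}
```

In the harness you would then write something like Assert.IsTrue(recorder.FiredInOrder("Enter", "Click", "Leave")) after operating the control.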
I'm sorry, but that's just plain hard to get done if you are "embedding" this into a prototype "product". This way, the test cases make you operate the UI in exactly the way the debug harnesses' Asserts expect.
Now granted, the user must cooperate with the test case's objective, and the Assert strategy of a debug harness is only as good as the effort you put into it, but it can approach the rigor of actual use cases. Don't fret! This is a great thing! You can now refactor that code and reintegrate it into new test cases. Well done!