Today I wrote a combinatorial optimization algorithm to match members of pair programming teams according to the psychological traits of each pair's members. The program rearranged the initial random allocation of pairs in a way that appeared to match my specifications. However, as I'll use this allocation for an experiment that I'll be able to perform only once, I wanted to verify the results carefully. How does one verify the operation of such a program?
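For concreteness, a matcher along these lines might look roughly like the following sketch: a random-swap hill climb that keeps a swap only when it lowers the variance of a per-pair trait difference. The single-trait data, the scoring function, and all names are illustrative assumptions, not the actual program described above.

// Illustrative sketch (not the original program): allocate pairs by random
// swaps, keeping a swap only when it lowers the variance of a per-pair score.
#include <cmath>
#include <cstddef>
#include <cstdlib>
#include <iostream>
#include <utility>
#include <vector>

// Hypothetical single trait score per subject.
typedef std::vector<double> Traits;

// Pairs are stored as consecutive elements of a permutation of subject indices.
static double pair_score(const Traits &t, int a, int b)
{
	return std::fabs(t[a] - t[b]);		// assumed pairing criterion
}

// Variance of the per-pair scores; the quantity to minimize.
static double allocation_variance(const Traits &t, const std::vector<int> &perm)
{
	std::vector<double> s;
	for (std::size_t i = 0; i + 1 < perm.size(); i += 2)
		s.push_back(pair_score(t, perm[i], perm[i + 1]));
	double mean = 0;
	for (std::size_t i = 0; i < s.size(); i++)
		mean += s[i];
	mean /= s.size();
	double var = 0;
	for (std::size_t i = 0; i < s.size(); i++)
		var += (s[i] - mean) * (s[i] - mean);
	return var / s.size();
}

int main()
{
	double data[] = {3.1, 7.2, 5.5, 1.9, 6.4, 2.8, 4.0, 8.3};	// made-up trait values
	Traits traits(data, data + 8);
	std::vector<int> perm;
	for (int i = 0; i < 8; i++)
		perm.push_back(i);			// initial (here: trivial) allocation

	double best = allocation_variance(traits, perm);
	for (int iter = 0; iter < 100000; iter++) {
		int i = std::rand() % 8, j = std::rand() % 8;
		std::swap(perm[i], perm[j]);		// propose a random swap
		double v = allocation_variance(traits, perm);
		if (v < best)
			best = v;			// keep an improving swap
		else
			std::swap(perm[i], perm[j]);	// otherwise undo it
	}
	for (std::size_t i = 0; i + 1 < perm.size(); i += 2)
		std::cout << perm[i] << '\t' << perm[i + 1] << '\n';
	return 0;
}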
Writing unit tests seemed a dubious proposition to me, because the code to verify the results would be very similar to the code that produced them. Writing a formal proof for the stochastic algorithm I used sounded even more difficult. In the end, I wrote out the results and the original data in a tab-delimited format that Excel can readily load. (One can even give such a file a .xls extension, so that opening it launches Excel directly.) I then re-entered the variance measures I'm using for the pair allocation as Excel formulas, and compared the values before and after the specified pair allocation. They matched the results of my C++ code exactly, which made me a lot more confident about its correctness.
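The post does not show the export code; something as small as the following sketch would do, writing a header row and one tab-separated line per pair. The file layout, column names, and function name are assumptions made for illustration.

// Illustrative export step: write the trait data and the final pairing as
// tab-delimited text, which Excel loads directly (optionally named *.xls).
#include <cstddef>
#include <fstream>
#include <vector>

static void write_allocation(const std::vector<double> &traits,
    const std::vector<int> &perm,		// pairs: (perm[0], perm[1]), ...
    const char *path)
{
	std::ofstream out(path);
	out << "member_a\ttrait_a\tmember_b\ttrait_b\n";	// header row for Excel
	for (std::size_t i = 0; i + 1 < perm.size(); i += 2)
		out << perm[i] << '\t' << traits[perm[i]] << '\t'
		    << perm[i + 1] << '\t' << traits[perm[i + 1]] << '\n';
}

int main()
{
	double data[] = {3.1, 7.2, 5.5, 1.9};		// made-up trait values
	std::vector<double> traits(data, data + 4);
	int pairing[] = {0, 3, 1, 2};			// pairs (0, 3) and (1, 2)
	std::vector<int> perm(pairing, pairing + 4);
	write_allocation(traits, perm, "allocation.xls");
	return 0;
}

Inside Excel the per-pair difference can then be recomputed in a helper column and its variance taken with a formula such as VARP, so that the spreadsheet's figures can be compared against those the C++ program reports.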
The case I described demonstrates the value of having a variety of computing paradigms at hand. These not only complement each other, but can also increase the confidence we can place in a computation's results, by offering us independent means to verify them.
Last modified: Friday, November 7, 2008 5:03 pm
Unless otherwise expressly stated, all original material on this page created by Diomidis Spinellis is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.