There are plenty of methods to catch and fix bugs before a piece of software is shipped, but Microsoft is testing a new one that may be a wee bit invasive for developers: biometrics.
A Microsoft Research paper titled "Using Psycho-Physiological Measures to Assess Task Difficulty in Software Development" details experiments with developer biometrics: monitoring a developer's eye movements and physical and mental state as they code, in order to measure the alertness and stress levels that indicate a higher probability of code errors.
Researchers Andrew Begel, Thomas Fritz, Sebastian Mueller, Serap Yigit-Elliott, and Manuela Zueger conducted a study in which they attached psycho-physiological biometric sensors, including an eye tracker, an electrodermal sensor (which measures sweat on the skin) and an EEG (brainwave) sensor, to 15 developers as they worked on various programming tasks. The study found that biometrics could predict task difficulty for a new developer 64.99% of the time. For a new development task, the researchers found biometrics to be 84.38% accurate.
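To make the idea concrete, here is a minimal sketch of what "predicting task difficulty from biometric readings" looks like as a machine-learning problem. This is purely illustrative and is not the paper's actual pipeline: the features (pupil diameter, skin conductance, EEG alpha power), the synthetic data, and the choice of a random-forest classifier are all assumptions made for the example.

```python
# Illustrative sketch only -- NOT the study's actual method.
# Feature names (pupil diameter, skin conductance, EEG alpha power)
# are hypothetical stand-ins for the paper's psycho-physiological measures.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 300  # synthetic observation windows

# Fake biometric features, one row per time window of sensor readings.
X = rng.normal(size=(n, 3))

# Synthetic label: a task window is "difficult" (1) when a combined
# arousal signal, plus noise, crosses a threshold.
y = (X @ np.array([0.8, 0.6, -0.5]) + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Classify easy vs. difficult windows and estimate accuracy by cross-validation.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

The point of the sketch is only that accuracy figures like 64.99% or 84.38% come from evaluating a classifier of this general shape on held-out data, not that the researchers used these particular features or this particular model.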
Microsoft's researchers concluded that biometrics could predict when software errors will occur better than traditional approaches that look for defect- and bug-prone code in software metrics. They do admit, however, that a host of internal and external factors, including personality, personal life stresses and even the time of day, could throw off the biometric readings.
In a vacuum, biometric readings and predictions may give a more accurate picture of a developer's stress levels and state of mind while coding. But on top of the fact that the constant feeling of being watched and monitored could itself contribute to developer stress, placing such painstaking emphasis on catching bugs during coding discounts the importance of software testing to the software development life cycle. A Dev/Test mindset of catching errors and preventing bugs before QA testing occurs is a valuable trait for developers, but software testers are there for a reason. Their job is to anticipate where a bug will occur and to know the whole of the delivered software inside and out. Whether testers accomplish this by manual or automated means, biometric readings are not a replacement for professional knowledge and human instinct.
Not to mention, a group of 15 developers is hardly a representative sample.
(Related: How enterprises are maintaining testing quality in a Continuously Delivered world)
The researchers' conclusion states, "It is possible to use fewer sensors and still retain the ability to accurately classify task difficulty," and they hope the research will lead to predictive programming support tools. But once you start strapping sensors to a developer's forehead and skin while tracking every subtle eye movement they make, does it really matter how many sensors are used? Probably not to the developer.
Read the full Microsoft Research paper here.