I was rendered speechless when a fellow professional said, in all seriousness, that she was going to discard the majority of her regression tests because they had failed to find errors. After I recovered my composure (and my voice), I asked why she was considering such a thing, to which she confidently replied, "Well, so-and-so says tests that don't find problems aren't worthwhile."
As it happens, this crazy claim turns out to be based on the earliest and most commonly quoted definition of software testing. Published in Glenford Myers' 1977 book, The Art of Software Testing, the definition states: "The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product."
Based on this definition, I can see where my colleague and her informant got the idea that tests that find no errors have no value. I can also see why software testers might rival dentists for the highest depression and suicide rates of any profession.
Proving a Negative
Simply finding errors is an unacceptable purpose for software testing. This approach requires software testers to prove a negative: that there are no more errors to find. To demonstrate this, they would have to know how many errors there were to begin with and where those errors are. If we knew that, we would not need to test; we would simply fix the errors.
Furthermore, if you don't know how many errors exist, how do you know when you are finished testing? How can you measure your tests' effectiveness? Does this mean that as you contribute to improving the overall software development process, your effectiveness as a tester declines?
Proving the Pointless
Another reason this no-errors, no-value definition is dangerous is that it lends credence to the idea that all software errors are created equal. It presumes that finding an error, regardless of what or where it is, is valuable. This belief leads testers to invest valuable time and resources creating obscure, random, and meaningless test cases in the hope of catching the programmer out, all the while eschewing the most basic and obvious tests on the assumption that they will pass. But what if they don't?
Ironically, the true meaning of the term regression testing is to look for software functionality that used to work but no longer does; that is, the software has regressed. Yet based on Myers' definition there is no point in rerunning a test that has found no errors, so once a software function works it becomes immune from further testing. In reality, functionality that used to work and now doesn't poses the greatest risk, since it is presumably still in use; new functionality that doesn't work may be irritating, but it is probably not devastating.
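To make the distinction concrete, here is a minimal sketch of a regression test using Python's standard unittest module (the apply_discount function is hypothetical, standing in for business logic that has worked for years). The point is that these tests re-exercise behavior that has already been verified; a pass is not wasted effort but evidence the software has not regressed.

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical, long-working business logic:
    returns the price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscountRegression(unittest.TestCase):
    """Regression tests: they re-check behavior that already works.
    Their value is in confirming, after each change, that it still does."""

    def test_basic_discount_still_works(self):
        self.assertEqual(apply_discount(100.00, 10), 90.00)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_invalid_percent_still_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.00, 150)

if __name__ == "__main__":
    unittest.main()
```

Under the no-errors, no-value view, every one of these tests should be thrown away the first time it passes, which is precisely when a future change to apply_discount would break the basics unnoticed.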
Proving Progress
To give credit where credit is due, more recent authors have improved upon the no-errors, no-value definition. In Software Test Automation (1999), Mark Fewster and Dorothy Graham write that the purpose of software testing is to give increased confidence in the areas of the product that work and to document issues with the areas that do not. Notice that this wording introduces the value of establishing what does work as well as what doesn't.
Similarly, the most recent glossary of standards from the British Computer Society Specialist Interest Group in Software Testing (BCS SIGIST) defines testing as the process of exercising software to verify that it satisfies specified requirements and to detect errors. Ah, now we're getting somewhere. The concept of requirements (you know, the reason we developed the software in the first place) is finally becoming part of the definition.
I wonder how significant it is that Mr. Fewster and Ms. Graham both hail from the United Kingdom, as, of course, does the British Computer Society. Perhaps we can persuade them to colonize the software testing industry here in the United States?
While it may seem academic to obsess over how software testing is defined, the impact is highly practical. Well-meaning experts who espouse definitions that lead testers to discard passing tests are setting those testers (and their companies) up for failure. If software isn't proven to do the basics, who cares whether it fails to do the obscure?
Linda Hayes is CEO of WorkSoft Inc. She was one of the founders of AutoTester. She can be reached at linda@worksoft.com.