In my April 1999 column, “The pain of platform possibilities,” I discussed the impossibility of testing all the conceivable combinations of supported platforms. I also introduced the concept of certified vs. supported platforms as a strategy for identifying what can reasonably be tested internally and what should be supplemented with beta programs or risk mitigation strategies.
The next issue you are confronted with is what you can really test within the subset of certified platforms. Or, even if you support only one platform, what is realistic to expect in terms of test coverage? Realistic is the operative word here, because objectives like 100% test coverage are laughable in most cases.
Don’t believe me? Let’s do the math again.
Believe it
Let’s say you are testing a file transfer system that supports six platforms, either sending or receiving, and on each platform you support three operating systems and two versions of each. Within each platform you have four protocols, five file formats, minimum and maximum file sizes, encrypted or unencrypted, plus two different flavors of receipt confirmation.
Furthermore, let’s assume you have reduced the scope of the test effort by agreeing to certify only four platforms and one version of each operating system per platform, and it takes you 45 minutes to configure a single source-target platform combination and 15 minutes to actually set up, send, and receive a particular file, then check the results.
How many test cases and how much time do you need to test 100% of the possible combinations and only the valid boundaries?
I get 23,040 individual tests, which would require about 168 person-weeks if you get seven productive hours per day–and that’s a stretch. You could probably automate a lot of this, but even that takes time and effort to develop and maintain. If you start adding in equivalence classes, including positive and negative cases, the number goes off the charts.
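If you want to check that arithmetic yourself, here is a quick back-of-the-envelope calculation (a minimal Python sketch; the five-day work week is my assumption, everything else comes from the scenario above):

```python
# Back-of-the-envelope math for the certified scope described above.
source_configs = 4 * 3                     # 4 certified platforms x 3 operating systems (one version each)
target_configs = 4 * 3
platform_pairs = source_configs * target_configs   # 144 source-target combinations

options_per_pair = 4 * 5 * 2 * 2 * 2       # protocols x formats x file sizes x encryption x confirmation
total_tests = platform_pairs * options_per_pair    # 23,040 individual tests

setup_hours = platform_pairs * 45 / 60     # 45 minutes to configure each source-target pair
test_hours = total_tests * 15 / 60         # 15 minutes to set up, send, receive, and check each file
total_hours = setup_hours + test_hours     # 5,868 hours

person_weeks = total_hours / (7 * 5)       # 7 productive hours a day, assuming a 5-day week

print(f"{total_tests} tests, about {person_weeks:.0f} person-weeks")
# -> 23040 tests, about 168 person-weeks
```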
This is a fairly straightforward application. I don’t know of that many companies–in fact, let’s say none–that can afford this level of effort on a routine basis, although I sincerely hope those who are testing the software controlling weapons of mass destruction can and do.
Face it
My point is not to spread defeatism, but to simply expose the truth. Testing every possible combination of every factor is just not realistic. If you set this as your goal, or allow others to, you are doomed to fail and suffer.
Understand, I’m not against full coverage or high quality, but I am against setting unrealistic expectations that lead to disappointment, demoralization, and–ultimately–turmoil and turnover in the test department.
So the sane approach to test coverage is to devise a means of getting the most return on the time you are able to invest. This translates into reducing the most risk as opposed to achieving the most coverage.
The next question is, how?
Analyze it
Start with the most basic of all questions: Who are your customers? That is, who will be using this system, and what will they be doing with it? Most likely your customer support area can tell you this or, if not, perhaps the sales and marketing department. If you have to, review the sales contracts to see what has been sold.
For example, an electronic commerce company discovered that most of its customers were financial institutions and that the overwhelming majority of them (85%) operated mainframe platforms running OS/390 and using the SNA protocol. When it researched further, it found that a single file format and encryption option (an industry standard) accounted for 90% of all file transfers.
This is a different kind of coverage. Instead of code or feature coverage, we’ll call it “customer coverage.”
Customer coverage
Customer coverage means discovering how your software is actually used and testing it that way. The easiest way to define coverage is to build user or customer “profiles” that describe a particular customer configuration and their typical activities. These profiles might be centered around industries, geographies, or other identifiers that affect the type of customer and how they use the software.
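To make the profile idea concrete, here is one way such profiles might be captured and ranked (a minimal Python sketch; the field names, segments, and percentages are illustrative, borrowed loosely from the example above, not a prescribed format):

```python
from dataclasses import dataclass

@dataclass
class CustomerProfile:
    """One customer segment: its typical configuration and how much of the base it represents."""
    name: str
    platform: str
    protocol: str
    file_format: str
    share_of_customers: float   # fraction of the customer base that fits this profile

# Illustrative profiles, loosely modeled on the electronic commerce example above.
profiles = [
    CustomerProfile("Financial institutions", "mainframe / OS/390", "SNA",
                    "industry-standard format, encrypted", 0.85),
    CustomerProfile("Everyone else", "mixed", "mixed", "mixed", 0.15),
]

# Test the highest-coverage profiles first.
for p in sorted(profiles, key=lambda p: p.share_of_customers, reverse=True):
    print(f"{p.name}: {p.share_of_customers:.0%} of customers "
          f"({p.platform}, {p.protocol}, {p.file_format})")
```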
This approach has some interesting benefits. The first and most obvious is that it forms a natural basis for prioritization: you know what to do first and how to allocate your time and resources. It prevents you from wasting resources by trying–and failing–to test everything in every possible way. Instead, you make absolutely sure that the activities you know are critical are thoroughly tested.
The second benefit is a little more subtle. Let’s say you are running out of time to complete the test effort and it’s critical to make the release date. If you have prioritized your test effort around customer profiles, you could do a “rolling release”–ship only to those customers whose profiles have been tested. That way, it’s not all or nothing. If most of your customers fit into a particular profile, you can ship to most of them on time and only delay shipping to the minority of customers who fall outside the tested profile.
The third benefit is longer term but potentially very valuable. If you adopt this practice, it should lead to better information collection about your customers and how they use the software. Once you understand your users better, you can prioritize enhancement requests, new features, even bug fixes the same way–identify where you should allocate your development and test resources by achieving the highest rate of return in terms of lower support costs and higher customer satisfaction.
This entire approach also gives you a framework for incorporating reported problems into your test plan. Instead of just a “bug” that ends up multiplying as a new test throughout all of your test plans, you determine which customer reported it, what profile they fit into–or if you need a new profile–and add it there.
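If you wanted to mechanize that bit of bookkeeping, it might look something like this minimal sketch (the function name and the example problem are hypothetical, invented only to illustrate the idea):

```python
# Hypothetical sketch: when a customer reports a problem, attach the resulting
# regression test to that customer's profile instead of copying it everywhere.
regression_tests: dict[str, list[str]] = {}   # profile name -> regression test descriptions

def record_reported_problem(profile_name: str, test_description: str) -> None:
    """File a new regression test under the profile of the reporting customer."""
    regression_tests.setdefault(profile_name, []).append(test_description)

record_reported_problem(
    "Financial institutions",
    "Receipt confirmation lost on maximum-size encrypted transfer over SNA",
)
```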
The goal is to continue to define and refine your test process so you know what to test and why, as well as what not to test and why. This is the only way to establish a basis for measuring, managing, and improving your test effort in a realistic–and achievable–way.
Linda Hayes is CEO of WorkSoft Inc. She was one of the founders of AutoTester. She can be reached at linda@worksoft.com.