Penetration testing is as popular as ever, yet it continues to miss the mark. As a means of validating the security of an application system, it fails miserably on several counts.
I continue to find organizations that make extensive use of penetration testing as their primary means of security testing systems before they go live, or periodically while they are in production. There are myriad problems with this approach, but I'd like to address one in particular here that you likely haven't considered.
My principal gripe with penetration testing is language. I’ll explain.
Over the years, I’ve seen, reviewed, and participated in hundreds of “pen tests,” and I’ve seen security engineers neglect the issue of language over and over. That is, they fail to adapt to the language of their audience. Ironically, those same engineers can almost always cite one of Sun Tzu’s admonishments: know your enemy as you know yourself and you need not fear the outcome of a thousand battles.
Why is this such an important issue? Well, consider what the pen test report and its findings are intended to accomplish.
If the pen test is intended to provide the CIO, IT security manager, or IT manager with visibility into the system's vulnerabilities, that's one thing. But if the pen test is intended to help the software developers who wrote the application being tested go and fix their mistakes, then that's an entirely different thing.
Although these two purposes share the same goal of securing the “system,” they differ significantly in their audience. Not convinced? Consider the following scenario.
The pen test team does their test and finds numerous SQL injection defects in a Web-based application. They deliver their report and the security manager sets up a meeting with the software development team and presents the findings. The security manager delivers a message saying, “SQL injection is bad. Your software contains SQL injection flaws (see here!). Make it stop.”
A perfectly natural human response to this message is to retreat and patch the software to stop that SQL syntax from being injected into the Web application. The developers are likely to write some logic that goes like: if (SQL syntax is present in an input) disallow the input.
Then, the pen test is repeated, the problem is resolved, and everyone is happy. Right? Wrong.
The problem with this approach is that it is almost always a negative model, not a positive one. That is, the programmers will naturally be drawn to checking a “blacklist” of banned SQL syntax, and then disallowing the input. This type of negative validation can invariably be broken by a determined adversary.
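To make the weakness concrete, here is a minimal sketch of the kind of blacklist filter developers tend to write in response to the "make it stop" message. The class name, the token list, and the inputs are all illustrative, not taken from any real pen test:

```java
import java.util.List;

public class NaiveFilter {
    // A "negative model": reject input containing known-bad SQL tokens.
    // The list is incomplete by design -- no blacklist can enumerate
    // every dangerous construct, which is exactly the problem.
    private static final List<String> BLACKLIST =
            List.of("select", "insert", "drop", "--", ";");

    public static boolean isAllowed(String input) {
        String lowered = input.toLowerCase();
        for (String banned : BLACKLIST) {
            if (lowered.contains(banned)) {
                return false;
            }
        }
        return true;
    }
}
```

A classic payload such as `' OR '1'='1` sails straight through this filter, because it contains none of the banned tokens; it attacks the query with quote characters and boolean logic rather than with listed keywords. The determined adversary simply probes for whatever the list forgot.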
Now, consider this alternative approach to the same scenario. Instead of saying "SQL is bad…," our software-savvy security manager says, "our pen test team uncovered several mutable database queries in your application and were able to exploit them. Since mutable queries can by definition be altered, we'd like you to change your queries to use immutable calls. Java, for example, can do this via an API called PreparedStatement." (Implemented properly, PreparedStatement, like other forms of parameterized SQL queries in languages other than Java, stops SQL injection in its tracks.)
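The "immutable call" the security manager is asking for looks something like the sketch below. The table and column names (`users`, `id`, `email`, `name`) are illustrative assumptions; only the PreparedStatement API itself comes from the article:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserLookup {
    // The query structure is fixed as a constant; the "?" placeholder
    // marks where user-supplied data will be bound.
    static final String QUERY = "SELECT id, email FROM users WHERE name = ?";

    public static ResultSet findByName(Connection conn, String name)
            throws SQLException {
        // The statement's structure is established before the user's
        // value is bound, so the input is treated purely as data and
        // can never rewrite the SQL itself.
        PreparedStatement stmt = conn.prepareStatement(QUERY);
        stmt.setString(1, name);
        return stmt.executeQuery();
    }
}
```

Note the contrast with the blacklist: this is a positive model. Rather than trying to enumerate bad inputs, it makes the query immutable, so there is nothing for a malicious input to mutate.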
The message here means the same thing as in the first case. The difference is that the security team is giving the developers actionable guidance in language that makes sense to the developers. It is specific. It tells the developers what to do.
This approach does, however, require the security team to understand the software technology they are testing. That can be tough for many security engineers and managers, but it is nonetheless vital to accomplishing the goals of the penetration test. Consider looking for software development skills in your in-house or outsourced pen testing team!
If you want to effect change in the software you're testing, you need to speak to the software developers, and you need to speak to them in language that is meaningful to them.
That’s not to say there’s anything wrong with a pen test report that has an executive summary or even a list of findings in terms we’re all familiar with today. Vulnerability descriptions, screen shots of successful attacks, and all of these things are useful and meaningful to the security and IT management team. We want and need this information.
But if our message also needs to be sent to the software team who wrote the code we’re testing, then we need to adjust our language significantly. It’s also useful to be aware of and to make use of mechanisms that the developers use, such as bug tracking databases. Security teams possess enormous pools of vulnerability and testing data, but we often fail to get that data into the tracking tools used by the development teams.
I’m convinced of this approach, and I’m convinced it’s a direction we all need to be going. Attackers are increasingly focusing their attention on application-level vulnerabilities. We in the security field have to learn to speak to the application developers in meaningful ways. Don’t just tell them what they’re doing wrong; tell them how to do things right.