A lot of us have been looking at the recent WikiLeaks drop of Central Intelligence Agency (CIA) files related to hacking Internet of Things (IoT) and personal technology devices.
The drop includes a great deal of data, so it is easy to get lost in the weeds regarding what these tools do. But apparently they can break into devices and cast blame for it on the Russians, which I believe is problematic in a number of ways.
I think the real problem with this is the CIA’s risk assessment process.
We typically associate that kind of trade, favoring relatively small tactical benefits over relatively large strategic exposures, with public companies, not the CIA. I think this decision-process problem is far larger than just the CIA and reflects on security in general. I also think that an AI system like IBM’s Watson could, if placed in the decision process, help prevent bad decisions like this.
In my opinion, there were clearly a number of questionable decisions that led to the creation of these tools, which largely rely on vulnerabilities in U.S. companies’ products. The first bad decision was opting to exploit the vulnerabilities rather than report them so they could be corrected. It should be obvious that the national exposure from exploits like these likely exceeds the benefit of hacking any one phone. Put differently, the trade-off was between keeping our spouses, children, politicians and families safe and the ability to hack a foreign agent.
But the CIA isn’t tasked with keeping domestic citizens safe; it is tasked with gathering intelligence. The decision may have been easy because the agency traded away something it wasn’t responsible for to gain something it was.
When you realize this, you might conclude that the core of this problem lies with how the CIA is measured. This decision seems to be the direct result of training executives not to see the bigger picture.
I believe this also showcases that the CIA hasn’t adjusted to its new reality. Given the number of recent leaks, the new reality is that the CIA apparently can’t keep a secret. That means the tools they create to exploit others are likely being stolen and used against U.S. citizens — possibly including CIA operatives. This would suggest that the more prudent path, until the leaks can be decisively addressed, would be to create no more tools like this. They represent an excessive risk to the agency and the country. But that thought process does not seem to have been internalized.
Finally, with the loss of control of these tools, anyone using them could appear to be the CIA. That might allow a third party to orchestrate a hack that could potentially trigger a declaration of war from a state like North Korea, which might shoot first and ask critical questions later.
In my opinion, all of this suggests that the CIA shouldn’t be creating tools like this. Instead, it should be working with the industry to correct security exposures and keep the nation safer. It should acquire hacking tools from the outside, both to ensure it isn’t significantly making the hacking problem worse and to position itself more as a defensive than an offensive organization, at least until it can effectively address the leaks.
It seems obvious that if you can’t contain a weapon, you shouldn’t create it — unless you have a viable and ready defense that can mitigate it.
This is the real problem with the CIA leak. It has lost control over its own tools. This kind of problem, if not quickly mitigated, could lead to damage to the U.S. that could outstrip what any hostile entity could do alone. And that could be dire for the U.S. and for the CIA as an agency.
But it isn’t only the CIA that has experienced an imbalance of risk and reward. Volkswagen with its diesel scandal, Samsung with the Galaxy Note 7 and Takata with its faulty air bags all made decisions that looked good tactically but put each company at risk strategically. In every case, the tactical benefit was overwhelmed by the strategic risk.
It is my fervent hope that eventually artificial intelligence (AI) systems like IBM’s Watson will be positioned to help executives keep from making foolish decisions like this. But in the meantime, I think executives likely need regular training on balancing reasonable risk and reward (not to mention an ethics refresher course).
From a broader perspective, what I envision with Watson is a communications monitor that raises an alert when an executive appears to be doing something unwise. Alerts might say “That could be considered sexual harassment; please reconsider the wording and do not send,” “What you propose would be considered illegal in these countries and would result in an estimated cost of $X billion if caught, with a 30 percent probability of jail time,” or “That has to be the stupidest thing any executive has ever done; seriously consider working for our largest competitor, a gold-star recommendation will be in your email inbox shortly.”
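To make the idea concrete, here is a minimal sketch of such a monitor in Python. Everything in it is a hypothetical illustration of mine, not any real Watson API: the `RISK_RULES` table, the alert categories and the `check_message()` helper are stand-ins, and a production system would use trained language models rather than keyword patterns.

```python
# Minimal sketch of the "communications monitor" idea described above.
# All rules, categories and helpers here are hypothetical illustrations,
# not a real Watson API; a real system would use a trained model.

import re
from dataclasses import dataclass

@dataclass
class RiskAlert:
    category: str
    message: str

# Illustrative rule set: regex pattern -> (category, alert text).
RISK_RULES = {
    r"\b(exploit|zero.day|backdoor)\b": (
        "strategic",
        "Weaponizing an unreported vulnerability creates broad exposure; reconsider.",
    ),
    r"\b(falsify|cover up|destroy evidence)\b": (
        "legal",
        "What you propose may be illegal; estimated liability flagged for review.",
    ),
}

def check_message(text: str) -> list[RiskAlert]:
    """Return alerts for any risk patterns found in an outgoing message."""
    alerts = []
    for pattern, (category, warning) in RISK_RULES.items():
        if re.search(pattern, text, flags=re.IGNORECASE):
            alerts.append(RiskAlert(category, warning))
    return alerts

if __name__ == "__main__":
    draft = "Ship the zero-day exploit to the field team before legal reviews it."
    for alert in check_message(draft):
        print(f"[{alert.category.upper()}] {alert.message}")
```

The point of the sketch is the placement, not the pattern matching: the monitor sits in the communication path and interrupts before the message is sent, which is exactly where a decision-support system adds value.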
I agree with Ginni Rometty that Watson, properly applied, could significantly improve decision-making (or get poor decision-makers to change companies) before the next scandal occurs.