There’s been at least as much healthy skepticism about cloud computing as there has been optimism and real results. And there ought to be, especially as cloud computing moves out of buzzword territory and becomes an increasingly powerful tool for extending IT resources.
To that end, here’s a rundown of ten key things both creators and users of cloud computing should continue to bear in mind.
The good news is that the very nature of the cloud may be compelling more real thought about security – on every level – than before. The bad news is that a poorly written application can be just as insecure in the cloud, maybe even more so.
Cloud architectures don’t automatically grant security compliance for the end-user data or apps on them, and so apps written for the cloud always have to be secure on their own terms. Some of the responsibility for this does fall to cloud vendors, but the lion’s share of it is still in the lap of the application designer.
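To make that concrete, here’s a minimal sketch (Python, against a hypothetical users table) of the kind of application-level discipline the cloud won’t supply for you. The parameterized query is exactly as necessary on a rented instance as it was in your own datacenter:

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Parameterized query: user input never becomes part of the SQL text,
    # so a hostile value can't rewrite the statement (SQL injection).
    # Nothing about running "in the cloud" does this for you.
    cur = conn.execute("SELECT id, email FROM users WHERE name = ?", (username,))
    return cur.fetchone()

# The insecure version is just as insecure on EC2 as it was on-premises:
#   conn.execute("SELECT id, email FROM users WHERE name = '%s'" % username)
```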
A cloud computing-based solution shouldn’t become just another passive utility like the phone system, where the owner simply puts a tollbooth on it and charges more and more while providing less and less. In short, don’t give competitors a chance to do an end run around you because you’ve locked yourself into what seems like the best way to use the cloud and given yourself no good exit strategy. Cloud computing is constantly evolving. Getting your solution in place simply means your process of monitoring and improving can now begin.
We’re probably past the days when people thought clouds were just big server clusters, but that doesn’t mean we’re free of ignorance about the cloud moving forward. There are all too many misunderstandings about how public and private clouds (or conventional datacenters and cloud infrastructures) do and don’t work together, misunderstandings about how easy it is to move from one kind of infrastructure to another, how virtualization and cloud computing do and don’t overlap, and so on.
A good way to combat this is to present customers with real-world examples of what’s possible and why, so they can base their understanding on actual work that’s been done and not just hypotheticals where they’re left to fill in the blanks themselves.
Cloud infrastructures, like a lot of other IT innovations, don’t always happen as top-down decrees. They may happen from the bottom up, in a back room somewhere, or on an employee’s own time from his own PC.
Examples of this abound: consider a New York Times staffer’s experience with desktop cloud computing. Make a “sandbox” space within your organization for precisely this kind of experimentation, albeit with proper standards of conduct (e.g., as a safety measure, not using live data that might be proprietary). You never know how it’ll pay off.
The biggest example of this: Amazon EC2. As convenient as it is to develop for the cloud using EC2 as one of the most common types of deployments, it’s also something to be cautious of. Ad-hoc standards are a two-edged sword.
On the plus side, they bootstrap adoption: look how quickly a whole culture of cloud computing has sprung up around EC2. On the minus side, they leave that much less room for innovators to create something open that can break away from the ad-hoc standard and be adopted on its own terms. (Will the Kindle still be around in ten years?) Always be mindful of how the standards you’re using now can be expanded or abandoned.
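One common hedge, sketched below in Python with hypothetical class names, is to keep a thin interface of your own between the application and any one vendor’s API, so that abandoning the standard later means rewriting an adapter rather than the application:

```python
from abc import ABC, abstractmethod

class BlobStore(ABC):
    """The application's storage contract -- owned by you, not by a vendor."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryBlobStore(BlobStore):
    """Local stand-in; an S3-backed (or any other vendor-backed) class would
    implement the same two methods, and the rest of the code never changes."""
    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data

    def get(self, key: str) -> bytes:
        return self._data[key]
```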
Few things are more annoying to customers than promising something you can’t deliver. The bad news is that in many industries, that’s how things work: overbooking on airlines, for instance.
It might also become like that for cloud providers, who may be forced to sell more capacity than they can actually provide as a way to stay competitive with … well, everyone else doing the same thing. Reuven Cohen of Enomaly has speculated that Amazon EC2 might be doing this right now. With any luck they’re not doing it in lieu of better testing and saner quota allotments.
Testing should always be standard practice. Robust, creative, think-out-of-the-box testing doubly so. Consider the way MySpace used 800 EC2 instances to load-test its own infrastructure and see whether it could meet anticipated demand for a new streaming music service. That example involved using the cloud to test native infrastructure, but there’s no reason one couldn’t use one cloud to generate test demand for another and determine what your real needs are. And not just once, but again and again.
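Scaled way down, the idea looks something like the following sketch (Python, with a hypothetical staging URL and arbitrary numbers). Run it from many rented instances at once and you have synthetic demand against whatever infrastructure you actually need to size:

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "https://staging.example.com/stream"  # hypothetical endpoint under test
WORKERS = 50                                    # concurrent clients on this node
REQUESTS_PER_WORKER = 100

def hammer(_worker_id: int) -> list[float]:
    """Issue a batch of requests and record how long each one took."""
    latencies = []
    for _ in range(REQUESTS_PER_WORKER):
        start = time.monotonic()
        try:
            urllib.request.urlopen(TARGET, timeout=10).read()
        except OSError:
            continue  # count only completed requests
        latencies.append(time.monotonic() - start)
    return latencies

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        results = [lat for batch in pool.map(hammer, range(WORKERS)) for lat in batch]
    print(f"{len(results)} requests completed, mean latency "
          f"{sum(results) / max(len(results), 1):.3f}s")
```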
Just as over-utilization is both bad planning and bad business, so is under-utilization. In fact, having a good deal of idle capacity you’re paying to support and not generating revenue from may well be worse than the opposite scenario.
This sort of thing’s easier to deal with if you’re the one buying the service, but what if you’re the one selling it? That’s another reason why metrics and robust load testing are your best friends when creating cloud services. Also consider the possibility you’re not selling enough kinds of services: is there room in your business plan for more granular, better-tiered service that might draw in a wider array of customers?
One word: IPv6. If you’re deploying systems, using infrastructure or writing applications that aren’t IPv6-aware now, you’re building a time bomb under your chair.
IPv4’s days are more numbered than ever, and tricks like NAT or freeing up previously-unallocated blocks aren’t going to buy enough time to get us through the decade. Cloud computing, with its world of hosts that can appear by the thousands at once, will all but guarantee we need IPv6’s address pool and technical flexibility.
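Being IPv6-aware is often as simple as not hard-coding the address family. A minimal sketch in Python: getaddrinfo hands back whatever address families the host actually has, so the same code works on v4-only, v6-only, and dual-stack networks:

```python
import socket

def connect(host: str, port: int) -> socket.socket:
    """Open a TCP connection without assuming IPv4."""
    last_err = None
    # getaddrinfo returns AAAA and/or A results in the system's preferred order.
    for family, socktype, proto, _canon, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, socktype, proto)
            s.connect(addr)
            return s
        except OSError as err:
            last_err = err  # try the next address family/address
    raise last_err or OSError(f"could not connect to {host}:{port}")
```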
Think forward on every level, and encourage everyone building on top of your infrastructures to do the same thing.
Latency has always been an issue on the Internet; just ask your local World of Warcraft raiding guild. It’s just as much of an issue in the cloud.
Performance within the cloud doesn’t mean much if it takes forever for the results of that performance to show up on the client. The latency that a cloud can introduce doesn’t have to be deadly, and can be beaten back with both an intelligently planned infrastructure and smartly-written applications that understand where and how they’re running.
Also, cloud-based apps – and the capacity of cloud computing itself – are only going to be ramped up, not down, in the future. That means an arms race against increases in latency is in the offing as well. Just as the desktop PC’s biggest bottlenecks are more often storage and memory than CPU, cloud latency has to be traced to its true source – the network path, storage I/O, or the application itself – and attacked there.
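A first step is simply measuring where the time goes. Here’s a minimal sketch (Python, with the “real work” as a placeholder) that separates network round-trip time from local processing time, so you know which of the two you’re actually fighting:

```python
import time
import urllib.request

def timed_fetch(url: str) -> tuple[float, float]:
    """Return (network_seconds, compute_seconds) for one request."""
    start = time.monotonic()
    body = urllib.request.urlopen(url, timeout=10).read()
    network = time.monotonic() - start

    start = time.monotonic()
    # Placeholder for whatever processing the app really does with the response.
    _ = body.decode("utf-8", errors="replace").count("\n")
    compute = time.monotonic() - start
    return network, compute
```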
The cloud isn’t an endpoint in tech evolution, any more than the PC or the commodity server were final destinations. Something’s going to come after the cloud, and may well eclipse it or render it redundant. The point isn’t to speculate about what might come next, but rather to remain vigilant to change in the abstract. As the sages say, the only certainty is uncertainty, and the only constant thing is the next big thing.