last updated: May 6, 2011
Catalyst for Strategy
some light commentary on cloud computing
I Hate the New Words
How exactly salespeople managed to take the cartoon network clouds from diagrams the world over,
pass them off to businesses as some sort of explanation for anything, and have it
stick this long escapes me.
Nevertheless, renting computing power is back in vogue, and this time there are actually a
few new variables in the picture.
Recently, I was shown a very impressive demonstration of Amazon's EC2.
This is certainly not something to be ignored.
Ignoring it really does risk slipping a generation behind in the blink of an eye.
Still, there are far more reasons for skepticism than the obvious security concerns
of pedestrian analysis.
What Are We Talking About?
Time-sharing is one of the oldest business models in the industry.
The three main variations on a theme we are seeing can be lumped together as hosted applications, hosted storage, and hosted generic computing. At first glance this is still nothing really that groundbreaking. We've all seen this a thousand times, no?
If anything can be called new about the latest re-branding as “cloud” services, it is that big players have finally launched offerings aimed at establishing themselves as the first provider to successfully put all the pieces together.
This was not a case of discovering plutonium. High-density virtualized server farms have been paving a road back to the computing-as-a-utility ideas of yesteryear for some time now.
Remember the first time you saw Google? Probably not, because it took a while to realize just how great the service actually was. That's what it's like when you first really see something like EC2: a Google moment. It takes a little while before you realize what just happened. Anyone can go to a web page, click around for a few minutes, and literally have command of a massive array of geographically distributed computing power. Any number of server instances of various sizes, complete with operating systems, IP addresses, and much of the base configuration already done, are switched on as easily as pumping gas. All the routing configuration, DNS entries, SAN allocation, and the first few hours of updates are simply done before one even thinks about it. All at competitive and scalable pricing.
No buildings were built. No rooms were re-purposed. No one called an electrician or a connectivity sales rep. There were no hardware quotes. There were no lease terms. There were no sales taxes, freight containers, or even UPS deliveries. At no point were batteries, generators, air conditioning, or door access controls even thought about. No one was even given a tour of a colocation facility.
On top of that, we can back out of our deployments as fast as we jumped in. For example, needing a quarter million dollars' worth of equipment for only thirty minutes a day starts to become an explorable option.
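As a rough sketch of what that looks like when driven programmatically (here with the boto library for EC2; the machine image ID, key pair name, and instance counts are placeholder values, not recommendations):

    # A minimal sketch of programmatic provisioning with boto (Python).
    # The AMI ID, key name, and counts below are placeholder values.
    import boto.ec2

    # Credentials come from the environment or a boto config file.
    conn = boto.ec2.connect_to_region("us-east-1")

    # Switch on ten identical servers, base OS image and all.
    reservation = conn.run_instances(
        "ami-00000000",           # placeholder machine image
        min_count=10,
        max_count=10,
        instance_type="m1.small",
        key_name="my-keypair",    # placeholder SSH key pair
    )

    # ... run the workload for its thirty minutes ...

    # Then back out of the deployment as fast as we jumped in;
    # the meter stops when the instances do.
    conn.terminate_instances([i.id for i in reservation.instances])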
See - that is when it hits you. The very idea of doing all the traditional server monkey tasks in-house starts to become as absurd as drilling a natural gas well in your back yard to fuel a stove so you can cook dinner.
Not that all of those jobs will be consolidated into computing-provider sweatshops. There will still be site infrastructure and client support. If we just panic fast enough, some of us might be able to outrun the ticking clock. Right? Well, that is the vision at least. Fatalistic for some, quixotic for others, it's a sales pitch that might seem nostalgic to most.
Some Things Never Change
In the abstract, what does all this computer junk actually provide that
can be commoditized, metered, and competed for?
Connectivity was one of the first components of that notion.
Colocation was, among other things, an example of consolidating connectivity charges for better wholesale rates.
The aggregated connectivity story, however, certainly didn't end with centralized facilities being the only ones with great connections.
While I believe that in and of itself foreshadows the future epics of “compute and storage factories”
a great deal, telecommunication of course has more to say about cloud computing.
Primarily that the cost, availability, and reliability, not to mention the bandwidth and latency,
of most of the world's connectivity are many years, if not decades, away from allowing all back-end
storage and processing to be moved off site, off residence, off handheld, and out of mind,
even if that were desired.
Market Forces
We are told that fewer and fewer people will ever physically maintain real computers. Large enterprises and government operations in particular, those not shopping out server-side loads, will find themselves reforming many “IT roles” into their own internal versions of cloud services. While in so many ways that does feel logical, it also seems to ignore the fact that every man, woman, and child these days has about 15 computer-like devices in their lives. How can hardware costs be decreasing so asymptotically, yet be becoming so expensive that no one will run their own servers anymore?
The reason is that these are not contradictory assertions.
In other words, while demand has created enough business to allow competition to force prices down, the increasingly affordable supply has helped fuel the boundless demand.
From the beginning, computing costs have always been decreasing, while computing spending has been ceaselessly expanding.
As a counterpoint, are these growth rates permanent? While it is true, as the virtualization technologists point out, that most CPU time is wasted idling, is it also true that still more CPU is needed?
In the last decade, has there actually ever been a computing shortage?
Few would disagree that there is a shortage of quality. Perhaps that is all that is needed for competing ideas to thrive.
A diptych of the industry looks something like this. On one side, the cloud model says that from personnel and electrical costs to the liabilities of physical equipment and staff competency, the more computing needs there are, the harder it will be to justify in-house expenditures over outsourced commodity compute time and storage. In theory, a specialized utility with expertly crafted schemes to squeeze every last drop of CPU time from the most current, most cost-effective hardware would be able to manage input costs far better than the average enterprise.
As a child I remember being shown a room with racks sporting “vintage” tape reels behind a glass partition. Then I was told how a little beige box would soon effortlessly replace the entire room. I see this same picture on the opposite side of the hinge. While three rooms might be replaced with three ultra-dense rack spaces in some cloud factory, given time the whole cloud factory will fit in your hand.
Technology
A whole host of technical debates get reignited by the notion of metered computing costs. If, for example, as with Amazon's EC2, we are going to pay per hour of instance run time, wouldn't it be preferable to have that computing done in optimized C rather than, say, a shell script? If I can host the same number of users with half the instances on a back-end service by using native code, then those technologies have greatly increased value in such an environment. Consequently, such an environment also makes throwing ever more hardware at software problems that much easier to click through.
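A back-of-the-envelope sketch makes the stakes concrete; every figure below is a made-up illustration, not a real price:

    # Back-of-the-envelope instance-hour arithmetic (Python).
    # Every figure here is a made-up illustrative number.
    RATE_PER_INSTANCE_HOUR = 0.085    # hypothetical hourly rate, USD
    HOURS_PER_MONTH = 24 * 30

    def monthly_cost(total_users, users_per_instance):
        # Instances needed, rounded up to whole machines.
        instances = -(-total_users // users_per_instance)
        return instances * HOURS_PER_MONTH * RATE_PER_INSTANCE_HOUR

    # If native code serves twice the users per instance that a
    # scripted implementation does, the metered bill halves:
    print(monthly_cost(10000, 250))   # scripted: 40 instances, $2448/mo
    print(monthly_cost(10000, 500))   # native:   20 instances, $1224/mo

Under flat-rate in-house hardware that difference was mostly invisible; under per-hour metering it shows up on every invoice.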
There is also plenty of room for competing ideas on the cloud technology itself. Unix and other multi-user systems already have process accounting facilities, because that's how they once created invoices for this sort of thing! Virtualized OS images introduce serious overhead. What if a competitor offered you a better rate to just run your server programs in a regular user account? At a certain scale that sort of edge could justify anything.
One hosted application that overlaps with hosted storage is the hosted database. This is an area of immense promise for new strategies. Right now, almost all database technology is designed around the limitations of hard drives and other storage media. Not only do write operations require the occasional wait for data to arrive “on platter” for integrity, but transaction logs and write journals often require multiple such waits in sequence. Where data can be written over the Internet or another network faster than these disk-write waits complete, integrity can be achieved by the data simply making its way to more than one storage node in a cloud.
Using multiple distributed “always on” storage nodes approaches the same legitimacy as writing to multiple “always working” hard drives in a RAID configuration.
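A minimal sketch of that idea, with hypothetical node addresses and a stubbed-out network transport: the write returns as soon as a majority of storage nodes acknowledge, with no local platter wait anywhere in the path:

    # Sketch of a quorum-style replicated write (Python): integrity
    # comes from the record reaching a majority of independent nodes,
    # not from waiting on a local disk platter. The node list and
    # send_to_node() transport are hypothetical stand-ins.
    import time
    from concurrent.futures import ThreadPoolExecutor, as_completed

    NODES = ["node-a:7000", "node-b:7000", "node-c:7000"]  # placeholders
    QUORUM = len(NODES) // 2 + 1    # a majority: 2 of 3

    def send_to_node(node, record):
        # Stand-in for a real network write; pretend each node
        # acknowledges after a short round trip.
        time.sleep(0.002)
        return True

    def replicated_write(record):
        acks = 0
        with ThreadPoolExecutor(len(NODES)) as pool:
            futures = [pool.submit(send_to_node, n, record) for n in NODES]
            for f in as_completed(futures):
                if f.result():
                    acks += 1
                if acks >= QUORUM:
                    return True   # on a majority of nodes: call it durable
        return False              # quorum never reached; report failure

    replicated_write(b"account=42,balance=100")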
Advice
For those in the profession, the career strategy should remain as obvious as it has always been. Don't tie yourself too rigidly to a particular technology, style, or mindset. Learn all you can about as many things as you can. Draw from the past, present, and future to help bring your ideas to fruition. Stay excited, not naïve. Everything is always just a sales pitch.
© 2011 C. Thomas Stover
cts at techdeviancy.com