
Archive for the ‘cloud market’ Category

The enterprise "barrier-to-exit" to cloud computing

December 2, 2008

An interesting discussion ensued on Twitter this weekend between George Reese of Valtira and me. George–who recently published some thought-provoking posts on O’Reilly Broadcast about cloud security, and is writing a book on cloud computing–argued strongly that the benefits gained from moving to the cloud outweigh any additional costs that might ensue. In fact, in one tweet he noted:

IT is a barrier to getting things done for most businesses; the Cloud reduces or eliminates that barrier.

I reacted strongly to that statement; I don’t buy that IT is that bad in all cases (though some certainly is), nor do I buy that simply eliminating a barrier to getting something done makes the thing worthwhile. Besides, the barrier being removed isn’t strictly financial; it is corporate IT policy. I can build a kick-butt home entertainment system for my house for $50,000; that doesn’t mean it’s the right thing to do.

However, as the conversation unfolded, it became clear that George and I were coming at the problem from two different angles. George was talking about many SMB organizations, which really can’t justify the cost of building their own IT infrastructure, but have been faced with a choice of doing just that, turning to (expensive and often rigid) managed hosting, or putting a server in a colo space somewhere (and maintaining that server). Not very happy choices.

Enter the cloud. Now these same businesses can simply grab capacity on demand, start and stop billing at their leisure, and get truly world-class power, virtualization and networking infrastructure without having to put an ounce of thought into it. Yeah, it costs more than simply running a server would, but once you add in the infrastructure costs, managed hosting fees and colo leases, the cloud almost always looks like the better deal. At least that’s what George claims his numbers show, and I’m willing to accept that. It makes sense to me.

I, on the other hand, was thinking of medium to large enterprises that already own significant data center infrastructure, and already have sunk costs in power, cooling and assorted infrastructure. For this class of business, those sunk costs must be added to server acquisition and operation costs when weighing them against the cost of getting the same services from the cloud. These investments often tip the balance, and it becomes much cheaper to use existing infrastructure (though with some automation) to deliver fixed-capacity loads. As I discussed recently, the cloud generally only gets interesting for loads that are not running 24X7.
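To put a rough number on that 24X7 point, here is a minimal back-of-the-envelope sketch in Python. The dollar figures are assumptions I have chosen for illustration (they roughly echo the per-server numbers in the Citrix example discussed in the next post), not anyone’s actual price list; the point is only that once capacity is owned, pay-as-you-go wins only below some break-even number of hours per year.

    # Back-of-the-envelope break-even: owned server vs. on-demand cloud capacity.
    # All dollar figures are illustrative assumptions, not quoted prices.

    OWNED_ANNUAL_COST = 1560.0   # assumed all-in cost per owned server per year
                                 # (amortized hardware + power + cooling + space)
    CLOUD_HOURLY_RATE = 0.80     # assumed cost per equivalent cloud instance-hour

    HOURS_PER_YEAR = 24 * 365    # 8,760

    # Hours per year at which renting exactly matches owning.
    break_even_hours = OWNED_ANNUAL_COST / CLOUD_HOURLY_RATE
    break_even_utilization = break_even_hours / HOURS_PER_YEAR

    print(f"Break-even: {break_even_hours:.0f} hours/year "
          f"({break_even_utilization:.0%} utilization)")
    # Below that utilization the cloud is cheaper; above it, the already-paid-for
    # on-premises capacity wins -- which is the "barrier-to-exit" in a nutshell.

With these assumed numbers the break-even lands around 1,950 hours a year, or roughly 22% utilization; a workload that runs flat out all year blows well past it.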

(George actually notes a class of applications that sadly are also good candidates, though they shouldn’t necessarily be: applications that IT just can’t or won’t get to on behalf of a business unit. George claims his business makes good money meeting the needs of marketing organizations that have this problem. Just make sure the ROI is really worth it before taking this option, however.)

This existing investment in infrastructure therefore acts almost as a “barrier-to-exit” for these enterprises when they consider moving to the cloud. It seems to me highly ironic, and perhaps somewhat unique, that certain trails in the cloud computing market will be blazed not by organizations with multiple data centers and thousands upon thousands of servers, but by the little mom-and-pop shop that used to own a couple of servers in a colo somewhere, finally shut them down, and turned to Amazon. How cool is that?

The good news, as I hinted at earlier, is that there is technology that can be rationalized financially–through capital equipment and energy savings–and that can in turn “grease the skids” for cloud adoption in the future. Ask the guys at 3tera. They’ll tell you that their cloud infrastructure allows an enterprise to optimize infrastructure usage while enabling workload portability (though not portability of running workloads) between cloud providers running their stuff. VMWare introduced their vCloud initiative specifically to make enterprises aware of the work they are doing to allow workload portability across data centers running their stuff. Cisco (my employer) is addressing the problem as well. In fact, there are several great products out there that can give you cloud technology in your enterprise data center and open the door to cloud adoption now (with things like cloudbursting) and in the future.

If you aren’t considering how to “cloud enable” your entire infrastructure today, you ought to be getting nervous. Your competitors probably are looking closely at these technologies, and when the time is right, their barrier-to-exit will be lower than yours. Then, the true costs of moving an existing data center infrastructure to the cloud will become painfully obvious.

Many thanks to George for the excellent discussion. Twitter is becoming a great venue for cloud discussions.

Do Your Cloud Applications Need To Be Elastic?

November 22, 2008

I got to spend a few hours at Sys-Con’s Cloud Computing Expo yesterday, and I have to say it was most certainly an intellectually stimulating day. Not only was just about every US cloud startup represented in one way or another, but the day also included an unusual conference session and a meetup of CloudCamp fans.

While listening in on a session, I overheard one participant ask how the cloud would scale their application if they couldn’t replicate it. This triggered a strong response in me, as I really feel for those that confuse autonomic infrastructures with magic applied to scaling unscalable applications. Let me be clear, the cloud can’t scale your application (much, at least) if you didn’t design it to be scaled. Period.

However, that caused me to ask myself whether or not an application had to be horizontally scalable in order to gain economically while running in an Infrastructure as a Service (IaaS) cloud. The answer, I think, is that it depends.

Chris Fleck of Citrix wrote up a pretty decent two-part explanation of this on his blog a few weeks ago. He starts out with some basic costs of acquiring and running 5 quad-core servers–either on-premises (amortized over 3 years at 5%) or in a colocation data center–against the cost of running equivalent “high CPU” servers 24X7 on Amazon’s EC2. The short of his initial post is that it is much more expensive to run full time on EC2 than it is to run on premises or in the colo facility.

How much more expensive?

  • On-premises: $7800/year
  • Colocation: $13,800/year
  • Amazon EC2: $35,040/year

I tend to believe this reflects the truth, even if it’s not 100% accurate. First, while you may think “ah, Amazon…that’s 10¢ a CPU hour”, in point of fact most production applications that you read about in the cloud-o-sphere are using the larger instances. Chris is right to use high-CPU instances in his comparison at 80¢/CPU hour. Second, while it’s tempting to think in terms of upfront costs, your accounting department will in fact spread the capital costs out over several years, usually 3 years for a server.

In the second part of his analysis, however, Chris notes that the cost of the same Amazon instances varies with the amount of time they are actually used, as opposed to the physical infrastructure that must be paid for whether it is used or not (with the possible exception of power and AC costs). This comes into play in a big way if the same instances are used judiciously for varying workloads, such as the hybrid fixed/cloud approach he uses as an example.

In other words, if you have an elastic load and plan for “standard” variances on-premises, but allow “excessive” spikes in load to trigger instances on EC2, you suddenly have a very compelling case relative to buying enough physical infrastructure to handle those excessive peaks yourself. As Chris notes:

“To put some simple numbers to it based on the original example, let’s assume that the constant workload is roughly equal to 5 Quadcore server capacity. The variable workload on the other hand peaks at 160% of the base requirement, however it is required only about 400 hours per year, which could translate to 12 hours a day for the month of December or 33 hours per month for peak loads such as test or batch loads. The cost for a premise only solution for this situation comes to roughly 2X or $ 15,600 per year assuming existing space and a 20% factor of safety above peak load. If on the other hand you were able to utilize a Cloud for only the peak loads the incremental cost would be only $1,000. ( Based on Amazon EC2 )

Premise Only
$ 15,600 Annual cost ( 2 x 7,800 from Part 1 )
Premise Plus Cloud
$ 7,800 Annual cost from Part 1
$ 1,000 Cloud EC2 – ( 400 x .8 x 3 )
$ 8,800 Annual Cost Premise Plus Cloud “
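To make the arithmetic in that quote easy to check, here is a minimal sketch in Python using only the figures from the example itself (5 servers at roughly $7,800/year on-premises, high-CPU EC2 instances at $0.80/hour, and a 160% peak needed about 400 hours a year); nothing here is a current price quote.

    # Reproducing the "premises only" vs. "premises plus cloud" arithmetic above.
    # All figures come from the quoted example, not from any current price list.

    HOURS_PER_YEAR = 24 * 365          # 8,760
    EC2_HIGH_CPU_RATE = 0.80           # $/instance-hour used in the example
    BASE_SERVERS = 5                   # constant workload ~ 5 quad-core servers
    ON_PREM_ANNUAL = 7_800             # annual cost of those 5 servers on-premises

    # Running the same 5 instances full time in EC2:
    ec2_full_time = BASE_SERVERS * EC2_HIGH_CPU_RATE * HOURS_PER_YEAR
    print(f"EC2, 24x7:            ${ec2_full_time:,.0f}/year")       # $35,040

    # The peak is 160% of base, i.e. ~3 extra instances, needed ~400 hours/year.
    extra_instances = 3
    peak_hours = 400
    burst_cost = extra_instances * EC2_HIGH_CPU_RATE * peak_hours
    print(f"Cloud burst only:     ${burst_cost:,.0f}/year")           # $960

    # Premises sized for the peak vs. premises for the base plus cloud for spikes:
    premises_only_for_peak = 2 * ON_PREM_ANNUAL                       # $15,600
    premises_plus_cloud = ON_PREM_ANNUAL + burst_cost
    print(f"Premises only (peak): ${premises_only_for_peak:,.0f}/year")
    print(f"Premises plus cloud:  ${premises_plus_cloud:,.0f}/year")

The quote rounds the burst cost up to $1,000 and the hybrid total to $8,800; the shape of the conclusion does not change.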

The lesson of our story? Using the cloud makes the most sense when you have an elastic load. I would postulate that another option would be a load that is not powered on at full strength 100% of the time. Some examples might include:

  • Dev/test lab server instances
  • Scale-out applications, especially web application architectures
  • Seasonal load applications, such as personal income tax processing systems or retail accounting systems

On the other hand, you probably would not use Infrastructure as a Service today for:

  • That little accounting application that has to run at all times, but has at most 20 concurrent users
  • The MS Exchange server for your 10 person company. (Microsoft’s multi-tenant Exchange online offering is different–I’m talking hosting your own instance in EC2)
  • Your network monitoring infrastructure

Now, the managed hosting guys are probably going to jump down my throat with counterarguments about the level of service provided by (at least their) hosting clouds, but my experience is that all of these clouds actually treat self-service as self-service, and that there really is very little difference between do-it-yourself on-premises and do-it-yourself in the cloud.

What would change these economics to the point that it would make sense to run any or all of your applications in an IaaS cloud? Well, I personally think you need to see a real commodity market for compute and storage capacity before you see pricing that tips the economics in favor of running fixed loads in the cloud. There has been a wide variety of posts about what it would take [pdf] to establish a cloud market, so I won’t go back over that subject here. However, if you are considering “moving my data center to the cloud”, please keep these simple economics in mind.

Salesforce.com Announces They Mean Business

November 5, 2008

I had some business to take care of in downtown San Francisco this morning, and on my way to my destination, I strolled past Moscone Center, the site of this year’s Dreamforce conference. The news coming out of that conference had piqued my interest a day earlier–I’ll get to that in a minute–but when I saw the graphics and catch phrase of the conference, I had to laugh. Not in mockery, mind you; it was just ironic.

There, spanning the vast entrances of both Moscone North and South was nothing but blue skies and fluffy white…wait for it…clouds. In other words, the single theme of the conference visuals was, I can only assume, cloud computing. Not CRM, not “making your business better”, but an implementation mechanism; a way of doing IT. That’s the irony, in my mind; that in this amazing month or so of cloud computing history, one of the companies most aggressively associating themselves with cloud computing was a CRM company, not a compute capacity or storage provider.

Except, Salesforce.com was already blurring the lines between PaaS and SaaS, even as they open the door to their partners and customers taking advantage of IaaS where it makes sense. Even before Marc Benioff’s keynote yesterday, it was clear that force.com was far more than a way to simply customize the core CRM offering. Granted, most applications launched there took advantage of Salesforce.com data or services in one way or another, but there was clear evidence that the SF gang were targeting a PaaS platform that stood alone, even as it provided the easiest way to draw customers into the CRM application.

The core of the new announcement, Sites, appears to simply be an extension of this. The idea behind Sites is to provide a web site framework that allows customers to address both intranet and Internet applications without needing to run any infrastructure on-premises. Of course, if you find the built-in SF integration makes adopting the CRM platform easier, then SF would be happy to help. Their goal, you see, is summed up in the conference catch phrase: “The End of Software”. (Of course, let’s just ignore the fact that force.com is a software development platform, any way you cut it.)

Skeptical that you can get what you need from a single PaaS offering? Here’s where the genius part of the day’s announcements comes in: simply utilize Amazon for the computing and storage needs that force.com was unable to provide. Heck, yeah.

Allow me to observe something important here. First, note that Salesforce does not have an existing packaged software model; thus, there is no incentive whatsoever to offer an on-premises alternative. Touché, Microsoft. Second, note that Salesforce.com has no problem whatsoever with partnering with someone who does something better than they do. En garde, Google. Finally, pay attention to the fact that Salesforce.com is expanding its business offerings in a way that serves existing customers in increasingly powerful ways while inviting new, non-CRM customers to use productive tools that just happen to include integration with the core offering. PaaS as a marketing hook, not necessarily a business model in and of itself. (If it succeeds on its own, that’s icing on the cake.)

In a three week period that has seen some of the most revolutionary cloud computing announcements, Salesforce.com managed to not only keep themselves relevant, but further managed to make a grab for significant cloud mindshare. Fluffy, white, cloud mindshare.

Is Amazon in Danger of Becoming the Walmart of the Cloud?

October 25, 2008

Update: Serious misspelling of Walmart throughout the post initially. If you are going to lean an argument heavily on the controversial actions of any entity, spell their name right. Mea culpa. Thanks to Thorsten von Eicken for the heads up.

Also, check out Thorsten’s comment below. Perhaps all is not as bleak as I paint it here for established partners…I’m not entirely convinced this is true for the smaller independent projects, however.


I grew up in the great state of Iowa. After attending college in St. Paul, Minnesota, I returned to my home state where I worked as a computer support technician for Cornell College, a small liberal arts college in Mount Vernon, Iowa. It was a great gig, with plenty of funny stories. Ask me over drinks sometime.

While I was in Mount Vernon, there was a great controversy brewing–well, nationwide, really–among the rural towns and farm villages struggling to survive. You see, the tradition of the family farm was being devastated, and local downtowns were disappearing. Amidst this traumatic upheaval appeared a great beast, threatening to suck what little life was left out of small-town retail businesses.

The threat, in one word, was Walmart.

Walmart is, and was, a brilliant company, and their success in retail is astounding. In a Walmart, one can find almost any household item one needs under a single roof, including in many cases groceries and other basic staples. Their purchasing power drives prices so low that there is almost no way they can be undercut. If you have a Walmart in your area, you might find it the most logical place to go for just about anything you need for your home.

That, though, was the problem in rural America. If a Walmart showed up in your area, all the local household goods stores, clothing stores, electronics stores and so on were instantly the higher-price, lower-selection option. Mom and Pop just couldn’t compete, and downtown businesses disappeared almost overnight. The great lifestyle that rural Americans led with such pride was an innocent bystander to the pursuit of volume discounts.

Many of the farm towns in Iowa were on the lookout then, circa 1990, for any sign that Walmart might be moving in. (They still are, I guess.) When a store was proposed just outside of Cedar Rapids, on the road to Mount Vernon, all heck broke loose. There was strong lobbying on both sides, and businesses went on a media campaign to paint Walmart as a community killer. The local business community remained in conflict and turmoil for years on end while the store’s location and development were negotiated.

(The concern about Walmart stores in the countryside is controversial. I will concede that not everyone objects to their building stores in rural areas. However, all of the retailers I knew in Mount Vernon did.)

If I remember correctly, Walmart backed off, but it’s been a long time. (Even now, they haven’t given up entirely.)

While I admire Amazon and the Amazon Web Services team immensely, I worry that their quest to be the ultimate cloud computing provider might force them into a similar role on the Internet that Walmart played in rural America. As they pursue the drive to bring more and better functionality to those that buy their capacity, the one-time book retailer is finding themselves adding more and more features, expanding their coverage farther and farther afield from just core storage, network and compute capacity–pushing into the market territory of entrepreneurs who seized the opportunity to earn an income off the AWS community.

This week, Amazon may have crossed an invisible line.

With the announcement that they are adding not just a monitoring API, not just a monitoring console, but an actual interactive management user interface, along with load balancing and automated scaling services, Amazon is for the first time creeping into territory held firmly by the partners that have benefited from, and added to, Amazon’s amazing story. The Sun is expanding into the path of its satellites, so to speak.

The list of the potentially endangered includes innovative little projects like ElasticFox, Ylastic and Firefox S3, as well as major cloud players such as RightScale, Hyperic and EUCALYPTUS. These guys cut their teeth on Amazon’s platform, and have built decent businesses/projects serving the AWS community.

Not that they all go away, mind you. RightScale and Hyperic, for example, support multiple clouds, and can even provide their services across disparate clouds. EUCALYPTUS was designed with multiple cloud simulations in mind. Furthermore, exactly what Amazon will and won’t do for these erstwhile partners remains unclear. It’s possible that this may work out well for everyone involved. Not likely, in my opinion, but possible.

Sure, these small shops can stay in business, but they now have to watch Amazon with a wary eye (if they weren’t already doing that). There is no doubt that their market has been penetrated, and they have to be concerned about Amazon doing to them what Microsoft did to Netscape.

Or Walmart did to rural America.

Amazon Enhances "The Proto-Cloud"

October 23, 2008

Big news today, as you’ve probably already seen. Amazon has announced a series of steps to greatly enhance the “production” nature of its already leading edge cloud computing services, including (quoted directly from Jeff Barr’s post on the AWS blog):

  • Linux on Amazon EC2 is now in full production. The beta label is gone.
  • There’s now an SLA (Service Level Agreement) for EC2.
  • Microsoft Windows is now available in beta form on EC2.
  • Microsoft SQL Server is now available in beta form on EC2.
  • We plan to release an interactive AWS management console.
  • We plan to release new load balancing, automatic scaling, and cloud monitoring services.

There is some great coverage of the announcement already in the blog-o-sphere, so I won’t repeat the basics here. Suffice to say:

  • Removing the beta label removes a barrier to S3/EC2 adoption for the most conservative of organizations.
  • The SLA is interestingly organized to allow for pockets of outages while promoting global uptime. Make no mistake, though, some automation is required to make sure your systems find the working Amazon infrastructure when specific Availability Zones fail (a rough sketch of that kind of automation follows this list).
  • Oh, wait, they took care of that as well…along with automatic scaling and load balancing.
  • Microsoft is quickly becoming a first-class player in AWS, which removes yet another barrier for M$FT-happy organizations.
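For what it’s worth, here is a rough sketch in Python of the kind of failover automation I mean. The launch and health-check functions are stand-ins I made up, not Amazon’s actual API; the point is simply the control flow of walking the Availability Zone list until a launch succeeds and passes a health check.

    # Hypothetical sketch: keep trying Availability Zones until one works.
    # launch_instance() and is_healthy() are made-up stand-ins for real EC2 tooling.
    import random

    AVAILABILITY_ZONES = ["us-east-1a", "us-east-1b", "us-east-1c"]

    def launch_instance(image_id, zone):
        """Stand-in for a real launch call; here it simply fails in a 'down' zone."""
        if zone == "us-east-1a":               # pretend this zone is having an outage
            raise RuntimeError("insufficient capacity")
        return f"i-{random.randrange(16**8):08x}"

    def is_healthy(instance_id):
        """Stand-in for a real health check (e.g. polling your app's status URL)."""
        return True

    def launch_with_failover(image_id, zones=AVAILABILITY_ZONES):
        """Walk the zone list until a launch succeeds and the instance looks healthy."""
        for zone in zones:
            try:
                instance = launch_instance(image_id, zone)
            except RuntimeError as error:
                print(f"{zone}: launch failed ({error}); trying next zone")
                continue
            if is_healthy(instance):
                print(f"{zone}: instance {instance} is up")
                return zone, instance
        raise RuntimeError("no Availability Zone could satisfy the request")

    launch_with_failover("ami-12345678")

Real automation would also have to re-point load balancers and DNS at the replacement instance, which is presumably part of what the newly announced load balancing and scaling services are meant to absorb.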

Instead, let me focus in this post on how all of this enhances Amazon’s status as the “reference platform” for infrastructure as a service (IaaS). In another post, I want to express my concern that Amazon is in danger of becoming the “Walmart” of cloud computing.

First, why is it that Amazon is leading the way so aggressively in terms of feature sets and service offerings for cloud computing? Why does it seem that every other cloud provider is perpetually catching up to the services Amazon is offering at any given time?

The answer in all cases is that Amazon has become the default standard for IaaS feature definition–this despite having no user interface of their own (besides the command line and REST), and using “special” Linux images (the core Amazon Machine Images) that don’t provide root access, etc. The reason for their success in setting the standard here is simple: from the beginning, Amazon has focused on prioritizing feature delivery based on barriers to adoption of AWS, rather than on building the very best of any given feature.

Here’s how I see it:

  • In the beginning, there was storage and network access. Enter S3.
  • Then there were virtual servers to do computational tasks. Enter EC2, but with only one server size.
  • Then there were significant complaints that the server size wasn’t big enough to handle real-world tasks. Enter additional server types (e.g. “Large”) and associated pricing.
  • Then there was the need for “queryable” data storage. Enter SimpleDB.
  • Somewhere in the preceding time frame, the need for messaging services was identified as a barrier. Enter Amazon Simple Queue Service.
  • Now people were beginning to do serious tasks with EC2/S3/etc., so the issues of geographic placement of data and workloads became more of a concern. (This placement was both for geographic failover and to address regulatory concerns.) Enter Availability Zones.
  • Soon after that, delivering content and data between the zones became a serious concern (especially with all of the web start-ups leveraging EC2/S3/etc.). Enter the announced AWS Content Delivery Service.
  • Throw in there various partnership announcements, including support for MySQL and Oracle.

By this point, hundreds of companies had “production” applications or jobs running on Amazon infrastructure, and it became time to decide how serious this was. In my not-so-humble opinion, the floundering economy, its effects on the Amazon retail business, and the predictions that cloud computing could benefit from a weakened economy fed into the decision that it was time to remove the training wheels and leave “beta” status for good. Add an official SLA, remove the “beta” label, and “BAM!“, you suddenly have a new “production” business to offset the retail side of the house.

Given that everyone else was playing catch-up to these features as they came out (mostly because competitors didn’t realize what they needed to do next, as they didn’t have the customer base to draw from), it is not surprising that Amazon now looks like they are miles ahead of any competitor when it comes to number of customers and (for cloud computing services) probably revenue.

How do you keep the competitors playing catch-up? Add more features. How do you select which features to address next? Check with the customer base to see what their biggest concerns are. This time, the low-hanging fruit was the management interface, monitoring, and automation. Oh, and that little Windows platform-thingy.

Now, I find it curious that they’ve pre-announced the monitoring and management stuff today. Amazon isn’t really in the habit of announcing a feature before they go private-beta. However, I think there is some concern that they were becoming the “command-line lover’s cloud”, and had to show some interest in competing with the likes of VirtualCenter in the mind’s eye of system administrators. So, to undercut some perceived competitive advantages from folks like GoGrid and Slicehost, they tell their prospects and customers “just give us a second here and we will do you right”.

I think the AWS team has been brilliant, both in terms of marketing and in terms of technology planning and development. They remain the dominant team, in my opinion, though there are certainly plenty of viable alternatives out there that you should not be shy about using, both in conjunction with and in place of Amazon. Jeff Barr, Werner Vogels and others have proven that a business model that so many other IT organizations failed at miserably could be done extremely well. I just hope they don’t get too far ahead of themselves…as I’ll discuss separately.

Cracks in the Clouds, but the Sky Ain’t Fallin’

Update: I accidentally left off the reference links in the first paragraph. This is corrected now. My apologies to all that were inconvenienced.

The last couple of weeks have been filled with challenges to those preaching the gospel of cloud computing. First it was a paper delivered by three Microsoft researchers describing in detail the advantages of small, geo-diverse, distributed data center designs over “mega-datacenters”, a true blow to the strategy of many a cloud provider and–frankly–large enterprise. Second, the Wall Street Journal published a direct indictment of the term cloud computing, in which Ben Worthen carefully explains how the term ended up well beyond the boundaries of meaning. Added to the dog pile was Larry Ellison’s apparently delightful rant about the meaninglessness of the term, and an apparent quote in which he doubts the business model of providing capacity at scale for less than a customer could do it on their own.

Frankly, I think there’s some truth to the idea that cloud computing, and many of the notions people have about it, are beginning to lose their luster. We seem to have passed through a tollgate of late, from the honeymoon era of “cloud computing will save the world” to the evolutionary phase of “oh, crud, we now have to make this stuff work”. While the marketing continues unabated, there are some stories creeping out of the “cloud-o-sphere” of realizations about the economics and technical realities of dynamically offloading compute capacity. Solutions are being explored to “the little things” that support the big picture: monitoring, management (both of systems and of people) and provisioning. Gaps are being identified, and business models are being criticized. We are all coming to the conclusion that there is a heck of a lot of work left to be done here.

Doubt me? Take a look at the following examples of “oh, crud” moments of the past few months:

  • I can’t for the life of me find the link, but about three months ago I read a quote from one of the recent successful Amazon EC2-based start ups noting that as their traffic and user base grows, they believe the economics of using the cloud will change, and moving some core capacity to a private cloud might make more sense.

    Update: John M Willis pointed me to the reference; a quote from item 8 of his “10 Reasons for NOT Using a Cloud” post, which in turn references a Cloud Cafe podcast in which “Brad Jefferson the CEO of Animoto suggested at some point he might actually flip the cloud.” Read the post and listen to the podcast for more. Thanks, John.

  • Mediafed’s Alan Williamson presents a keynote at CloudCamp London in July in which he notes that “[w]e’ve come to realize we cannot rely on putting all our eggs in one basket”, and shows off their dual-provider architecture utilizing Amazon EC2 and Flexiscale.

  • A court case in the United States demonstrates the legal perils that still have to be navigated in terms of constitutional protections and legal rights for those that place data in the cloud. This case goes to show that today too much depends on each provider’s Terms of Service to provide a consistent basis for placing sensitive data in the cloud. Even mighty Amazon cannot be trusted to run a business infrastructure alone.

Some are even hinting that cloud computing is stupid, and that it will fail to be the disruptive technology it is touted as being.

That last statement is where I part ways with the critics. Cloud computing–all of it, public and private–will be disruptive to the way IT departments acquire and allocate compute functionality and capacity. To me, this statement is true whether or not it turns out that it would be better to build 500 small, manageable, container-based data centers than 5 megaliths. It will be true even if the term gets used to describe anti-virus software. There is great momentum pushing us towards huge gains in IT efficiency, and it makes little economic sense not to follow through on that. Like any complex system, there will be winners and losers, but the winners will strengthen the system overall.

Here’s where I see winning technologies developing in the cloud:

  • “Cloudbursting” – This is the most logical initial use of the cloud by most enterprises: grabbing spare capacity on a short-term basis when owned capacity is maxed out (a toy sketch of the trigger arithmetic follows this list). It virtually eliminates the pressure to predict peak load accurately, and gives enterprises a “buffer zone” should they need to scale up resources for an application.

  • Cloud OS – The data center is becoming a unit of computing in and of itself, and as such, it needs an OS. However, the ultimate vision for such an OS is to grow beyond the borders of a single data center, or even a single organization, and allow automated, dynamic resource location and allocation across the globe from an open market system. That’s the goal, anyway…

  • SaaS/PaaS – Most of my readers will know the SaaS debate inside and out: is it better to take advantage of the agile and economic nature of online applications, or is it both safer and, perhaps, cheaper in the long term to keep things in house? I think SaaS is winning converts every day and will likely win nearly everyone for some applications. PaaS gives you the same quick/cheap start up economics as SaaS, but for software development and deployment infrastructure. I’ll post more on PaaS soon.

  • Mashups/WOA – Much has been said of late about the successes of loosely coupled REST-style Internet integrations using published URL-based APIs over the traditional “contract heavy” SOAP/WS-* world. It makes sense. Most applications don’t need RMI contracts if all they are trying to do is retrieve data to recombine with other data into new forms. If it remains as easy as it has been for the last five years or so, mashups will be an expected component of most web apps, not an exceptional one.

  • “Quick start” data centers and data center modules – Between private clouds made of fail-in-place data centers in shipping containers, and powerful Infrastructure as a Service offerings from the likes of GoGrid, Flexiscale, Amazon and others, both startups and large enterprises have new ways to quickly acquire, scale up and optimize IT capacity. Acquiring that capacity in traditional ways is starting to look inefficient (even though I have seen no proof that this is so, as of yet).
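On the cloudbursting point above, here is a toy sketch in Python of the trigger arithmetic. The capacities and the 80% headroom threshold are assumptions I made up for illustration; a real implementation would hang this decision off your monitoring and provisioning systems rather than a hard-coded loop.

    # Toy cloudbursting trigger: serve the steady load on owned capacity and rent
    # cloud instances only for the overflow. All numbers are made-up assumptions.

    OWNED_CAPACITY = 500         # requests/sec the on-premises pool can serve
    PER_INSTANCE_CAPACITY = 100  # requests/sec one rented cloud instance can serve
    HEADROOM = 0.8               # start bursting at 80% of owned capacity

    def instances_to_burst(current_load):
        """How many cloud instances to rent for the load we choose not to absorb."""
        threshold = OWNED_CAPACITY * HEADROOM
        if current_load <= threshold:
            return 0
        overflow = current_load - threshold
        # Round up: a fractional instance still has to be rented whole.
        return -(-int(overflow) // PER_INSTANCE_CAPACITY)

    for load in (300, 450, 700, 1200):
        print(f"load {load:>4} req/s -> burst {instances_to_burst(load)} cloud instance(s)")

The interesting operational questions (data locality, licensing, and how quickly the burst capacity can actually be provisioned) sit outside this sketch.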

Even if it never makes sense for a single Fortune 500 company to shut down all of its data centers, there will be a permanent change to the way IT operations are run–a change focused on optimizing the use of hardware to meet increasing service demands. Accounting for IT will change forever, as OpEx becomes dominant over CapEx, and flexibility is the name of the game. Capacity planning is changed forever, as developers can grab capacity from the corporate pool, watch system utilization as demand grows, tune the application as needed, and add hardware only when justified by trend analysis. Start-up economics are changed forever, as building new applications that require large amounts of infrastructure no longer requires infrastructure investment.

CloudCamp SV demonstrated to me that the intellectual investment in cloud computing far surpasses mere marketing, and includes real technologies, architectures and business models that will keep us on our toes for the next few years.

Let the Cloud Computing OS wars begin!

September 15, 2008

Today is a big day in the cloud computing world. VMWorld is turning out to be a core cloud industry conference, where many of the biggest announcements of the year are taking place. Take, for instance, the announcement that VMWare has created the vCloud initiative, an interesting-looking program that aims to build a partner community around cloud computing with VMWare. (Thanks to On-Demand Enterprise, increasingly the leader in cloud news, for this link and most others in this post.) This is huge, in that it signals a commitment by VMWare to standardize cloud computing with VI3, and provide an ecosystem for anyone looking to build a public, private or hybrid cloud.

The biggest news, however, is the bevy of press releases signaling that three of the bigger names in virtualization are each delivering a “cloud OS” platform using their technology at the core. Here are the three announcements:

  • VMWare is announcing a comprehensive roadmap for a Virtual Datacenter Operating System (VDC-OS), consisting of technologies to allow enterprise data centers to virtualize and pool storage, network and servers to create a platform “where applications are automatically guaranteed the right quality of service at the lowest TCO by harnessing internal and external computing capacity.”

  • Citrix announces C3, “its strategy for cloud computing”, which appears to be a collection of products aimed at cloud providers and enterprises wishing to build their own clouds. Specific focus is on the virtualization platform, the deployment and management systems, orchestration, and–interestingly enough–wide area network (WAN) optimization. In the end, this looks very “Cloud OS”-like to me.

  • Virtual Iron and vmSight announce a partnership in which they plan to deliver “cloud infrastructure” to managed hosting providers and cloud providers. Included in this vision are Virtual Iron’s virtualization platform, virtualization management tools, and vmSight’s “end user experience assurance solution” technology to allow for “operating system independence, high-availability, resource optimization and power conservation, along with the ability to monitor and manage application performance and end user experience.” Again, sounds vaguely Cloud OS to me.

Three established vendors, three similar approaches to solving some real issues in the cloud, and three attacks on any entrenched interests in this space. All three focus on providing comprehensive management and infrastructure tools, including automated scaling and failover; and consistent execution to allow for image portability. The VMWare and Citrix announcements go further, however, in announcing technologies to support “cloudbursting” in which overflow processing needs in the data center are met by cloud providers on demand. VMWare specifically calls out OVF as the standard that enables this in their release; OVF is not mentioned by Citrix, but they have done significant work in this space as well.

Overall, VMWare has made the most comprehensive announcement, and has a lot of existing products to back up its feature list. However, much of the work needed to tightly integrate these products appears yet to be done. I base this on the fact that they highlight the need for a “comprehensive roadmap”–I could be wrong about this. They have also introduced a virtual distributed switch, which is a key component for migration between and within clouds. Citrix doesn’t mention such a thing, but of course the rumor is that Cisco will quite likely provide that. Whether such a switch will enable migration across networks, as VMWare’s does (er, will?), is yet to be seen, however (see VMWare’s VDC-OS press release). Citrix does, however, have a decent stable of existing applications to support its current vision.

By the way, Sun is working feverishly on their own Cloud OS. No sign of Microsoft, yet…

The long and the short of it is that we have entered into a new era, in which data centers will no longer simply be collections of servers, but will actually be computing units in and of themselves–often made up of similar computing units (e.g. containers) in a sort of fractal arrangement. Virtualization is key to make this happen (though server virtualization itself is not technically absolutely necessary). So are powerful management tools, policy and workflow automation, data and compute load portability, and utility-type monitoring and metering systems.

I worry now about my alma mater, Cassatt, which has chosen to go it largely alone until today. It’s a very mature, very applicable technology that would form the basis of a hell of a cloud OS management platform. Here’s hoping there are some big announcements waiting in the wings, as the war begins to rage around them.

Update: No sooner do I express this concern, than Ken posts an excellent analysis of the VMWare announcement with Cassatt in mind. I think he misses the boat on the importance of OVF, but he is right that Cassatt has been doing this a lot longer than VMWare has.