Archive for November, 2008

What is the value of IT convenience?

November 29, 2008

rPath’s Billy Marshall wrote a post closely related to a topic I have been thinking about a lot lately. Namely, Billy points out that the effect of server virtualization hasn’t been to satisfy the demand on IT resources, but simply to accelerate that demand by simplifying resource allocation. Billy gives a very clear example of what he means:

“Over the past 2 weeks, I have had a number of very interesting conversations with partners, prospects, customers, and analysts that lead me to believe that a virtual machine tsunami is building which might soon swamp the legacy, horizontal system management approaches. Here is what I have heard:

Two separate prospects told me that they have quickly consumed every available bit of capacity on their VMware server farms. As soon as they add more capacity, it disappears under the weight of an ever pressing demand of new VMs. They are scrambling to figure out how they manage the pending VM sprawl. They are also scrambling to understand how they are going to lower their VMware bill via an Amazon EC2 capability for some portion of the runtime instances.

Two prominent analysts proclaimed to me that the percentage of new servers running a hypervisor as the primary boot option will quickly approach 90% by 2012. With all of these systems sporting a hypervisor as the on-ramp for applications built as virtual machines, the number of virtual machines is going to explode. The hypervisor takes the friction out of the deployment process, which in turn escalates the number of VMs to be managed.”

The world of Infrastructure as a Service isn’t really any different:

“Amazon EC2 demand continues to skyrocket. It seems that business units are quickly sidestepping those IT departments that have not yet found a way to say “yes” to requests for new capacity due to capital spending constraints and high friction processes for getting applications into production (i.e. the legacy approach of provisioning servers with a general purpose OS and then attempting to install/configure the app to work on the production implementation which is no doubt different than the development environment). I heard a rumor that a new datacenter in Oregon was underway to support this burgeoning EC2 demand. I also saw our most recent EC2 bill, and I nearly hit the roof. Turns out when you provide frictionless capacity via the hypervisor, virtual machine deployment, and variable cost payment, demand explodes. Trust me.”

Billy isn’t the only person I’ve heard comment about their EC2 bill lately. Justin Mason commented on my post, “Do Your Cloud Applications Need to be Elastic?”:

“[W]e also have inelastic parts of the infrastructure that could be hosted elsewhere at a colo for less cost, and personally, I would probably have done this given the choice; but mgmt were happier just to use EC2 as widely as possible, despite the additional costs, since it keeps things simpler.”

In each case, management chooses to pay more for convenience.

I think these examples demonstrate an important decision point for IT organizations, especially during these times of financial strife. What is the value of IT convenience? When is it wise to choose to pay more dollars (or euros, or yen, or whatever) to gain some level of simplicity or focus or comfort? In the case of virtualization, is it always wise to leverage positive economic changes to expand service coverage? In the case of cloud computing, is it always wise to accept relatively high price points per CPU hour over managing your own cheaper compute loads?

I think there are no simple answers, but there are some elements I would consider if the choice were mine:

  • Do I already have the infrastructure and labor skills I need to do it just as well or better than the cloud? If I were to simply apply some automation to what I already have, would it deliver the elasticity/reliability/agility I want without committing a monthly portion of my corporate revenues to an outside entity?

  • Is virtualization and/or the cloud the only way to get the agility I need to meet my objectives? The answer here is often “yes” for virtualization, but is it “yes” as often for cloud computing?

  • Do I have the luxury of cash flow that allows me to spend a little more for someone else to worry about problems I would otherwise have to handle myself? Of course, this is the same question that applies to outsourcing, managed hosting, etc.

One of the reasons you’ve seen a backlash against some aspects of cloud computing, or even a rising voice to the “it’s the same thing we tried before” argument, is that much of the marketing hype out there is starting to ignore the fact that cloud computing costs money; enough money, in fact, to provide a profit to the vendor. Yes, it is true that many (most?) IT organizations have lacked the ability to deliver the same efficiencies as the best cloud players, but that can change, and change quickly, if those same organizations were to look to automation software and infrastructure to provide that efficiency.

My advice to you: if you already own data centers, and if you want convenience on a budget, balance the cost of Amazon/GoGrid/Mosso/whoever with the value delivered by Arjuna/3TERA/Cassatt/Enomaly/etc./etc./etc., including controlling your virtualization sprawl and preparing you for using the cloud in innovative ways. Consider making your storage and networking virtualization friendly.

Sometimes convenience starts at home.


Two! Two! Two! Two great Overcast podcasts for your enjoyment

November 29, 2008

It’s been a busy week or so for Geva Perry and me, as we took Overcast to a joint episode with John Willis’s CloudCafe podcast, and had a fabulous discussion with Greg Ness of Archimedius.net. Both podcasts are available from the Overcast blog.

The discussion with John focused on definitions in the cloud computing space, and some of the misconceptions that people have about the cloud, what it can and can’t do for you, and what all that crazy terminology refers to. John is an exceptionally comfortable host, and his questions drove a deep conversation about what the cloud is, various components of cloud computing, and adjunct terms like “cloudbursting”. It was a lot of fun to do, and I am grateful for John’s invitation to do this.

Greg Ness demonstrated his uniquely deep understanding of what network security entails in a virtualized data center, and how automation is the linchpin of protecting that infrastructure. Topics ranged from this year’s DNS exploit and the pace at which systems are getting patched to address it, to the reasons why the static network we all knew and loved is DOA in a cloud (or even just a virtualized) world. I really admire Greg, and find his ability to articulate difficult concepts with the help of historical insight very appealing. I very much appreciate his taking time out of his busy day to join us.

We are busy lining up more great guests for future podcasts, so stay tuned–or better yet, subscribe to Overcast at the Overcast blog.


Is IBM the ultimate authority on cloud computing?

November 24, 2008

There was an interesting announcement today from IBM regarding their new “Resilient Cloud” seal of approval–a marketing program targeted at cloud providers, and at customers of the cloud. The idea is simple, if I am reading this right:

  • IBM gets all of the world’s cloud vendors to pay them a services fee to submit their services to a series of tests that validate (or not) whether the cloud is resilient, secure and scalable. Should the vendor’s offering pass, they get to put a “Resilient Cloud” logo on their web pages, etc.

  • Customers looking for resilient, secure and scalable cloud infrastructure can then select from the pool of “Resilient Cloud” offerings to build their specific cloud-based solutions. Oh, and they can hire IBM services to help them decide when to go outside for their cloud infrastructure, and when to convert their existing infrastructure. I’m sure IBM will give a balanced analysis of the technology options here…

I’m sorry, but I’m a bit disappointed with this. IBM has been facing a very stiff “innovator’s dilemma” when it comes to cloud computing, as noted by GigaOm’s Stacey Higginbotham:

“IBM has been pretty quiet about its cloud efforts. In part because it didn’t want to hack off large customers buying a ton of IBM servers by competing with them. The computing giant hasn’t been pushing its own cloud business until a half-hearted announcement at the end of July, about a month and half after a company exec had told me IBM didn’t really want to advertise its cloud services.”

She goes on to note, however, that IBM has some great things in the works, including a research project in China that shows great promise. That’s welcome news, and I look forward to IBM being a major player on the cloud computing stage again. However, this announcement is just an attempt at making IBM the “godfather” of the cloud market, and that’s not interesting in the least.

Still, I bet if you want to be an IBM strategic partner, you’d better get on board with the program. Amazon, are you going to pay the fee? Microsoft? Google? Salesforce.com? Anyone?


Do Your Cloud Applications Need To Be Elastic?

November 22, 2008

I got to spend a few hours at Sys-Con’s Cloud Computing Expo yesterday, and I have to say it was most certainly an intellectually stimulating day. Just about every US cloud startup was represented in one way or another, and the day also included an unusual conference session and a meetup of CloudCamp fans.

While listening in on a session, I overheard one participant ask how the cloud would scale their application if they couldn’t replicate it. This triggered a strong response in me, as I really feel for those who confuse autonomic infrastructure with magic that can scale unscalable applications. Let me be clear: the cloud can’t scale your application (much, at least) if you didn’t design it to be scaled. Period.
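
For what it’s worth, here is a minimal sketch (mine, not from the session) of what “designed to be scaled” means in practice. It assumes the python-memcached client, and the cache host name is purely illustrative:

    # A replicable (stateless) request handler: all session state lives
    # in a shared store, so a load balancer can run any number of copies.
    import memcache

    # Illustrative host name, not a real endpoint.
    sessions = memcache.Client(['sessions.internal:11211'])

    def handle_request(session_id, request_data):
        # Any replica can serve any request, because nothing is kept in
        # this process's memory between requests.
        session = sessions.get(session_id) or {}
        session['last_request'] = request_data
        sessions.set(session_id, session)
        return 'OK'

A version that kept the session dictionary in a module-level variable would work fine on one server, and break the moment you cloned it behind a load balancer.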

However, that caused me to ask myself whether or not an application had to be horizontally scalable in order to gain economically while running in an Infrastructure as a Service (IaaS) cloud. The answer, I think, is that it depends.

Chris Fleck of Citrix wrote up a pretty decent two-part explanation of this on his blog a few weeks ago. He starts out with some basic costs of acquiring and running 5 quad-core servers–either on-premises (amortized over 3 years at 5%) or in a colocation data center–against the cost of running equivalent “high CPU” servers 24X7 on Amazon’s EC2. The short short of his initial post is that it is much more expensive to run full time on EC2 than it is to run on premises or in the colo facility.

How much more expensive?

  • On-premises: $7800/year
  • Colocation: $13,800/year
  • Amazon EC2: $35,040/year

I tend to believe this reflects the truth, even if it’s not 100% accurate. First, while you may think “ah, Amazon…that’s 10¢ a CPU hour”, in point of fact most production applications that you read about in the cloud-o-sphere are using the larger instances. Chris is right to use high-CPU instances in his comparison at 80¢/CPU hour. Second, while it’s tempting to think in terms of upfront costs, your accounting department will in fact spread the capital costs out over several years, usually 3 years for a server.
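
The 24X7 arithmetic is easy to check. Here is a quick sketch of it (mine, with the rates taken from Chris’s example):

    # Full-time EC2 vs. fixed infrastructure, using the figures above:
    # $0.80/hour was the 2008 price of a "high CPU" EC2 instance; the
    # on-premises and colo numbers are Chris's amortized annual estimates.
    HOURS_PER_YEAR = 24 * 365                    # 8,760

    on_premises   = 7_800                        # 5 servers, amortized
    colocation    = 13_800
    ec2_full_time = 5 * 0.80 * HOURS_PER_YEAR    # 5 instances, 24X7

    print(f"On-premises: ${on_premises:,.0f}/year")
    print(f"Colocation:  ${colocation:,.0f}/year")
    print(f"EC2 24X7:    ${ec2_full_time:,.0f}/year")   # $35,040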

In the second part of his analysis, however, Chris notes that the cost of the same Amazon instances varies with the amount of time they are actually used, as opposed to the physical infrastructure that must be paid for whether it is used or not (with the possible exception of power and AC costs). This comes into play in a big way if the same instances are used judiciously for varying workloads, such as the hybrid fixed/cloud approach he uses as an example.

In other words, if you have an elastic load, and you plan for “standard” variances on-premises while allowing “excessive” spikes in load to trigger instances on EC2, you suddenly have a very compelling case relative to buying enough physical infrastructure to handle excessive peaks yourself. As Chris notes:

“To put some simple numbers to it based on the original example, let’s assume that the constant workload is roughly equal to 5 Quadcore server capacity. The variable workload on the other hand peaks at 160% of the base requirement, however it is required only about 400 hours per year, which could translate to 12 hours a day for the month of December or 33 hours per month for peak loads such as test or batch loads. The cost for a premise only solution for this situation comes to roughly 2X or $ 15,600 per year assuming existing space and a 20% factor of safety above peak load. If on the other hand you were able to utilize a Cloud for only the peak loads the incremental cost would be only $1,000. ( Based on Amazon EC2 )

Premise Only
$ 15,600 Annual cost ( 2 x 7,800 from Part 1 )
Premise Plus Cloud
$ 7,800 Annual cost from Part 1
$ 1,000 Cloud EC2 – ( 400 x .8 x 3 )
$ 8,800 Annual Cost Premise Plus Cloud “
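
In code form, the quoted scenario works out like this (my sketch; the three burst instances come from the 160% peak over a 5-server base):

    # Hybrid "premise plus cloud" math from the quote above.
    BASE_ANNUAL  = 7_800       # 5-server base load on-premises (Part 1)
    PEAK_SERVERS = 3           # the 160% peak over a 5-server base
    PEAK_HOURS   = 400         # hours per year the peak actually runs
    EC2_RATE     = 0.80        # dollars per instance-hour

    premise_only = 2 * BASE_ANNUAL                        # $15,600
    burst        = PEAK_SERVERS * PEAK_HOURS * EC2_RATE   # $960, ~$1,000
    hybrid       = BASE_ANNUAL + burst                    # ~$8,800

    print(f"Premise only:       ${premise_only:,.0f}")
    print(f"Premise plus cloud: ${hybrid:,.0f}")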

The lesson of our story? Using the cloud makes the most sense when you have an elastic load. I would postulate that another good candidate is any load that does not need to run at full strength 100% of the time. Some examples might include:

  • Dev/test lab server instances
  • Scale-out applications, especially web application architectures
  • Seasonal load applications, such as personal income tax processing systems or retail accounting systems

On the other hand, you probably would not use Infrastructure as a Service today for:

  • That little accounting application that has to run at all times, but has at most 20 concurrent users
  • The MS Exchange server for your 10 person company. (Microsoft’s multi-tenant Exchange online offering is different–I’m talking hosting your own instance in EC2)
  • Your network monitoring infrastructure

Now, the managed hosting guys are probably going to jump down my throat with counter-arguments about the level of service provided by (at least their) hosting clouds, but my experience is that all of these clouds actually treat self-service as self-service, and that there really is very little difference between do-it-yourself on-premises and do-it-yourself in the cloud.

What would change these economics to the point that it would make sense to run any or all of your applications in an IaaS cloud? Well, I personally think you need to see a real commodity market for compute and storage capacity before you see the pricing that reflects economies in favor of running fixed loads in the cloud. There have been a wide variety of posts about what it would take [pdf] to establish a cloud market in the past, so I won’t go back over that subject here. However, if you are considering “moving my data center to the cloud”, please keep these simple economics in mind.

Reuven Cohen Invents The “Unsession”

November 21, 2008

Gotta luv the Ruv. One of the highlights of this week’s Sys-Con Cloud Computing Expo was Reuven’s session on World-Wide Cloud Computing, “presented” to a packed room filled with some of the most knowledgeable cloud computing fans you’ll ever see–from vendors, SIs, customers, you name it.

Reuven got up front, showed a total of two slides (to introduce himself, because if you’re Ruv, it takes two slides to properly introduce yourself 🙂 ), then kicked off a totally “unconference”-like hour-long session. The best way I can think of to describe it is that he a) went straight to the question and answer period, and b) asked questions of the audience, not the other way around. Now, he may just have been lazy, but I think he took advantage of the right-sized room with the right subject matter interest and expertise at the right time to shake things up.

The result was an absolutely fascinating and wide ranging discussion about what it would take to deliver a “world wide cloud”, a dream that many of us have had for a while, but that has been a particular focus of Reuven’s. I can’t recount all aspects of the discussion here, quite obviously, but I thought I would share the list of subjects covered that I noted during the talk:

  • federation
  • firewall configuration
  • data encryption
  • Wide Area Network optimization
  • latency
  • trust
  • transparency
  • the community’s role in driving cloud specifications
  • interoperability
  • data portability
  • data ownership
  • metadata/policy/configuration/schema ownership
  • cloud brokerages
  • compliance
  • Payment Card Industry
  • Physical to Virtual and Physical to Cloud
  • reliability
  • SLA metadata
  • data integrity
  • identity
  • revocable certificates (see Nirvanix)
  • content delivery networks (and Amazon’s announcement)
  • storage

Now, I’m not sure that we solved anything in the discussion, but everyone walked away learning something new that afternoon.

Got a session to present to a room of 100 or less? Not sure how to capture attention in a set of slides? The heck with it, pull a “Reuven” and turn the tables. If you have an audience eager to give as well as take, you could end up enlightening yourself as much as you enlighten everyone else.

Thanks, Ruv, and keep stirring things up.


Amazon launches CloudFront Content Delivery Service

November 18, 2008

Quick note before I go to bed. Amazon just announced their previously discussed content delivery network service, CloudFront, tonight. Jeff Barr lays it out for you on the AWS blog, and Werner Vogels adds his vision for the service. To their credit, they are pushing the idea that the service, as designed, can do much more than traditional content delivery services, potentially acting as a caching and routing mechanism for applications distributed across EC2 “availability zones”.

I think Thorsten von Eicken of RightScale gives the most honest assessment of the service tonight. He praises its simplicity of use, noting that his product supports all CloudFront functionality today. He also calls CloudFront a “‘minimum viable product’ offering” at this time, pointing out several restrictions and some features that leave a lot to be desired. That being said, both Amazon and RightScale are clear that this is a necessary service for Amazon to offer, and that it is indeed useful today.

More when I’ve had a chance to evaluate it, but congrats again to the Amazon team for staying a few steps ahead.

Update: Stacey Higginbotham adds some excellent insight from the GigaOm crew on CloudFront’s effect on the overall CDN market. The short short is that Amazon’s “pay-as-you-go” pricing severely undercuts the major CDN vendors for small and medium businesses.


Why the Choice of Cloud Computing Type May Depend On Who’s Buying

November 15, 2008

Thanks to Ron K. Jeffries’ Cloudy Thinking blog, I was directed to Redmonk’s Stephen O’Grady (to whom I now subscribe directly) and his excellent post titled Cloud Types: Fabric vs Instance. Stephen makes an excellent observation about the nature of Infrastructure as a Service (increasingly called “Utility Computing” by Tim O’Reilly followers) and Platform as a Service (that one remains consistent). His observation is this:

“…Tim seems to feel that they are aspects of the types, while I’m of the opinion that they instead define the type. For example, by Tim’s definition, one characteristic of Utility Computing style clouds is virtual machine instances, where my definitions rather centers on that.

Here’s how I typically break down cloud computing styles:

Fabric

Description: A problematic term, perhaps, because a few of the vendors employ it towards different ends, but I use it because it’s descriptive. Rather than deploy to virtualized instances, developers building on this style cloud platform write instead to a fabric. The fabric’s role is to abstract the underlying physical and logical architecture from the developer, and – typically – to assume the burden of scaling.
Example: Google App Engine

Instance

Description: Instance style clouds are characterized by, well, instances. Unlike the fabric cloud, there is little to no abstraction present within instance based clouds: they generally recreate – virtually – a typical physical infrastructure composed of instances that include memory, processing cycles, and so on. The lack of abstraction can offer developers more control, but this control is typically offered at the cost of transparent scaling.
Example: Amazon EC2″

I love that distinction. First, for those struggling to see how Amazon/GoGrid/Flexiscale/etc. relate to Google/Microsoft/Salesforce.com/Intuit/etc., it delineates a very clear difference. If you are reserving servers on which to run applications, it is IaaS. If you are running your application without caring which or how many resources are consumed, then it is PaaS. Easy.
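
To put the distinction in code terms, here is a rough sketch (mine, not Stephen’s). The instance style assumes the boto library’s EC2 API; the AMI ID and credentials are placeholders:

    # Instance style (IaaS): you explicitly reserve servers to run on.
    from boto.ec2.connection import EC2Connection

    conn = EC2Connection('MY_ACCESS_KEY', 'MY_SECRET_KEY')  # placeholders
    conn.run_instances('ami-00000000',                      # placeholder AMI
                       min_count=1, max_count=1,
                       instance_type='c1.medium')

The fabric style, shown here with Google App Engine’s classic webapp framework, never mentions servers at all; the platform decides which and how many resources run the handler:

    # Fabric style (PaaS): write a handler, let the fabric scale it.
    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class MainPage(webapp.RequestHandler):
        def get(self):
            self.response.out.write('Hello from the fabric')

    run_wsgi_app(webapp.WSGIApplication([('/', MainPage)]))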

However, I am even more excited by a thought that occurred to me as I read the post. One of the things that this particular distinction points out is the likelihood that the buyers of each type would be different classes of enterprise IT professionals.

It’s not black and white, but I would be willing to bet heavily that:

  • The preponderance of interest in IaaS is from those whose primary concern is system administration; those with complex application profiles, who want to tweak scalability themselves, and who want the freedom to determine how data and code get stored, accessed and acted upon.

  • The preponderance of interest in PaaS is from those whose primary concern is application development; those with a functional orientation, who want to be more concerned with creating application experiences than with worrying about how to architect for deployment in a web environment (or whatever the framework provides).

In other words, server jockeys choose instances, while code jockeys choose fabric.

Now, the question quickly becomes, if developers can get the functionality and scalability/reliability/availability required from PaaS, without hiring the system administrators, why would any enterprise choose IaaS unless they were innovating at the architecture level? On the other hand, if all you want to do is add capacity to existing functionality, or you require an unusual or even innovative architecture, or you need to guarantee that certain security and continuity precautions are in place, why would you ever choose PaaS?

This, in turn, boils right back down to the PaaS spectrum I spoke of recently. Choose your cloud type based on your true need, but also take into account the skill set you will require. Don’t focus on a single brand just because it’s cool to your peers. Pick IaaS if you want to tweak infrastructure, otherwise by all means find the PaaS platform that best suits you. You’ll probably save in the long run.

Now, I’ve clearly suppressed the fact that developers probably still want some portability…though I must note that choosing a programming language alone limits function portability. (Perhaps that’s OK if the productivity gains outweigh the likelihood of having to port.) Also, the things that system administrators are doing in the enterprise are extremely important, like managing security, data integrity and continuity. There are no guarantees that any of the existing PaaS platforms can help you with any of that.

Something to think about, anyway. What do you think? Will developers lean towards PaaS, while system administrators lean towards IaaS? Who will win the right to choose within the enterprise?