
Archive for the ‘cloud computing’ Category

Exploring cloud and complex systems

February 13, 2012

This post—this blog—has been a long time coming.

While many may know me from my long journey exploring cloud computing (the first few years of which are archived to this blog, followed by three years at CNET writing The Wisdom of Clouds, and now my continuing work on GigaOm/cloud), for some time now I’ve been keenly interested in a more specific topic under the cloud computing umbrella, namely how cloud computing is driving application architectures to adopt the traits of complex adaptive systems.

Complex adaptive systems (CAS) are fascinating beasts. Described by complex systems pioneer John Holland as “systems that have a large number of components, often called agents, that interact and adapt or learn”, CAS are the reason major systems in nature work—from biology to ecology to economics and society. CAS allow for constant change, with an emphasis on changes that make the system stronger, although there is a constant risk that the system will suffer negative events as well.

What fascinates me about CAS is that they resist being broken down into component parts with clear cause-and-effect relationships between the agents involved and the emergent behavior of the system as a whole. In fact, the sheer complexity of these systems means that predicting the outcome of any given action within the system is at best difficult, and quite likely impossible.

An excellent example of this, one I’ve used before, is a pile of sand on a table. Imagine dropping more sand, one grain at a time, onto that pile. Quick: how many grains of sand will fall off the table with each grain added to the “system”? It is impossible to predict.

Now, granted, a table of sand isn’t exactly “adaptive”, but it is indicative of what complexity does to predictability.
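If you want to see that unpredictability for yourself, the classic formalization of the sand pile is the Bak-Tang-Wiesenfeld sandpile model. Here is a minimal Python sketch of that model (my own illustration, with a made-up grid size and drop count, not taken from any particular research implementation): drop grains one at a time, topple any cell holding four or more grains, and count how many grains roll off the edge of the “table” with each drop.

```python
SIZE = 11  # an 11 x 11 "table"

def drop_grain(grid, row, col):
    """Drop one grain at (row, col), topple until stable, and return
    how many grains fell off the edge of the table."""
    grid[row][col] += 1
    fallen = 0
    unstable = [(row, col)]
    while unstable:
        r, c = unstable.pop()
        while grid[r][c] >= 4:          # cell topples: one grain to each neighbor
            grid[r][c] -= 4
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < SIZE and 0 <= nc < SIZE:
                    grid[nr][nc] += 1
                    unstable.append((nr, nc))
                else:
                    fallen += 1         # this grain rolled off the table
    return fallen

grid = [[0] * SIZE for _ in range(SIZE)]
for i in range(3000):                   # keep adding sand to the middle of the pile
    spilled = drop_grain(grid, SIZE // 2, SIZE // 2)
    if spilled:
        print(f"grain {i}: {spilled} grain(s) fell off the table")
```

Run it and the spill counts bounce between zero and surprisingly large avalanches with no obvious pattern, which is exactly the point.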

CAS and IT

When applied to IT—everything from architectures to markets to organizations—the science of complex adaptive systems has deep ramifications for the ways we plan, design, build, troubleshoot and adapt our most important applications of technology to business. We can’t do everything top-down anymore. Much, much more of our work has to be built and maintained from the bottom up.

Now, I am a novice at all this, so much of what I believe I know today will likely turn out to be wrong. As a quick example, the “Cloud as Complex Systems Architecture” presentation I am giving at Cloud Connect in Santa Clara, CA on Tuesday was supposed to center on how you automate the operations of individual software components to survive in the cloud. “Focus on tweaking agent automation”, my message was going to be.

However, that is counter to the reality of what has to happen; it is just as important to evaluate the system as a whole, and to adjust *whatever* needs to be adjusted when issues are identified system-wide. In other words, an agent-level focus is exactly the kind of thing that gets you in trouble in complex systems. You see? I am still learning.

I hope you will join me as I take this journey. Follow me on Twitter at @jamesurquhart. Subscribe to the RSS feed for this blog. Leave comments. Challenge me. Point me to new sources of information. Tell me I’m full of it and should start over. In return, I promise to listen, and to share my own journey, including insights I gain from books, online courses and the other very smart people I am very lucky to interact with.

This is going to be fun. I’m pumped to get started.

The enterprise "barrier-to-exit" to cloud computing

December 2, 2008

An interesting discussion ensued on Twitter this weekend between George Reese of Valtira and me. George–who recently posted some thought-provoking pieces on O’Reilly Broadcast about cloud security, and is writing a book on cloud computing–argued strongly that the benefits gained from moving to the cloud outweigh any additional costs that may ensue. In fact, in one tweet he noted:

IT is a barrier to getting things done for most businesses; the Cloud reduces or eliminates that barrier.

I reacted strongly to that statement; I don’t buy that IT is that bad in all cases (though some certainly is), nor do I buy that simply eliminating a barrier to getting something done makes it worthwhile. Besides, the barrier being removed isn’t strictly financial; it is corporate IT policy. I can build a kick-butt home entertainment system for my house for $50,000; that doesn’t mean it’s the right thing to do.

However, as the conversation unfolded, it became clear that George and I were coming at the problem from two different angles. George was talking about many SMB organizations, which really can’t justify the cost of building their own IT infrastructure, but have been faced with a choice of doing just that, turning to (expensive and often rigid) managed hosting, or putting a server in a colo space somewhere (and maintaining that server). Not very happy choices.

Enter the cloud. Now these same businesses can simply grab capacity on demand, start and stop billing at their leisure, and get truly world-class power, virtualization and networking infrastructure without having to put an ounce of thought into it. Yeah, it costs more than simply running a server would cost, but when you add in the infrastructure, managed hosting fees, or colo leases, cloud almost always looks like the better deal. At least that’s what George claims his numbers show, and I’m willing to accept that. It makes sense to me.

I, on the other hand, was thinking of medium to large enterprises that already own significant data center infrastructure, and already have sunk costs in power, cooling and assorted infrastructure. For this class of business, those sunk costs must be added to server acquisition and operation costs when rationalizing against the cost of gaining the same services from the cloud. In this case, these investments often tip the balance, and it becomes much cheaper to use existing infrastructure (though with some automation) to deliver fixed-capacity loads. As I discussed recently, the cloud generally only gets interesting for loads that are not running 24X7.

(George actually notes a class of applications that sadly are also good candidates, though they shouldn’t necessarily be: applications that IT just can’t or won’t get to on behalf of a business unit. George claims his business makes good money meeting the needs of marketing organizations that have this problem. Just make sure the ROI is really worth it before taking this option, however.)

This existing investment in infrastructure therefore acts almost as a “barrier-to-exit” for these enterprises when considering a move to the cloud. It seems to me highly ironic, and perhaps somewhat unique, that certain trails in the cloud computing market will be blazed not by organizations with multiple data centers and thousands upon thousands of servers, but by the little mom-and-pop shop that used to own a couple of servers in a colo somewhere, finally shut them down and turned to Amazon. How cool is that?

The good news, as I hinted at earlier, is that there is technology that can be rationalized financially–through capital equipment and energy savings–which in turn can “grease the skids” for cloud adoption in the future. Ask the guys at 3tera. They’ll tell you that their cloud infrastructure allows an enterprise to optimize infrastructure usage while enabling workload portability (though not portability of running workloads) between cloud providers running their stuff. VMware introduced their vCloud initiative specifically to make enterprises aware of the work they are doing to allow workload portability across data centers running their stuff. Cisco (my employer) is addressing the problem. In fact, there are several great products out there that can give you cloud technology in your enterprise data center, opening the door to cloud adoption now (with things like cloudbursting) and in the future.

If you aren’t considering how to “cloud enable” your entire infrastructure today, you ought to be getting nervous. Your competitors are probably looking closely at these technologies, and when the time is right, their barrier-to-exit will be lower than yours. Then the true costs of moving an existing data center infrastructure to the cloud will become painfully obvious.

Many thanks to George for the excellent discussion. Twitter is becoming a great venue for cloud discussions.

Do Your Cloud Applications Need To Be Elastic?

November 22, 2008

I got to spend a few hours at Sys-Con’s Cloud Computing Expo yesterday, and I have to say it was most certainly an intellectually stimulating day. Not only was just about every US cloud startup represented in one way or another, but the day also included an unusual conference session and a meetup of CloudCamp fans.

While listening in on a session, I overheard one participant ask how the cloud would scale their application if they couldn’t replicate it. This triggered a strong response in me, as I really feel for those who confuse autonomic infrastructures with magic applied to scaling unscalable applications. Let me be clear: the cloud can’t scale your application (much, at least) if you didn’t design it to be scaled. Period.

However, that caused me to ask myself whether or not an application had to be horizontally scalable in order to gain economically while running in an Infrastructure as a Service (IaaS) cloud. The answer, I think, is that it depends.

Chris Fleck of Citrix wrote up a pretty decent two-part explanation of this on his blog a few weeks ago. He starts out with some basic costs of acquiring and running 5 quad-core servers–either on-premises (amortized over 3 years at 5%) or in a colocation data center–against the cost of running equivalent “high CPU” servers 24X7 on Amazon’s EC2. The short short of his initial post is that it is much more expensive to run full time on EC2 than it is to run on-premises or in the colo facility.

How much more expensive?

  • On-premises: $7800/year
  • Colocation: $13,800/year
  • Amazon EC2: $35,040/year

I tend to believe this reflects the truth, even if it’s not 100% accurate. First, while you may think “ah, Amazon…that’s 10¢ a CPU hour”, in point of fact most production applications that you read about in the cloud-o-sphere are using the larger instances. Chris is right to use high CPU instances in his comparison at 80¢/CPU hour. Second, while it’s tempting to think in terms of upfront costs, your accounting department will in fact spread the capital costs out over several years, usually 3 years for a server.
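For what it’s worth, the arithmetic behind that full-time EC2 figure is easy to reconstruct. Here is my own back-of-the-envelope reproduction of the numbers above, assuming five instances at the 80¢/hour high-CPU rate:

```python
# Back-of-the-envelope reconstruction of the figures above (my assumptions:
# 5 "high CPU" instances at $0.80 per instance-hour, running 24x7 all year).
instances = 5
rate_per_hour = 0.80            # USD per instance-hour
hours_per_year = 24 * 365       # 8,760 hours

ec2_annual = instances * rate_per_hour * hours_per_year
print(f"EC2 full time: ${ec2_annual:,.0f}/year")    # -> $35,040/year

# Compare with the amortized on-premises figure of $7,800/year for the same
# five servers, roughly $1,560 per server per year.
```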

In the second part of his analysis, however, Chris notes that the cost of the same Amazon instances varies based on the amount of time they are actually used, as opposed to the physical infrastructure that must be paid for whether it is used or not (with the possible exception of power and AC costs). This comes into play in a big way if cloud instances are used judiciously for varying workloads, such as the hybrid fixed/cloud approach he uses as an example.

In other words, if you have an elastic load and you plan for “standard” variances on-premises while allowing “excessive” spikes in load to trigger instances on EC2, you suddenly have a very compelling case relative to buying enough physical infrastructure to handle those peaks yourself. As Chris notes:

“To put some simple numbers to it based on the original example, let’s assume that the constant workload is roughly equal to 5 Quadcore server capacity. The variable workload on the other hand peaks at 160% of the base requirement, however it is required only about 400 hours per year, which could translate to 12 hours a day for the month of December or 33 hours per month for peak loads such as test or batch loads. The cost for a premise only solution for this situation comes to roughly 2X or $ 15,600 per year assuming existing space and a 20% factor of safety above peak load. If on the other hand you were able to utilize a Cloud for only the peak loads the incremental cost would be only $1,000. ( Based on Amazon EC2 )

Premise Only
$ 15,600 Annual cost ( 2 x 7,800 from Part 1 )
Premise Plus Cloud
$ 7,800 Annual cost from Part 1
$ 1,000 Cloud EC2 – ( 400 x .8 x 3 )
$ 8,800 Annual Cost Premise Plus Cloud”
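To make the arithmetic in that quote explicit, here is the same calculation spelled out (my reconstruction; I’m reading the “3” in Chris’s formula as three burst instances covering the roughly 60% peak above base capacity):

```python
# Reconstruction of the hybrid "premise plus cloud" math quoted above.
# Assumptions (mine): 3 burst instances cover the ~60% peak above base,
# at $0.80 per instance-hour, needed about 400 hours per year.
base_onprem_annual = 7_800                      # 5 quad-core servers, from Part 1
peak_hours_per_year = 400
burst_instances = 3
rate_per_hour = 0.80

premise_only = 2 * base_onprem_annual                                 # $15,600
burst_cost = peak_hours_per_year * rate_per_hour * burst_instances    # $960, which Chris rounds to ~$1,000
premise_plus_cloud = base_onprem_annual + burst_cost                  # ~$8,800 with Chris's rounding

print(f"Premise only:       ${premise_only:,.0f}/year")
print(f"Premise plus cloud: ${premise_plus_cloud:,.0f}/year")
```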

The lesson of our story? Using the cloud makes the most sense when you have an elastic load. I would postulate that another option would be a load that is not powered on at full strength 100% of the time. Some examples might include:

  • Dev/test lab server instances
  • Scale-out applications, especially web application architectures
  • Seasonal load applications, such as personal income tax processing systems or retail accounting systems

On the other hand, you probably would not use Infrastructure as a Service today for:

  • That little accounting application that has to run at all times, but has at most 20 concurrent users
  • The MS Exchange server for your 10 person company. (Microsoft’s multi-tenant Exchange online offering is different–I’m talking hosting your own instance in EC2)
  • Your network monitoring infrastructure

Now, the managed hosting guys will probably jump down my throat with counterarguments about the level of service provided by (at least their) hosting clouds, but my experience is that all of these clouds actually treat self-service as self-service, and that there really is very little difference between do-it-yourself on-premises and do-it-yourself in the cloud.

What would change these economics to the point that it would make sense to run any or all of your applications in an IaaS cloud? Well, I personally think you need to see a real commodity market for compute and storage capacity before you see the pricing that reflects economies in favor of running fixed loads in the cloud. There have been a wide variety of posts about what it would take [pdf] to establish a cloud market in the past, so I won’t go back over that subject here. However, if you are considering “moving my data center to the cloud”, please keep these simple economics in mind.

Why the Choice of Cloud Computing Type May Depend On Who’s Buying

November 15, 2008

Thanks to Ron K. Jeffries’ Cloudy Thinking blog, I was directed to RedMonk’s Stephen O’Grady (who I now subscribe to directly) and his excellent post titled Cloud Types: Fabric vs Instance. Stephen makes an excellent observation about the nature of Infrastructure as a Service (increasingly called “Utility Computing” by Tim O’Reilly followers) and Platform as a Service (that one remains consistent). His observation is this:

“…Tim seems to feel that they are aspects of the types, while I’m of the opinion that they instead define the type. For example, by Tim’s definition, one characteristic of Utility Computing style clouds is virtual machine instances, where my definitions rather centers on that.

Here’s how I typically break down cloud computing styles:

Fabric

Description: A problematic term, perhaps, because a few of the vendors employ it towards different ends, but I use it because it’s descriptive. Rather than deploy to virtualized instances, developers building on this style cloud platform write instead to a fabric. The fabric’s role is to abstract the underlying physical and logical architecture from the developer, and – typically – to assume the burden of scaling.
Example: Google App Engine

Instance

Description: Instance style clouds are characterized by, well, instances. Unlike the fabric cloud, there is little to no abstraction present within instance based clouds: they generally recreate – virtually – a typical physical infrastructure composed of instances that include memory, processing cycles, and so on. The lack of abstraction can offer developers more control, but this control is typically offered at the cost of transparent scaling.
Example: Amazon EC2”

I love that distinction. First, for those struggling to see how Amazon/GoGrid/Flexiscale/etc. relate to Google/Microsoft/Salesforce.com/Intuit/etc., it delineates a very clear difference. If you are reserving servers on which to run applications, it is IaaS. If you are running your application free of care about which and how many resources are consumed, then it is PaaS. Easy.

However, I am even more excited by a thought that occurred to me as I read the post. One of the things that this particular distinction points out is the likelihood that the buyers of each type would be different classes of enterprise IT professionals.

It’s not black and white, but I would be willing to bet heavily that:

  • The preponderance of interest in IaaS is from those whose primary concern is system administration; those with complex application profiles, who want to tweak scalability themselves, and who want the freedom to determine how data and code get stored, accessed and acted upon.

  • The preponderance of interest in PaaS is from those whose primary concern is application development; those with a functional orientation, who want to be more concerned with creating application experiences than with how to architect for deployment in a web environment (or whatever the framework provides).

In other words, server jockeys choose instances, while code jockeys choose fabric.

Now, the question quickly becomes, if developers can get the functionality and scalability/reliability/availability required from PaaS, without hiring the system administrators, why would any enterprise choose IaaS unless they were innovating at the architecture level? On the other hand, if all you want to do is add capacity to existing functionality, or you require an unusual or even innovative architecture, or you need to guarantee that certain security and continuity precautions are in place, why would you ever choose PaaS?

This, in turn, boils right back down to the PaaS spectrum I spoke of recently. Choose your cloud type based on your true need, but also take into account the skill set you will require. Don’t focus on a single brand just because it’s cool to your peers. Pick IaaS if you want to tweak infrastructure, otherwise by all means find the PaaS platform that best suits you. You’ll probably save in the long run.

Now, I’ve clearly glossed over the fact that developers probably still want some portability…though I must note that choosing a programming language alone limits function portability. (Perhaps that’s OK if the productivity gains outweigh the likelihood of having to port.) Also, the things that system administrators are doing in the enterprise are extremely important, like managing security, data integrity and continuity. There are no guarantees that any of the existing PaaS platforms can help you with any of that.

Something to think about, anyway. What do you think? Will developers lean towards PaaS, while system administrators lean towards IaaS? Who will win the right to choose within the enterprise?

Two Announcements to Pay Attention To This Week

November 12, 2008

I know I promised a post on how the network fits into cloud computing, but after a wonderful first part of my week spent first catching up on reading, then one-on-one with my 4-year-old son, I’m finally digging into what’s happened in the last two days in the cloud-o-sphere. While the network post remains important to me, there were several announcements that caught my eye, and I thought I’d run through two of them quickly and give you a sense of why they matter.

The first announcement came from Replicate Technologies, Rich Miller’s young company, which is focusing initially on virtualization configuration analysis. The Replicate Datacenter Analyzer (RDA) is a powerful analysis and management tool for evaluating the configuration and deployment of virtual machines in an enterprise data center environment. But it goes beyond evaluating the VMs themselves, to evaluating the server, network and storage configuration required to support things like vMotion.

Sound boring, and perhaps not cloud related? Well, if you read Rich’s blog in depth, you may find that he has a very interesting longer-term objective. Building on the success of RDA, Replicate aims to become a core element of a virtualized data center operations platform, eventually including hybrid cloud configurations and the like. While the tool is initially focused on individual servers, one excellent piece of Rich’s vision is to manage the relationships between VMs, so that operations taken on one server take into account its dependencies on other servers. Very cool.

For the fastest impression of what Replicate has to offer, watch the introductory video here. If you manage virtual machines, sit up and take notice.

The other announcement that caught my eye was the new positioning and features introduced this week by my alma mater, Cassatt Corporation. I’ve often argued that Cassatt is an excellent example of a private cloud infrastructure, and now they are actively promoting themselves as such (although they use the term “internal cloud”).

It’s about freaking time. With a mature, “burned in”, relatively technology-agnostic platform that has perhaps the easiest policy management user experience ever (though not necessarily the prettiest), Cassatt has always been one of my favorite infrastructure plays (though I admit some bias). They support an incredible array of hardware, virtualization and OS platforms, and provide the rare ability to manage not only virtual machines, but also bare-metal systems. You get automated power management, resource optimization, image management, and dynamic provisioning. For the latter, not only is server provisioning automated, but so is network provisioning–such that deploying an image on a server triggers Cassatt to reprogram the ports the target server is connected to so that they sit on the correct VLAN for the software about to be booted.

The announcement talks a lot about Cassatt’s monitoring capabilities, and a service they provide around application profiling. I haven’t been briefed on these, but given their experience with server power management (a very “profiling focused” activity), I believe they could probably have some unique value-add there. What I remember from six months ago is that they introduced improved dynamic load allocation capabilities that could use just about any digital metric (technical or business-oriented) to set upper and lower performance thresholds for scaling. Thus, you could use CPU utilization, transaction rates, user sessions or even market activity to determine the need for more or fewer servers for an application. Not too many others break away from the easy CPU/memory utilization stats to drive scale.
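To illustrate the kind of policy I mean, here is a purely hypothetical sketch of threshold-driven scaling on an arbitrary metric. This is not Cassatt’s actual configuration or API (which I’m not in a position to reproduce here); it just shows the concept of letting any metric, technical or business-oriented, drive the server count.

```python
# Hypothetical sketch of threshold-driven scaling on an arbitrary metric.
# NOT Cassatt's actual interface; just the concept: any digital metric,
# technical or business-oriented, can drive the server count up or down.
from dataclasses import dataclass

@dataclass
class ScalingPolicy:
    metric_name: str      # e.g. "cpu_util", "txns_per_sec", "active_sessions"
    lower: float          # below this threshold, release a server
    upper: float          # above this threshold, allocate another server

    def desired_servers(self, reading: float, current: int, minimum: int = 1) -> int:
        """Return the desired server count given the latest metric reading."""
        if reading > self.upper:
            return current + 1
        if reading < self.lower and current > minimum:
            return current - 1
        return current

# The same policy shape works whether the metric is CPU load or market activity.
orders = ScalingPolicy(metric_name="orders_per_minute", lower=200, upper=800)
print(orders.desired_servers(reading=950, current=4))   # -> 5 (scale out)
print(orders.desired_servers(reading=120, current=4))   # -> 3 (scale in)
```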

Now, having said all those nice things, I have to take Cassatt to task for one thing. Throughout the press release, Cassatt talks about Amazon- and Google-like infrastructure. However, Cassatt is doing nothing to replicate the APIs of either Amazon (which would be logical) or Google (which would make no sense at all). In other words, as announced, Cassatt is building on their own proprietary protocols and interfaces, with no ties to any external clouds or alternative cloud platforms. This is not a very “commodity cloud computing” friendly approach, and obviously I would like to see that changed. And, in truth, none of their direct competitors are doing so either (with the possible exception of the academic research project, EUCALYPTUS).

The short short of it is this: if you are looking at building a private cloud, don’t overlook Cassatt.

There was another announcement from Hyperic that I want to comment on, but I’m due to chat with a Hyperic executive soon, so I’ll reserve that post for later. The fall of 2008 remains a heady time for cloud computing, so expect many more of these types of posts in the coming weeks.

Change, Take Two

November 11, 2008

As those of you who follow me on Twitter already know, last Friday (11/7) was my last day at Alfresco. Although my time there was very short, it was one of the most valuable experiences of my almost 20-year career. I learned a great deal about the state of enterprise Java applications, the importance of ECM to almost every business, and the great advantage that open source has in the enterprise software market. Not bad for just short of six months. The company and its product are both incredible, and I owe a lot to the amazing field team at Alfresco, especially Matt Asay, Scott Davis, Luis Sala and Peter Monks.

Why am I moving on so soon, then? Well, I’m happy to say that I’m taking on the role of Marketing Manager/Technology Evangelist for Data Center Virtualization at Cisco Systems, Inc. (including Cloud Computing and Cisco’s Data Center 3.0 strategy, at least as it relates to virtualization). This is a dream job in many ways, as Cisco is still in the formative stages of its cloud computing strategy (for both service providers and end users), and I have the chance to be part of what will most likely be the most important producer of cloud- and virtualization-enabled networking software, services and equipment.

This is also an opportunity in which both my passion for blogging and my passion for cloud computing come directly into play. I am excited to work with both Doug Gourlay and Peter Linkin, who are incredibly smart people with a vision I can really get my head around.

Lest you fret that I’m going to go “all Cisco” here on The Wisdom of Clouds, let me assure you that this was pretty much the first point of negotiation when they approached me about the position. The Wisdom of Clouds (or whatever it morphs into–more on that later in the week) will remain my independent blog, in which I will endeavor to provide the same humble insight into the state of the cloud market and technology that I always have. Cisco specific posts will largely be confined to the Data Center group’s excellent blog, where I will become a regular contributor.

In the end, what sold me on joining Cisco was the excellent opportunity to explore the “uncharted territory” that is the network’s role in cloud computing. More on that in my next post.

Categories: cloud computing, personal

Salesforce.com Announces They Mean Business

November 5, 2008

I had some business to take care of in downtown San Francisco this morning, and on my way to my destination, I strolled past Moscone Center, the site of this year’s Dreamforce conference. The news coming out of that conference had piqued my interest a day earlier–I’ll get to that in a minute–but when I saw the graphics and catch phrase of the conference, I had to laugh. Not in mockery, mind you; it was just ironic.

There, spanning the vast entrances of both Moscone North and South, was nothing but blue skies and fluffy white…wait for it…clouds. In other words, the single theme of the conference visuals was, I can only assume, cloud computing. Not CRM, not “making your business better”, but an implementation mechanism; a way of doing IT. That’s the irony, in my mind: that in this amazing month or so of cloud computing history, one of the companies most aggressively associating themselves with cloud computing was a CRM company, not a compute capacity or storage provider.

Except Salesforce.com was already blurring the lines between PaaS and SaaS, even as they opened the door to their partners and customers taking advantage of IaaS where it makes sense. Even before Marc Benioff’s keynote yesterday, it was clear that force.com was far more than a way to simply customize the core CRM offering. Granted, most applications launched there took advantage of Salesforce.com data or services in one way or another, but there was clear evidence that the SF gang were targeting a PaaS platform that stood on its own, even as it provided the easiest way to draw customers into the CRM application.

The core of the new announcement, Sites, appears to be simply an extension of this. The idea behind Sites is to provide a web site framework that allows customers to address both intranet and Internet applications without needing to run any infrastructure on-premises. Of course, if you find the built-in SF integration makes adopting the CRM platform easier, then SF would be happy to help. Their goal, you see, is summed up in the conference catch phrase: “The End of Software”. (Of course, let’s just ignore the fact that force.com is a software development platform, any way you cut it.)

Skeptical that you can get what you need from a single PaaS offering? Here’s where the genius part of the day’s announcements comes in: simply utilize Amazon for the computing and storage needs that force.com can’t provide. Heck, yeah.

Allow me to observe something important here. First, note that Salesforce.com does not have an existing packaged software model; thus, there is no incentive whatsoever to offer an on-premises alternative. Touché, Microsoft. Second, note that Salesforce.com has no problem whatsoever partnering with someone who does something better than they do. En garde, Google. Finally, pay attention to the fact that Salesforce.com is expanding its business offerings in a way that both serves existing customers in increasingly powerful ways and invites new, non-CRM customers to use productive tools that just happen to include integration with the core offering. PaaS as a marketing hook, not necessarily a business model in and of itself. (If it succeeds on its own, that’s icing on the cake.)

In a three week period that has seen some of the most revolutionary cloud computing announcements, Salesforce.com managed to not only keep themselves relevant, but further managed to make a grab for significant cloud mindshare. Fluffy, white, cloud mindshare.