Archive for the ‘data center culture’ Category

VMware’s Most Important Cloud Research? It Might Not Be Technology

I was kind of aimlessly wandering around my Google Reader feeds the other day when I came across an interview with Carl Eschenbach, VMware’s executive vice president of worldwide field operations, titled “Q&A: VMware’s Eschenbach Outlines Channel Opportunities In The Virtual Cloud“. (Thanks to Rich Miller of Telematique for the link.) I started reading it expecting an article about how to sell vCloud, but it quickly became painfully clear that the hybrid cloud concept will cause some real disruption in the VMware channel.

The core problem is this:

  1. Today, VMware solution providers enjoy tremendous margins selling not only VMware products, but also associated services (often 5 to 7 times as much revenue from services as from software) and the server, storage, and networking hardware required to support a virtualized data center.

  2. However, vCloud introduces the concept of offloading some of that computing to a capacity service provider, in a relationship where the solution provider acts merely as a middleman for the initial transaction.

  3. Ostensibly, the solution provider then gets a one-time fee, but is squeezed out of recurring revenue for the service.

In other words, VMware’s channel is not necessarily pumped about the advent of cloud computing.

To Eschenbach’s credit, he acknowledges that this could be the case:

We think there’s a potential. And we’re doing some studies right now with some of our larger solution providers, looking at whether there’s a possibility that they not only sell VMware SKUs into the enterprise, but if that enterprise customer wants to take advantage of cloud computing from a service provider that our VARs, our resellers, actually sell the service providers’ SKUs. So, not only are they selling into the enterprise data center, but now if that customer wants to take advantage of additional capacity that exists outside the four walls of the data center, why couldn’t our solution providers, our VIP resellers, resell a SKU that Verizon (NYSE:VZ) or Savvis or SunGard or BT is offering into that customer. So they can have the capability of selling into the enterprise cloud and the service provider cloud on two different SKUs and still maintain the relationship with the customer.

In a follow-up question, Eschenbach declares:

[I]t’s not a lot different from a solution provider today selling into an account a VMware license that’s perpetual. Now, if you’re selling a perpetual license and you’re moving away from that and [your customer is] buying capacity on demand from the cloud, every time they need to do that, if they have an arrangement through a VAR or a solution provider to get access to that capacity, and they’re buying the SKU from them, they’re still engaged.

Does anyone else get the feeling that Eschenbach is talking about turning solution providers into cloud capacity brokerages? And that such a solution provider would act as a very inefficient capacity brokerage, choosing the service that provides the best margins and locking customers into those providers, instead of the service that gives the customer the most bang for the buck on any given day? Doesn’t this create an even better opportunity for pure, independent cloud brokerages to sell terms and pricing that favor the customer?

I think VMware may have a real issue on their hands, in which maintaining their amazing ecosystem of implementation partners may give way to more direct partnerships with specific cloud brokerages (for capacity) and system integrators (for consultative advice on optimizing between private and commercial capacity). The traditional infrastructure VAR gets left in the dust.

Part of the problem is that traditional IT service needs are often “apples and oranges” compared with online cloud computing needs. Serving traditional IT allows for specialization by region and by industry, and in either case the business opportunity is on-site implementation of a particular service or application system. Everyone has had to do it that way, so every business that goes digital (and eventually they all do) needs these services in full.

The cloud now dilutes that opportunity. If the hardware doesn’t run on site, there is no opportunity to sell installation services. If the software is purchased as SaaS, there is no opportunity to sell instances of turnkey systems and the services to install and configure that software. If the operations are handled largely by a faceless organization in a cloud capacity provider, there is no opportunity to sell system administration or runbook services for that capacity. If revenue is largely recurring, there is no significant one-time “payday” for selling someone else’s capacity.

So the big money opportunity for service providers in the cloud is strategic, with just a small amount of tactical work to go around.

One possible exception, however, is system management software and hardware. In this case, I believe customers need to consider owning their own service-level automation systems and monitoring the condition of all software they have running anywhere, whether behind or outside their own firewalls. There is a turnkey opportunity here, and I know many of the cloud infrastructure providers are talking appliances these days for that purpose. Installing and configuring these appliances will take specific expertise that should grow in demand over the next decade.

Unless innovative vendors such as RightScale and CohesiveFT kill that opportunity, too.

I know I’ve seen references by others to this channel problem. (In fact, Eschenbach’s interview also raised red flags for Alessandro Perilli of virtualization.info.) On the other hand, others are optimistic that it creates opportunity. So maybe I’m just being paranoid. However, if I were a solution provider with my wagon hitched to VMware’s star, I’d be thinking really hard about what my company will look like five years from now. And if I were a customer, I’d be looking closely at how I will be acquiring compute capacity in the same time frame.

"Follow the law" computing

A few days ago, Nick Carr worked his usual magic in analyzing Bill Thompson’s keen observation that every element of “the cloud” eventually boils down to a physical element in a physical location with real geopolitical and legal influences. This problem was first brought to my attention in a blog post by Leslie Poston noting that the Canadian government has refused to allow public IT projects to use US-based hosting environments for fear of security breaches authorized via the Patriot Act. Nick added another example with the following:

Right before the manuscript of The Big Switch was shipped off to the printer (“manuscript” and “shipped off” are being used metaphorically here), I made one last edit, adding a paragraph about France’s decision to ban government ministers from using Blackberrys since the messages sent by the popular devices are routinely stored on servers sitting in data centers in the US and the UK. “The risks of interception are real,” a French intelligence official explained at the time.

I hadn’t thought too much about the political consequences of the cloud since first reading Nick’s book, but these stories triggered a vision that I just can’t shake.

Let me explain. First, some setup…

One of the really cool visions that Bill Coleman used to talk about with respect to cloud computing was the concept of “follow the moon“; in other words, moving running applications around the globe over the course of the day to wherever processing power is cheapest: on the dark side of the planet. The idea was originally about operational costs in general, but these days Cassatt and others focus this vision on electricity costs.

The concept of “moving” servers around the world was greatly enhanced by the live motion technologies offered by all of the major virtualization infrastructure players (e.g., VMotion). With these technologies (as you all probably know by now), moving a server from one piece of hardware to another is as simple as clicking a button. Today, most of that convenience is limited to a single network, but with upcoming SLAuto federation architectures and standards, that inter-LAN motion will be greatly simplified over the coming years.

(It should be noted that “moving” software running on bare metal is possible, but it requires “rebooting” the server image on another physical box.)

The key piece of the puzzle is automation. Whether simple runbook-style automation (automating human-centric processes) or all-out SLAuto, automation allows for optimized decision making across hundreds, thousands or even tens of thousands of virtual machines. Today, most SLAuto is blissfully unaware of runtime cost factors, such as cost of electricity or cost of network bandwidth, but once the elementary SLAuto solutions are firmly established, this is naturally the next frontier to address.
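Just to make that next frontier concrete, here is a minimal sketch, in Python, of the kind of cost-aware placement decision an SLAuto layer might someday make. The sites, electricity prices, and the idea of a migration trigger are all invented for illustration; this is not any vendor’s actual API.

    from datetime import datetime, timezone

    # Hypothetical candidate sites with electricity prices ($/kWh) and UTC offsets.
    # A real SLAuto system would pull these from live price feeds and inventory.
    SITES = {
        "us-east":  {"price_per_kwh": 0.11, "utc_offset": -5},
        "eu-west":  {"price_per_kwh": 0.14, "utc_offset": 0},
        "ap-south": {"price_per_kwh": 0.08, "utc_offset": 5},
    }

    def is_night(site, now_utc):
        """Rough check: is this site currently on the 'dark side of the planet'?"""
        local_hour = (now_utc.hour + site["utc_offset"]) % 24
        return local_hour >= 20 or local_hour < 6

    def cheapest_night_site(now_utc=None):
        """Pick the lowest-cost site that is currently in its off-peak (night) window."""
        now_utc = now_utc or datetime.now(timezone.utc)
        candidates = {name: s for name, s in SITES.items() if is_night(s, now_utc)}
        if not candidates:
            candidates = SITES  # fall back to the globally cheapest site if nowhere is dark
        return min(candidates, key=lambda name: candidates[name]["price_per_kwh"])

    # A scheduler would call this periodically and trigger a live migration only
    # when the projected electricity savings outweigh the cost of the move itself.
    print("Next placement target:", cheapest_night_site())

The interesting design question is not the lookup itself but the trigger: how much cheaper does the other site have to be before moving the workload is worth it?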

But hold on…

As the articles I noted earlier suggest, early cloud computing users have discovered a hitch in the giddy-up: the borders and politics of the world DO matter when it comes to IT legislation.

If law will in fact have such an influence on cloud computing dynamics, it occurs to me that a new cost factor might outweigh simple operational cost when it comes to choosing where to run systems; namely, legality itself. As businesses seek to optimize business processes to deliver the most competitive advantage at the lowest cost, it is quite likely that they will seek out ways to leverage legal loopholes around the world to get around barriers in any one country.

Now, this is just pie-in-the-sky thinking on my part, and there are 1000 holes here, but I think it’s worth going through the exercise of thinking this out. The problem is complicated, as there are different laws that apply to data and to the processing being done on that data (as well as, in some jurisdictions, to the record keeping about both the data and the processing). However, there are technical solutions available today for both data and processing that could allow a company to mix and match the geographies that give them the best legal leverage for the services they wish to offer (a small routing sketch follows the list):

  • Database Sharding/Replication

    Conceptually, the simplest way to keep from violating any one jurisdiction’s data storage or privacy laws is simply not to put the data in that jurisdiction. This would be hard to do, if not for some really cool database sharding frameworks being released to the community these days.

    Furthermore, you can replicate the data in multiple jurisdictions, but use the best-placed instance of that data for processing happening in a given jurisdiction. In fact, by replicating a single data exchange into multiple jurisdictions at once, it becomes possible to move VMs from place to place without losing (at least read-only) access to that data.

  • VMotion/LiveMotion

    From a processing perspective, once you have solved legal access to the data from each jurisdiction, you can move your complete processing state from place to place as the work requires, without missing a beat. In fact, with networks getting as fast as they are, transfer times across the heart of the Internet may be almost as fast as on a LAN, and those times are usually measured in the low hundreds of milliseconds.

    So, run your registration process in the USA, your banking steps in Switzerland, and your gambling algorithms in the Bahamas. Or market your child-focused alternate reality game in the US, but collect personal information exclusively on servers in Madagascar. It may still be technically illegal from a US perspective, but who do they prosecute?
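As promised above, here is a minimal sketch of what jurisdiction-aware data routing could look like, again in Python. The residency rules, jurisdiction names, and shard endpoints are entirely hypothetical; in a real system those rules would come from legal counsel, not a hard-coded dictionary.

    # Hypothetical mapping of data categories to jurisdictions allowed to store them.
    # These rules are illustrative only; real residency rules come from legal review.
    RESIDENCY_RULES = {
        "personal_info": ["ca-central", "eu-west"],  # e.g., keep PII out of US shards
        "payment":       ["ch-zurich"],
        "gameplay":      ["us-east", "eu-west", "ap-south"],
    }

    # Hypothetical shard endpoints, one per jurisdiction.
    SHARDS = {
        "us-east":    "db://us-east.example.net/app",
        "eu-west":    "db://eu-west.example.net/app",
        "ca-central": "db://ca-central.example.net/app",
        "ch-zurich":  "db://ch-zurich.example.net/app",
        "ap-south":   "db://ap-south.example.net/app",
    }

    def shards_for(category):
        """Return the shard endpoints permitted to hold this category of data."""
        allowed = RESIDENCY_RULES.get(category, list(SHARDS))
        return [SHARDS[j] for j in allowed if j in SHARDS]

    def read_replica_near(category, compute_jurisdiction):
        """Prefer a replica co-located with the compute; otherwise any permitted shard."""
        allowed = RESIDENCY_RULES.get(category, list(SHARDS))
        if compute_jurisdiction in allowed:
            return SHARDS[compute_jurisdiction]
        return shards_for(category)[0]

    # A VM currently running in eu-west asks where to read personal information:
    print(read_replica_near("personal_info", "eu-west"))  # the eu-west replica
    print(shards_for("payment"))                          # only the Swiss shard

The point is less the code than the shape of it: the residency decision becomes a cheap lookup that can be made on every request, even as VMs move from place to place.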

Again, I know there are a million roadblocks here, but I also know both the corporate world and underworld have proven themselves determined and ingenious technologists when it comes to these kinds of problems.

As Leslie noted, our legislators must understand the economic impact of a law meant for a physical world on an online reality. As Nick noted, we seem to be treading into that mythical territory marked on maps with the words “Here Be Dragons”, and the dragons are stirring.

John Willis Honors Me with Inaugural Cloud Cafe Podcast

April 5, 2008

I am the inaugural guest in John Willis’s Cloud Cafe podcast series. I couldn’t be more honored.

Those of you who have been following this whole “what is Cloud Computing” debate may have had the opportunity to see the conversations between several bloggers regarding how to define cloud computing and related technologies. John Willis, of the John Willis ESM Blog, is making a key contribution by taking on the challenge of classifying vendors in this space. As I had some issues with his classification of Cassatt, he thought the best way to resolve that was to invite me to launch his new series.

Two things were resolved in this podcast.

First, I learned firsthand what a classy guy John is. He handled the interview very well, let me talk my butt off (a talent I got from my minister mother, I think), and offered several observations over the course of the conversation that showed his tremendous experience in the enterprise systems management space. I feel quite sheepish that I ever hinted that he wasn’t being forthright with his audience. Lesson gratefully learned; apology gladly offered.

Second, John and I were always much closer in our visions of cloud computing, utility computing, and enterprise systems than it might have appeared at first. Our conversation ranged from the aforementioned “what is cloud computing” question to topics such as:

  • the relationship between cloud and utility computing,
  • the cultural challenge facing enterprises seeking the economic returns of these technologies,
  • how cloud and utility computing revolutionize performance and capacity planning, and
  • where Hadoop and CloudDB fit into all of this.

In the end, I think John and I agreed that cloud computing is more than just virtualization on the Internet. I very much enjoyed the conversation, and I hope you will take the time to listen to this podcast.

Got questions or comments? Post them here or on John’s blog; I will check both.

Finally, I will be working to get Cassatt’s entry in John’s classifications updated as a result of the discussion.

Fun with Simon

February 29, 2008

Simon Wardley created a couple of posts this week that make for good smiles. The first is his maturity model for cloud computing (the image is in his post).


This one I agree with. Very funny, but funny because it reflects truth.

The second is a post on open source computing. I completely disagree with the notions that open source can keep up with closed source in terms of innovation (Anne Zelenka makes a great argument here) and that closed source is bad for ducks (see Simon’s post).

However, I do believe that standardization spreads faster with open source than with closed source. For what it’s worth, I would also like to see a major utility computing platform release its technology to open source. (Well, at least the components that are required for portability.) I just wonder why any of them would without pressure from the market.

My equations would reflect the “Schrödinger’s Cat” aspects of closed source products prior to the introduction of accepted standards:

open source == kindness to ducks
closed source == ambivalence towards ducks; could go either way
🙂

The importance of operations to online services customers

February 7, 2008

I hadn’t caught up on Gabriel Morgan’s blog in a while, so I’m a week or so late in seeing his interesting post on the importance of operations features in a SaaS product offering. Gabriel works at Microsoft on the team that is looking at the Software plus Services offerings introduced by Ray Ozzie a few months ago. According to Gabriel, Microsoft, being a software product company, has occasionally been slow to learn a key lesson in the online services game:

In the traditional packaged software business, product features define what a product is but Customer 2.0 expects to have direct access to operational features within the Service Offering itself.

Take for example Microsoft Word. Product Features such as Import/Export, Mail Merge, Rich Editing, HTML support, Charts and Graphs and Templates are the types of features that Customer 1.0 values most in a product. SaaS Products are much different because Customer 2.0 demands it. Not only must a product include traditional product features, it must also include operational features such as Configure Service, Manage Service SLA, Manage Add-On Features, Monitor Service Usage Statistics, Self-Service Incident Resolution as well. In traditional packaged software products, these features were either supported manually, didn’t exist or were change requests to a supporting IT department.

In other words “Service Offering = (Product Features) + (Operational Features)”.

Wow. What a simple way to state something I’ve been concerned about for some time now: as you move your enterprise into the cloud, will your service providers (be they SaaS, HaaS, PaaS or others) provide you with the tools and data you need to successfully operate your business? How will you be able to interact with both the service provider’s software and personnel to make sure those operations run a) according to your wishes, and b) with no negative impact on your business?

Gabriel goes on:

Guess who builds and supports these Operational Features? Your friendly neighborhood IT department in conjunction with the Operations and Service Offering product group. This raises the quality bar for your traditional IT shop.

Heck, yeah. And guess what? Should a business do something crazy (oh, say, select SaaS products from more than one vendor to integrate into its varied business processes), it will need not only to build solid operational ties with each vendor, but also to integrate those operational features across vendors. Think about that.

How best to do that? You shouldn’t be surprised when I tell you that a key element of the solution is SLAuto under the control of the business. Managing SaaS systems to business-defined service levels will be a critical role of IT in the cloud-scape of tomorrow.
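To make that a little more concrete, here is a minimal sketch of business-controlled SLAuto across a couple of SaaS vendors, in Python. The probe functions, metric names, and thresholds are all invented; real providers expose (or fail to expose) very different operational interfaces.

    from dataclasses import dataclass

    @dataclass
    class ServiceLevel:
        """A business-defined target, independent of any one vendor's dashboard."""
        name: str
        max_latency_ms: float
        min_availability: float  # e.g., 0.999 means "three nines"

    # Hypothetical per-vendor probes; in practice each would call that vendor's
    # operational API (if it offers one) or run a synthetic transaction.
    def probe_crm():
        return {"latency_ms": 240.0, "availability": 0.9995}

    def probe_billing():
        return {"latency_ms": 900.0, "availability": 0.997}

    PORTFOLIO = {
        "crm":     (ServiceLevel("crm", 500.0, 0.999), probe_crm),
        "billing": (ServiceLevel("billing", 500.0, 0.999), probe_billing),
    }

    def evaluate():
        """One SLAuto pass: compare each vendor's observed behavior to *our* targets."""
        breaches = []
        for name, (sla, probe) in PORTFOLIO.items():
            obs = probe()
            if obs["latency_ms"] > sla.max_latency_ms or obs["availability"] < sla.min_availability:
                breaches.append((name, obs))
        return breaches

    for service, observation in evaluate():
        print(f"Service-level breach at {service}: {observation}")  # escalate or fail over

The important design point is that the service levels are defined and evaluated by the business, not buried inside each vendor’s own reporting.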

Cloud computing heats up

February 6, 2008

Today’s reading has been especially interesting, as it has become clear that a) “cloud computing” is a concept that more and more IT people are beginning to understand and dissect, and b) there is the corresponding denial that comes with any disruptive change. Let me walk you through my reading to demonstrate.

I always start with Nick Carr, and today he did not disappoint. It seems that IBM has posited that a single (distributed) computer could be built that could run the entire Internet, and expand as needed to meet demand. Of course, this would require the use of Blue Gene, an IBM technology, but man does it feed right into Nick’s vision of the World Wide Computer. To Nick’s credit, he seems skeptical; I know I am. However, it is a worthy thought experiment to consider how one would design distributed computing to be more efficient if one had control over the entire architecture from chip to system software. (Er, come to think of it, I could imagine Apple designing a compute cloud…)

I then came across an interesting breakdown of cloud computing by John M Willis, who appears to contribute to RedMonk. He breaks down the cloud according to “capacity-on-demand” options, and is one of the few to include a “turn your own capacity into a utility” component. Unfortunately, he needs a little education on these particular options, but I did my best to set him straight. (I appreciate his kind response to my comment.) If you are trying to understand how to break down the “capacity-on-demand” market, this post (along with the comments) is an excellent starting place.

Next on the list was a GigaOM post by Nitin Borwankar stating his concept of “Data Property Rights” and expressing some skepticism about the “data portability” movement. At first I was concerned that he was going to make an argument that reinforced certain cloud lock-in principles, but he actually makes a lot of sense. I still want to see Data Portability as an element of his basic rights list, but he is correct when he says that if the other elements are handled correctly, data portability will be a largely moot issue (though I would argue it remains a “last resort” property right).

Dana Blankenhorn at ZDNet/open-source covers a concept being put forth by Etelos, a company I find difficult to describe, but that seems to be an “application-on-demand” company (interesting concept). “Opportunity computing“, as described by Etelos CEO Danny Kolke, covers the complete set of software and infrastructure required to meet a market opportunity on a moment’s notice. “Opportunity computing is really a superset of utility computing,” Kolke notes. Blankenhorn adds,

“It’s when you look at the tools Kolke is talking about that you begin to get the picture. He’s combining advertising, applications, the cash register, and all the relationships which go into those elements in his model. “

In other words, it seems like prebuilt ecommerce, CRM and other applications that can quickly be customized and deployed as needed, to the hosting solution of your choice. My experience with this kind of thing is that it is impossible to satisfy all of the people, all of the time, but I’m fascinated by the concept. Sort of Platform as a Service with a twist.

Finally, the denial. The blog “pupwhines” remains true to its name as its author whimpers about how Nick “has figured out that companies can write their own code and then run it in an outsourced data center.” Those of you who have been following utility/cloud computing know that this misses the point entirely. It’s not outsourcing capacity that is new, but the way it is outsourced: no contracts for labor, no work-order charges for capacity changes, and so on. In other words, you just pay for the compute time.

With SLAuto, it gets even more interesting: you would just tell the cloud to “run this software at these service levels”, and the who, what, where, and how would be completely hidden from you. To equate that with the old IBM/Accenture/{insert Indian company here} mode of outsourcing is like comparing your electric utility to renting generators from your neighbors. (OK, not a great analogy, but you get the picture.)
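If that sounds abstract, here is a minimal sketch of what such a declaration might look like, expressed as a plain Python structure. Every field name here is hypothetical; no cloud provider accepts exactly this today.

    # A hypothetical service-level declaration handed to a cloud's SLAuto layer.
    # The customer states *what* must hold; the provider decides who, what, where, and how.
    deployment_request = {
        "image": "registry.example.com/orders-service:2.3",
        "service_levels": {
            "p95_latency_ms": 300,         # 95th-percentile response time
            "monthly_availability": 0.9995,
            "max_cost_per_hour_usd": 4.00,
        },
        "constraints": {
            "data_jurisdictions": ["eu-west"],  # ties back to "follow the law" computing
        },
    }

    # Submission is deliberately opaque: no instance types, no rack locations,
    # no labor contracts. You just pay for compute time that meets the declaration.
    # cloud.submit(deployment_request)  # hypothetical client call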

Another interesting data point for measuring the booming interest in utility and cloud computing is the fact that my Google Alerts emails for both terms have grown from one or two links a day to five or more links each and every day. People are talking about this stuff because the economics are so compelling it’s impossible not to. Just remember to think before you jump on in.

It’s the labor, baby…

January 28, 2008

I’m getting ready to go back to work on Wednesday, so I decided today (while Owen is at school and Mia has Emery) to get caught up on some of the blog chatter out there. First, read Nick Carr’s interview with GRIDToday. Damn it, I wish this were the sentiment he communicated in “The Big Switch“, not the “it’s all going to hell” tone the book actually conveyed.

Second, Google Alerts, as always, is an excellent source, and I found an interesting contrarian viewpoint about cloud computing from Robin Harris (apparently a storage marketing consultant). Robin argues there are two myths that are propelling “cloud computing” as a buzz phrase, but that private data centers will never go away in any real quantity.

Daniel Lemire responds with a short-but-sweet post that points out the main problem with Robin’s thinking: he assumes that hardware is the issue, and ignores the cost of labor required to support that hardware. (Daniel also makes a point about latency being the real issue in making cloud computing work, not bandwidth, but I won’t address that argument here, especially with Cisco’s announcement today.)

The cost of labor, combined with genuine economies of scale, is the real core of the economics of cloud computing. Take this quote from Nick Carr’s GRIDToday interview:

If you look at the big trends in big-company IT right now, you see this move toward a much more consolidated, networked, virtualized infrastructure; a fairly rapid shift of compressing the number of datacenters you run, the number of computers you run. Ultimately … if you can virtualize your own IT infrastructure and make it much more efficient by consolidating it, at some point it becomes natural to start to think about how you can gain even more advantages and more cost savings by beginning to consolidate across companies rather than just within companies.

Where does labor come into play in that quote? Well, consider “compressing the number of datacenters you run”, and add to that the announcement that the Google datacenter in Lenoir, North Carolina will hire a mere 200 workers (up to 4 times as many as announced Microsoft and Yahoo data centers). This is a datacenter that will handle traffic for millions of people and organizations worldwide. If, as Robin implies, corporations will take advantage of the same clustering, storage and network technologies that the Googles and Microsofts of the world leverage, then certainly the labor required to support those data centers will go down.

The rub here is that, once corporations experience these new economies of scale, they will begin to look for ways to push the savings as far as possible. Now the “consolidat[ion] across companies rather than just within companies” takes hold, and companies begin to shut down their own datacenters and rely on the compute utility grid. It’s already happening with small businesses, as Nick, I, and many others have pointed out. Check out Don MacAskill’s SmugMug blog if you don’t believe me. Or GigaOM’s coverage of Standout Jobs. It may take decades, as Nick notes, but big business will eventually catch on. (Certainly those startups that turn into big businesses using the cloud will drive some of these economics.)

One more objection to Robin’s post. To argue that “networks are cheap” is a fallacy, he notes that networks still lag in speed behind processors, memory, bus speeds, and so on. Unfortunately, that misses the point entirely. All that is needed are network speeds that let functions complete in a time that is acceptable to human users and economically viable for system-to-system communications; that threshold is independent of the network’s speed relative to other components. (If a page interaction completes in a couple hundred milliseconds over the wire, it doesn’t matter that memory is orders of magnitude faster than the network.) For example, my choice of Google Analytics to monitor blog traffic depends solely on my satisfaction with the speed of the conversation. I don’t care how fast Google’s hardware is, and all evidence seems to point to the fact that their individual systems and storage aren’t exceptionally fast at all.