The enterprise "barrier-to-exit" to cloud computing

December 2, 2008

An interesting discussion ensued on Twitter this weekend between myself and George Reese of Valtira. George–who recently posted some thought-provoking pieces on O’Reilly Broadcast about cloud security, and is writing a book on cloud computing–argued strongly that the benefits gained from moving to the cloud outweigh any additional costs that may result. In fact, in one tweet he noted:

IT is a barrier to getting things done for most businesses; the Cloud reduces or eliminates that barrier.

I reacted strongly to that statement; I don’t buy that IT is that bad in all cases (though some certainly is), nor do I buy that simply eliminating a barrier to getting something done makes it worthwhile. Besides, the barrier being removed isn’t strictly financial; it is corporate IT policy. I can build a kick-butt home entertainment system for my house for $50,000; that doesn’t mean it’s the right thing to do.

However, as the conversation unfolded, it became clear that George and I were coming at the problem from two different angles. George was talking about many SMB organizations, which really can’t justify the cost of building their own IT infrastructure, but have been faced with a choice of doing just that, turning to (expensive and often rigid) managed hosting, or putting a server in a colo space somewhere (and maintaining that server). Not very happy choices.

Enter the cloud. Now these same businesses can simply grab capacity on demand, start and stop billing at their leisure, and get truly world-class power, virtualization and networking infrastructure without having to put an ounce of thought into it. Yeah, it costs more than simply running a server would, but when you add up the infrastructure, managed hosting fees and colo leases, the cloud almost always looks like the better deal. At least that’s what George claims his numbers show, and I’m willing to accept that. It makes sense to me.

I, on the other hand, was thinking of medium to large enterprises that already own significant data center infrastructure, and already have sunk costs in power, cooling and assorted infrastructure. For this class of business, those sunk costs must be added to server acquisition and operation costs when weighing them against the cost of obtaining the same services from the cloud. In this case, these investments often tip the balance, and it becomes much cheaper to use existing infrastructure (though with some automation) to deliver fixed capacity loads. As I discussed recently, the cloud generally only gets interesting for loads that are not running 24×7.
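To make the sunk-cost argument concrete, here is a toy break-even calculation. Every figure in it is invented for illustration; it is not George's data or anyone else's, just the shape of the math:

```python
HOURS_PER_MONTH = 730  # average hours in a month

def monthly_cloud_cost(hours_used: float, rate_per_hour: float = 0.12) -> float:
    """Pay-as-you-go: cost scales with hours actually consumed."""
    return hours_used * rate_per_hour

def monthly_owned_cost(amortized_server: float = 25.0,
                       power_and_cooling: float = 20.0,
                       admin_share: float = 30.0) -> float:
    """Owned gear: a fixed monthly cost regardless of utilization, and for a
    large enterprise the power/cooling portion is already sunk anyway."""
    return amortized_server + power_and_cooling + admin_share

# A 24x7 load: the fixed owned cost (75.0) beats the metered cloud (87.60).
always_on = monthly_cloud_cost(HOURS_PER_MONTH)
# A load running about 8 hours a day: the cloud wins handily (~29.20).
part_time = monthly_cloud_cost(HOURS_PER_MONTH / 3)
```

With these made-up rates, the crossover sits right where the post argues: fixed capacity favors the data center you already own, intermittent capacity favors the meter.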

(George actually notes a class of applications that sadly are also good candidates, though they shouldn’t necessarily be: applications that IT just can’t or won’t get to on behalf of a business unit. George claims his business makes good money meeting the needs of marketing organizations that have this problem. Just make sure the ROI is really worth it before taking this option, however.)

This existing investment in infrastructure therefore acts almost as a “barrier-to-exit” for these enterprises when they consider moving to the cloud. It seems to me highly ironic, and perhaps somewhat unique, that certain trails in the cloud computing market will be blazed not by organizations with multiple data centers and thousands upon thousands of servers, but by the little mom-and-pop shop that used to own a couple of servers in a colo somewhere, finally shut them down and turned to Amazon. How cool is that?

The good news, as I hinted at earlier, is that there is technology that can be justified financially–through capital equipment and energy savings–which in turn can “grease the skids” for cloud adoption in the future. Ask the guys at 3tera. They’ll tell you that their cloud infrastructure allows an enterprise to optimize infrastructure usage while enabling workload portability (though not live workload portability) between cloud providers running their stuff. VMWare introduced their vCloud initiative specifically to make enterprises aware of the work they are doing to allow workload portability across data centers running their stuff. Cisco (my employer) is addressing the problem as well. In fact, there are several great products out there that can give you cloud technology in your enterprise data center and open the door to cloud adoption now (with things like cloudbursting) and in the future.

If you aren’t considering how to “cloud enable” your entire infrastructure today, you ought to be getting nervous. Your competitors probably are looking closely at these technologies, and when the time is right, their barrier-to-exit will be lower than yours. Then, the true costs of moving an existing data center infrastructure to the cloud will become painfully obvious.

Many thanks to George for the excellent discussion. Twitter is becoming a great venue for cloud discussions.


Why I Think CohesiveFT’s VPN-Cubed Matters

October 28, 2008

You may have seen some news about CohesiveFT’s new product today–in large part thanks to the excellent online marketing push they made in the days preceding the announcement. (I had a great conversation with Patrick Kerpan, their CTO.) Normally, I would get a little suspicious about how big a deal such an announcement really is, but I have to say this one may be for real. And so do others, like Krishnan Subramanian of CloudAve.

CohesiveFT’s VPN-Cubed targets what I call “the last great frontier of the cloud”: networking. Specifically, it focuses on a key problem–data security and control–in a unique way. The idea is that VPN-Cubed gives you software that allows you to create a VPN of sorts that is under your personal control, regardless of where the endpoints reside, on or off the cloud. Think of it as creating a private cloud network, capable of tying systems together across a plethora of cloud providers, as well as your own network.

The use case architecture is really very simple.


Diagram courtesy of CohesiveFT

VPNCubed Manager VMs run in the network infrastructure that you wish to add to your cloud VPN. The manager then acts as a VPN gateway for the other VMs in that network, which can then communicate with other systems on the VPN via virtual NICs assigned to the VPN. I’ll stop there, because networking is not my thing, but I will say it is important to note that this is a portable VPN infrastructure, which you can run on any compatible cloud, and CohesiveFT’s business is to create images that will run on as many clouds as possible.
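As a rough mental model of that gateway role (this is not CohesiveFT's actual implementation; the overlay range and addresses below are invented), each guest VM simply needs a routing rule that sends overlay traffic to the local manager and everything else out the normal default route:

```python
import ipaddress

VPN_SUBNET = ipaddress.ip_network("10.10.0.0/16")  # assumed overlay range
VPN_MANAGER = "10.10.0.1"       # hypothetical manager VM acting as gateway
DEFAULT_GATEWAY = "192.168.1.1"  # the network's ordinary exit

def next_hop(destination: str) -> str:
    """Route overlay-subnet traffic via the VPN manager; send everything
    else out the default gateway, untouched by the VPN."""
    if ipaddress.ip_address(destination) in VPN_SUBNET:
        return VPN_MANAGER
    return DEFAULT_GATEWAY

print(next_hop("10.10.4.7"))      # a VPN peer, possibly in another cloud
print(next_hop("93.184.216.34"))  # ordinary Internet traffic
```

The point of the sketch is the portability claim: this decision is made per guest, so it works the same whether the guest runs in your data center or on any cloud where the manager image runs.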

Patrick made a point of using the word “control” a lot in our conversation. I think this is where VPN-Cubed is a game changer. It is one of the first products I’ve seen target isolating your stuff in someone else’s cloud, protecting access and encryption in a way that leaves you in command–assuming it works as advertised…and I have no reason to suspect otherwise.

Now, will this work with PaaS? No. SaaS? No. But if you are managing your applications in the cloud, even a hybrid cloud, and are concerned about network security, VPN-Cubed is worth a look.

What are the negatives here? Well, first, I think VPN is one feature of a larger cloud networking story. This is the first and, so far, only product of its kind on the market, but I have a feeling other network vendors looking at this problem will address it as part of a more comprehensive solution.

Still, CohesiveFT has something here: it’s simple, it is entirely under your control, and it serves a big immediate need. I think we’ll see a lot more about this product as word gets out.

VMWare’s Most Important Cloud Research? It Might Not Be Technology

I was kind of aimlessly wandering around my Google Reader feeds the other day when I came across an interview with Carl Eschenbach, VMWare’s executive vice president of worldwide field operations, titled “Q&A: VMware’s Eschenbach Outlines Channel Opportunities In The Virtual Cloud”. (Thanks to Rich Miller of Telematique for the link.) I started reading the article expecting it to be all about how to sell vCloud, but throughout, it was painfully clear that the hybrid cloud concept will cause some real disruption in the VMWare channel.

The core problem is this:

  1. Today, VMWare solution providers enjoy tremendous margins selling not only VMWare products, but also associated services (often 5 to 7 times as much revenue from services as from software), and the server, storage and networking hardware required to support a virtualized data center.

  2. However, vCloud introduces the concept of offloading some of that computing to a capacity service provider, in a relationship where the solution provider acts merely as a middleman for the initial transaction.

  3. Ostensibly, the solution provider then gets a one-time fee, but is squeezed out of recurring revenue for the service.

In other words, VMWare’s channel is not necessarily pumped about the advent of cloud computing.

To Eschenbach’s credit, he acknowledges that this could be the case:

We think there’s a potential. And we’re doing some studies right now with some of our larger solution providers, looking at whether there’s a possibility that they not only sell VMware SKUs into the enterprise, but if that enterprise customer wants to take advantage of cloud computing from a service provider that our VARs, our resellers, actually sell the service providers’ SKUs. So, not only are they selling into the enterprise data center, but now if that customer wants to take advantage of additional capacity that exists outside the four walls of the data center, why couldn’t our solution providers, our VIP resellers, resell a SKU that Verizon (NYSE:VZ) or Savvis or SunGard or BT is offering into that customer. So they can have the capability of selling into the enterprise cloud and the service provider cloud on two different SKUs and still maintain the relationship with the customer.

In a follow up question, Eschenbach declares:

[I]t’s not a lot different from a solution provider today selling into an account a VMware license that’s perpetual. Now, if you’re selling a perpetual license and you’re moving away from that and [your customer is] buying capacity on demand from the cloud, every time they need to do that, if they have an arrangement through a VAR or a solution provider to get access to that capacity, and they’re buying the SKU from them, they’re still engaged.

Does anyone else get the feeling that Eschenbach is talking about turning solution providers into cloud capacity brokerages? Furthermore, that such a solution provider now acts as a very inefficient capacity brokerage? Specifically, choosing the service that provides them with the best margins and locking customers into those providers, instead of the service that gives the customer the most bang for the buck on any given day? Doesn’t this create an even better opportunity for the more pure, independent cloud brokerages to sell terms and pricing that favor the customer?
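The difference between the two brokerage models boils down to which objective function gets optimized. A toy sketch, with invented providers, prices, and margins:

```python
# Hypothetical hourly prices to the customer, and the cut each provider
# pays a reseller for routing business its way. All numbers are made up.
spot_prices = {"ProviderA": 0.11, "ProviderB": 0.09, "ProviderC": 0.14}
reseller_margins = {"ProviderA": 0.30, "ProviderB": 0.10, "ProviderC": 0.45}

def independent_broker_choice(prices: dict) -> str:
    """A pure brokerage routes the customer to today's cheapest capacity."""
    return min(prices, key=prices.get)

def margin_driven_choice(margins: dict) -> str:
    """A margin-driven reseller routes to whoever pays the reseller most."""
    return max(margins, key=margins.get)

print(independent_broker_choice(spot_prices))      # best deal for the customer
print(margin_driven_choice(reseller_margins))      # best deal for the reseller
```

Same marketplace, different winner: that gap is exactly the inefficiency the pure, independent brokerages can sell against.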

I think VMWare may have a real issue on their hands, in which maintaining their amazing ecosystem of implementation partners may give way to more direct partnerships with specific cloud brokerages (for capacity) and system integrators (for consultative advice on optimizing between private and commercial capacity). The traditional infrastructure VAR gets left in the dust.

Part of the problem is that traditional IT service needs are often “apples and oranges” compared to cloud computing needs. Serving traditional IT allows for specialization based on region and industry, and in both cases the business opportunity is on-site implementation of a particular service or application system. Everyone has to do it that way, so every business that goes digital (and eventually they all have) needs these services in full.

The cloud now dilutes that opportunity. If the hardware doesn’t run on site, there is no opportunity to sell installation services. If the software is purchased as SaaS, there is no opportunity to sell instances of turnkey systems and the services to install and configure that software. If the operations are handled largely by a faceless organization in a cloud capacity provider, there is no opportunity to sell system administration or runbook services for that capacity. If revenue is largely recurring, there is no significant one-time “payday” for selling someone else’s capacity.

So the big money opportunity for service providers in the cloud is strategic, with just a small amount of tactical work to go around.

One possible exception, however, is system management software and hardware. In this case, I believe that customers need to consider owning their own service level automation systems and to monitor the conditions of all software they have running anywhere, either behind or outside of their own firewalls. There is a turnkey opportunity here, and I know many of the cloud infrastructure providers are talking appliance these days for that purpose. Installing and configuring these appliances is going to take specific expertise that should grow in demand over the next decade.

Unless innovative vendors such as RightScale and CohesiveFT kill that opportunity, too.
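A service-level automation system of the kind described above ultimately reduces to checking measurements against a policy, wherever the software happens to run. A minimal sketch of that evaluation step; the thresholds and targets here are hypothetical:

```python
def sla_met(latencies_s: list, max_latency: float = 2.0,
            target: float = 0.99) -> bool:
    """True if the fraction of health checks completing within
    max_latency seconds meets the availability target."""
    within = sum(1 for t in latencies_s if t <= max_latency)
    return within / len(latencies_s) >= target

# One slow check in 100 still meets a 99% target; two do not.
assert sla_met([0.2] * 99 + [5.0])
assert not sla_met([0.2] * 98 + [5.0, 5.0])
```

The hard (and saleable) part is not this arithmetic but collecting trustworthy samples from systems behind your firewall and inside someone else's cloud alike, which is exactly what the appliances aim at.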

I know I’ve seen references by others to this channel problem. (In fact, Eschenbach’s interview also raised red flags for Alessandro Perilli of virtualization.info.) On the other hand, others are optimistic that it creates opportunity. So maybe I’m just being paranoid. However, if I were a solution provider with my wagon hitched to VMWare’s star, I’d be thinking really hard about what my company will look like five years from now. And if I were a customer, I’d be looking closely at how I will be acquiring compute capacity in the same time frame.

Cracks in the Clouds, but the Sky Ain’t Fallin’

Update: I accidentally left off the reference links in the first paragraph. This is corrected now. My apologies to all that were inconvenienced.

These last couple of weeks have been filled with challenges for those preaching the gospel of cloud computing. First it was a paper delivered by three Microsoft researchers describing in detail the advantages of small, geo-diverse, distributed data center designs over “mega-datacenters”, a true blow to the strategy of many a cloud provider and–frankly–large enterprise. Second, the Wall Street Journal published a direct indictment of the term “cloud computing”, in which Ben Worthen carefully explains how the term ended up well beyond the boundaries of meaning. Added to the dog pile was Larry Ellison’s apparently delightful rant about the meaninglessness of the term, and an apparent quote in which he doubts the business model of providing capacity at scale for less than a customer could do it on their own.

Frankly, I think there’s some truth to the notion that cloud computing, and many of the ideas people have about it, are beginning to lose their luster. We seem to have passed through a tollgate of late, from the honeymoon era of “cloud computing will save the world” to the evolutionary phase of “oh, crud, we now have to make this stuff work”. While the marketing continues unabated, there are some stories creeping out of the “cloud-o-sphere” of realizations about the economics and technical realities of dynamically offloading compute capacity. Solutions are being explored to “the little things” that support the big picture: monitoring, management (of both systems and people) and provisioning. Gaps are being identified, and business models are being criticized. We are all coming to the conclusion that there is a heck of a lot of work left to be done here.

Doubt me? Take a look at the following examples of “oh, crud” moments of the past few months:

  • I can’t for the life of me find the link, but about three months ago I read a quote from one of the recent successful Amazon EC2-based start ups noting that as their traffic and user base grows, they believe the economics of using the cloud will change, and moving some core capacity to a private cloud might make more sense.

    Update: John M Willis pointed me to the reference; a quote from item 8 of his “10 Reasons for NOT Using a Cloud” post, which in turn references a Cloud Cafe podcast in which “Brad Jefferson the CEO of Animoto suggested at some point he might actually flip the cloud.” Read the post and listen to the podcast for more. Thanks, John.

  • Mediafed’s Alan Williamson presents a keynote at CloudCamp London in July in which he notes that “[w]e’ve come to realize we cannot rely on putting all our eggs in one basket”, and shows off their dual-provider architecture utilizing Amazon EC2 and Flexiscale.

  • A court case in the United States demonstrates the legal perils that still have to be navigated in terms of constitutional protections and legal rights for those who place data in the cloud. The case shows that, today, too much depends on each provider’s Terms of Service to provide a consistent basis for placing sensitive data in the cloud. Even mighty Amazon cannot be trusted to run a business infrastructure alone.

Some are even hinting that cloud computing is stupid, and that it will fail to be the disruptive technology it is touted as being.

That last statement is where I part ways with the critics. Cloud computing–all of it, public and private–will be disruptive to the way IT departments acquire and allocate compute functionality and capacity. To me, this statement is true whether or not it turns out that it would be better to build 500 small, manageable, container based data centers than 5 megaliths. It will be true even if the term gets used to describe anti-virus software. There is great momentum pushing us towards huge gains in IT efficiency, and it makes little economic sense not to follow through on that. Like any complex system, there will be winners and losers, but the winners will strengthen the system overall.

Here’s where I see winning technologies developing in the cloud:

  • “Cloudbursting” – This is the most logical initial use of the cloud by most enterprises; grabbing spare capacity on a short term basis when owned capacity is maxed out. It virtually eliminates the pressure to predict peak load accurately, and gives enterprises a “buffer zone” should they need to scale up resources for an application.

  • Cloud OS – The data center is becoming a unit of computing in and of itself, and as such, it needs an OS. However, the ultimate vision for such an OS is to grow beyond the borders of a single data center, or even a single organization, and allow automated, dynamic resource location and allocation across the globe from an open market system. That’s the goal, anyway…

  • SaaS/PaaS – Most of my readers will know the SaaS debate inside and out: is it better to take advantage of the agile and economic nature of online applications, or is it both safer and, perhaps, cheaper in the long term to keep things in house? I think SaaS is winning converts every day and will likely win nearly everyone for some applications. PaaS gives you the same quick/cheap start up economics as SaaS, but for software development and deployment infrastructure. I’ll post more on PaaS soon.

  • Mashups/WOA – Much has been said of late about the successes of loosely coupled REST-style Internet integrations using published URL-based APIs over the traditional “contract heavy” SOAP/WS-* world. It makes sense. Most applications don’t need RMI-style contracts if all they are trying to do is retrieve data to recombine with other data into new forms. If it remains as easy as it has been for the last five years or so, mashups will be an expected component of most web apps, not an exceptional one.

  • “Quick start” data centers and data center modules – Between private clouds made of fail-in-place data centers in shipping containers, and powerful Infrastructure as a Service offerings from the likes of GoGrid, Flexiscale, Amazon and others, both startups and large enterprises have new ways to quickly acquire, scale up and optimize IT capacity. Acquiring that capacity through traditional means is starting to look inefficient (even though I have seen no proof of this, as of yet).
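The cloudbursting pattern in the first bullet above is conceptually tiny: serve what you can from owned capacity and rent only the overflow. A sketch, with "servers" as a made-up unit of capacity:

```python
def plan_capacity(demand: int, owned: int) -> dict:
    """Split a load between owned servers and on-demand cloud instances.
    Owned capacity is used first; only the overflow bursts to the cloud."""
    return {
        "on_premises": min(demand, owned),
        "cloud_burst": max(0, demand - owned),
    }

print(plan_capacity(120, 100))  # peak day: rent 20 servers' worth on demand
print(plan_capacity(80, 100))   # normal day: no cloud spend at all
```

This is why cloudbursting removes the pressure to predict peak load: you size owned capacity for the typical case and let the `cloud_burst` term absorb the forecasting error.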

Even if it never makes sense for a single Fortune 500 company to shut down all of its data centers, there will be a permanent change to the way IT operations are run–a change focused on optimizing the use of hardware to meet increasing service demands. Accounting for IT will change forever, as OpEx becomes dominant over CapEx, and flexibility is the name of the game. Capacity planning will change forever, as developers can grab capacity from the corporate pool, watch system utilization as demand grows, tune the application as needed, and add hardware only when justified by trend analysis. Start-up economics will change forever, as building new applications that require large amounts of infrastructure no longer requires infrastructure investment.
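That "add hardware only when justified by trend analysis" step could be as simple as a naive linear projection over utilization samples. All figures below are invented for illustration:

```python
def projected_utilization(samples: list, horizon: int) -> float:
    """Naive linear projection from evenly spaced utilization samples:
    extend the average slope of the series 'horizon' steps forward."""
    slope = (samples[-1] - samples[0]) / (len(samples) - 1)
    return samples[-1] + slope * horizon

weekly_utilization = [0.52, 0.55, 0.58, 0.61]  # fraction of the pool in use
# Buy hardware only if we project crossing 80% within four more weeks.
needs_hardware = projected_utilization(weekly_utilization, horizon=4) > 0.80
print(needs_hardware)  # projection is ~0.73, so no purchase justified yet
```

Real capacity planners would use something less naive than a straight line, but the shift is the point: purchases triggered by observed trends, not by up-front guesses.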

CloudCamp SV demonstrated to me that the intellectual investment in cloud computing far surpasses mere marketing, and includes real technologies, architectures and business models that will keep us on our toes for the next few years.

Let the Cloud Computing OS wars begin!

September 15, 2008

Today is a big day in the cloud computing world. VMWorld is turning out to be a core cloud industry conference, where many of the biggest announcements of the year are taking place. Take, for instance, the announcement that VMWare has created the vCloud initiative, an interesting-looking program that aims to build a partner community around cloud computing with VMWare. (Thanks to the increasingly authoritative cloud news leader, On-Demand Enterprise, for this link and most others in this post.) This is huge, in that it signals a commitment by VMWare to standardize cloud computing on VI3, and provide an ecosystem for anyone looking to build a public, private or hybrid cloud.

The biggest news, however, is the bevy of press releases signaling that three of the bigger names in virtualization are each delivering a “cloud OS” platform using their technology at the core. Here are the three announcements:

  • VMWare is announcing a comprehensive roadmap for a Virtual Datacenter Operating System (VDC-OS), consisting of technologies to allow enterprise data centers to virtualize and pool storage, network and servers to create a platform “where applications are automatically guaranteed the right quality of service at the lowest TCO by harnessing internal and external computing capacity.”

  • Citrix announces C3, “its strategy for cloud computing”, which appears to be a collection of products aimed at cloud providers and enterprises wishing to build their own clouds. Specific focus is on the virtualization platform, the deployment and management systems, orchestration, and–interestingly enough–wide area network (WAN) optimization. In the end, this looks very “Cloud OS”-like to me.

  • Virtual Iron and vmSight announce a partnership in which they plan to deliver “cloud infrastructure” to managed hosting providers and cloud providers. Included in this vision are Virtual Iron’s virtualization platform, virtualization management tools, and vmSight’s “end user experience assurance solution” technology to allow for “operating system independence, high-availability, resource optimization and power conservation, along with the ability to monitor and manage application performance and end user experience.” Again, sounds vaguely Cloud OS to me.

Three established vendors, three similar approaches to solving some real issues in the cloud, and three attacks on any entrenched interests in this space. All three focus on providing comprehensive management and infrastructure tools, including automated scaling and failover; and consistent execution to allow for image portability. The VMWare and Citrix announcements go further, however, in announcing technologies to support “cloudbursting” in which overflow processing needs in the data center are met by cloud providers on demand. VMWare specifically calls out OVF as the standard that enables this in their release; OVF is not mentioned by Citrix, but they have done significant work in this space as well.

Overall, VMWare has made the most comprehensive announcement, and has a lot of existing products to back up its feature list. However, much of what needs to be done to tightly integrate these products appears yet to be done. I base this on the fact that they highlight the need for a “comprehensive roadmap”–I could be wrong about this. They have also introduced a virtual distributed switch, which is a key component for migration between and within clouds. Citrix doesn’t mention such a thing, but of course the rumor is that Cisco will quite likely provide that. Whether such a switch will enable migration across networks, as VMWare’s does (er, will?), is yet to be seen, however (see VMWare’s VDC-OS press release). Citrix does, however, have a decent stable of existing applications to support its current vision.

By the way, Sun is working feverishly on their own Cloud OS. No sign of Microsoft, yet…

The long and the short of it is that we have entered into a new era, in which data centers will no longer simply be collections of servers, but will actually be computing units in and of themselves–often made up of similar computing units (e.g. containers) in a sort of fractal arrangement. Virtualization is key to make this happen (though server virtualization itself is not technically absolutely necessary). So are powerful management tools, policy and workflow automation, data and compute load portability, and utility-type monitoring and metering systems.

I worry now about my alma mater, Cassatt, which has chosen to go it largely alone until today. It’s a very mature, very applicable technology that would form the basis of a hell of a cloud OS management platform. Here’s hoping there are some big announcements waiting in the wings, as the war begins to rage around them.

Update: No sooner do I express this concern, than Ken posts an excellent analysis of the VMWare announcement with Cassatt in mind. I think he misses the boat on the importance of OVF, but he is right that Cassatt has been doing this a lot longer than VMWare has.

Cloud Computing and the Constitution

September 8, 2008

A few weeks ago, Mark Rasch of SecurityFocus wrote an article for The Register in which he described in detail the deterioration of legal protections that individuals and enterprises have come to expect from online services that house their data. I’ll let you read the article to get the whole story of Stephen Warshak vs. United States of America, but suffice to say the case opened Rasch’s eyes (and mine) to a series of laws and court decisions that I believe seriously weaken the case for storing your data in the cloud in the United States:

  • The Stored Communications Act, which was used to allow the FBI to access Warshak’s email communications without a warrant, his consent, or any form of notification.

  • The appeals court decisions in the case that argue:

    1. Even if the Stored Communications Act is unconstitutional, Warshak cannot block introduction of the evidence, as “the cops reasonably relied on it.”
    2. Regardless of that outcome, the court could not determine if “emails potentially seized by the government without a warrant would be subject to any expectation of privacy”
  • The Supreme Court decision in Smith v. Maryland, in which the court argued that people generally gave up an expectation of privacy with regards to their phone records simply through the act of dialing their phone–which potentially translates to removing privacy expectation on any data sent to and accessible by a third party.

Rasch notes that in cloud computing, because most terms of service and license agreements are written to give the providers some right of access in various circumstances, all data stored at a provider is subject to the same legal treatment.

This is a serious flaw in the constitutional protections against illegal search and seizure, in my opinion, and may be a reason why US data centers will lose out completely on the cloud computing opportunity. Think about it. Why the heck would I commit my sensitive corporate data to the cloud if the government can argue that a) doing so removes my protections against search and seizure, and b) all expectations of privacy are further removed should my terms of service allow anyone other than myself or my organization to access the data? Especially when I can maintain both privileges simply by storing and processing my data on my own premises?

Couple this with the fact that the Patriot Act is keeping many foreign organizations from even considering US-based cloud storage or processing, and you see how it becomes nearly impossible to guarantee to the world market the same security for data outside the firewall as can be guaranteed inside.

It is my belief that this is the number one issue that darkens the otherwise bright future of cloud computing in the United States. Simple technical security of data, communications and facilities is a solvable problem. Portability of data, processing and services across applications, organizations or geographies is also technically solvable. But, if the US government chooses to destroy all sense of constitutional protection of assets in the cloud, there will be no technology that can save US-based clouds for critical security sensitive applications.

It may be too late to do the right thing here; to declare a cloud storage or processing facility the equivalent of a rented office space or an apartment building–leased spaces where all constitutional protection against illegal search and seizure remain in full strength. When I was younger and rented an apartment, I had every right to expect law enforcement wishing to access my personal spaces would be required to obtain a warrant and present it to me as they began their search. The same, in my opinion, should apply to data I store in the cloud. I should rest assured that the data will not be accessed without the same stringent requirements for a search warrant and notification.

Still, there are a few things individuals and companies can do today to thwart attempts to secretly access private data.

  1. Encrypt your data before sending it to your cloud provider, and under no circumstances provide your provider with the keys to that encryption. This means that the worst a provider can be required to do is hand over the encrypted files. You may even be able to argue that your expectations of privacy were maintained, as you handed over no accessible information to the provider, simply ones and zeros.

  2. Require that your provider modify their EULA/ToS to disavow ANY right to directly access your data or associated metadata for any reason. The exception might be file lengths, etc., required to run the hardware and management software, but certainly no core content or metadata that might reveal the relevant details about that content. This would also weaken the government’s case that you gave up privacy expectations when you handed your data to that particular cloud provider.

  3. Store your data and do your processing outside of the United States. It kills me to say that, but you may be forced into that corner.
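To illustrate point 1 only, here is the workflow sketched with a one-time pad built from the standard library; this is a teaching toy, not a recommendation. Real deployments should use a vetted cipher such as AES-GCM from a maintained crypto library, and the key management, keeping the key on your premises and out of your provider's hands, is the hard part either way:

```python
import secrets

def encrypt(plaintext: bytes) -> tuple:
    """One-time pad demo: the key is as long as the message and must
    never be reused. Returns (ciphertext, key)."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def decrypt(ciphertext: bytes, key: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ciphertext, key))

record = b"customer-record: acct 0001"
ciphertext, key = encrypt(record)  # the key never leaves your premises
# Only 'ciphertext' is uploaded; a subpoenaed provider can surrender
# nothing but ones and zeros.
assert decrypt(ciphertext, key) == record
```

The legal argument in point 1 maps directly onto the code: what crosses the provider boundary is `ciphertext` alone, so nothing accessible was ever handed over.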

If there are others who have looked at this issue and see other approaches (both political and technical) to solving this (IMHO) crisis, I’d love to hear about them. I have to admit I’m a little down on the cloud right now (at least US-based cloud services) because of the legal and constitutional issues that have yet to be worked out in a cloud consumer’s favor.

Oh, and this issue isn’t even close to being on the radar screen of either of the major presidential candidates at this point. I’m beginning to consider what it would take to get it into their faces. Anyone have Lawrence Lessig’s number handy?

Update: The Cloud Computing Bill of Rights

Thanks to all who provided input on the first revision of the Cloud Computing Bill of Rights. The feedback has been incredible, including several eye-opening references and some basic concepts that were missed the first time through. An updated “CCBOR” is below, but I first want to directly outline the changes, and credit those that provided input.

  1. Abhishek Kumar points out that government interference in data privacy and security rights needs to be explicitly acknowledged. I hear him loud and clear, though I think the customer can expect only that laws will remain within the constitutional (or doctrinal) bounds of their particular government, and that government retains the right to create law as it deems necessary within those parameters.

    What must also be acknowledged, however, is that customers have the right to know exactly what laws are in force for the cloud systems they choose to use. Does this mean that vendors should hire civil rights lawyers, or that the customer is on their own to figure that out? I honestly don’t know.

  2. Peter Laird’s “The Good, Bad, and the Ugly of SaaS Terms of Service, Licenses, and Contracts” is a must-read when it comes to data rights. It finds for enterprises what NPR observed the other night for individuals: that you have very few data privacy rights right now, that your provider probably has explicit provisions protecting them and exposing you or your organization, and that the cloud exposes risks that enterprises avoid by owning their own clouds.

    This reinforces the notion that we must understand that privacy is not guaranteed in the cloud, no matter what your provider says. As Laird puts it:

    “…[A] customer should have an explicit and absolute right to data ownership regardless of how a contract is terminated.”

  3. Ian Osbourne asks “should there be a right to know where the data will be stored, and potentially a service level requirement to limit host countries?” I say absolutely! It will be impossible for customers to obey laws globally unless data is maintained in known jurisdictions. This was the catalyst for the “Follow the Law Computing” post. Good catch!

  4. John Marsh of GeekPAC links to his own emerging attempt at a Bill of Rights. In it, he points out a critical concept that I missed:

    “[Vendors] may not terminate [customer] account[s] for political statements, inappropriate language, statements of sexual nature, religious commentary, or statements critical of [the vendor’s] service, with exceptions for specific laws, eg. hate speech, where they apply.”

    Bravo, and noted.

  5. Unfortunately, the federal courts have handed down a series of rulings that challenge the ability of global citizens and businesses to do business securely and privately in the cloud. This Bill of Rights is already under grave attack.

Below is the complete text of the second revision of the Cloud Computing Bill of Rights. Let’s call the first “CCBOR 0.1” and this one “CCBOR 0.2”. I’ll update the original post to reflect the versioning.

One last note. My intention in presenting this post was not to become the authority on cloud computing consumer rights. It is, rather, a cornerstone of my Cloud Computing Architecture discussion, and I need to move on to that discussion’s next point. I’m working on setting up a wiki for this “document”. Is there anyone out there who would like to host it?

The Cloud Computing Bill of Rights (0.2)

In the course of technical history, there exist few critical innovations that forever change the way technical economies operate; forever changing the expectations that customers and vendors have of each other, and the architectures on which both rely for commerce. We, the parties entering into a new era driven by one such innovation–that of network based services, platforms and applications, known at the writing of this document as “cloud computing”–do hereby avow the following (mostly) inalienable rights:

  • Article I: Customers Own Their Data

    1. No vendor shall, in the course of its relationship with any customer, claim ownership of any data uploaded, created, generated, modified, hosted or in any other way associated with the customer’s intellectual property, engineering effort or media creativity. This also includes account configuration data, customer generated tags and categories, usage and traffic metrics, and any other form of analytics or metadata collection.

      Customer data is understood to include all data directly maintained by the customer, as well as that of the customer’s own customers. It is also understood to include all source code and data related to configuring and operating software directly developed by the customer, except for data expressly owned by the underlying infrastructure or platform provided by the vendor.

    2. Vendors shall always provide, at a minimum, API level access to all customer data as described above. This API level access will allow the customer to write software which, when executed against the API, allows access to any customer maintained data, either in bulk or record-by-record as needed. As standards and protocols are defined that allow for bulk or real-time movement of data between cloud vendors, each vendor will endeavor to implement such technologies, and will not attempt to stall such implementation in an attempt to lock in its customers.

    3. Customers own their data, which in turn means they own responsibility for the data’s security and adherence to privacy laws and agreements. As with monitoring and data access APIs, vendors will endeavor to provide customers with the tools and services they need to meet their own customers’ expectations. However, customers are responsible for determining a vendor’s relevancy to specific requirements, and to provide backstops, auditing and even indemnification as required by agreements with their own customers.

      Ultimately, however, governments are responsible for the regulatory environments that define the limits of security and privacy laws. As governments can choose any legal requirement that works within the constraints of their own constitutions or doctrines, customers must be aware of what may or may not happen to their data in the jurisdictions in which data resides, is processed or is referenced. As constitutions vary from country to country, it may not even be required for governments to inform customers what specific actions are taken with or against their data. That laws exist that could put their data in jeopardy, however, is the minimum that governments must convey to the market.

      Customers (and their customers) must leverage the legislative mechanisms of any jurisdiction of concern to change those parameters.

      In order for enough trust to be built into the online cloud economy, however, governments should endeavor to build a legal framework that respects corporate and individual privacy, and overall data security. While national security is important, governments must be careful not to create an atmosphere in which the customers and vendors of the cloud distrust their ability to securely conduct business within the jurisdiction, either directly or indirectly.

    4. Because regulatory effects weigh so heavily on data usage, security and privacy, vendors shall, at a minimum, inform customers specifically where their data is housed. A better option would be to provide mechanisms by which users can choose where their data will be stored. Either way, vendors should also endeavor to work with customers to assure that their systems designs do not conflict with known legal or regulatory obstacles. This is assumed to apply to primary, backup and archived data instances.
  • Article II: Vendors and Customers Jointly Own System Service Levels

    1. Vendors own, and shall do everything in their power to meet, service level targets committed to with any given customer. All required effort and expense necessary to meet those explicit service levels will be spent freely and without additional expense to the customer. While the specific legally binding contracts or business agreements will spell out these requirements, it is noted here that these service level agreements are entered into expressly to protect both the customer’s and vendor’s business interests, and all decisions by the vendor will take both parties equally into account.

      Where no explicit service level agreement exists with a customer, the vendor will endeavor to meet any expressed service level targets provided in marketing literature or the like. At no time will it be acceptable for a vendor to declare a high level of service at a base price, only to later indicate that that level of service is only available at a higher premium price.

      It is perfectly acceptable, however, for a vendor to expressly sell a higher level of service at a higher price, as long as they make that clear at all points where a customer may evaluate or purchase the service.

    2. Ultimately, though, customers own their service level commitments to their own internal or external customers, and the customer understands that it is their responsibility to take into account possible failures by each vendor that they do business with.

      Customers relying on a single vendor to meet their own service level commitments enter into an implicit agreement to tie their own service level commitments to the vendor’s, and to live and die by the vendor’s own infrastructure reliability. Those customers who take their own commitments seriously will seek to build or obtain independent monitoring, failure recovery and disaster recovery systems.

    3. Where customer/vendor system integration is necessary, the vendor must offer options for monitoring the viability of that integration at as many architectural levels as required to allow the customer to meet their own service level commitments. Where standards exist for such monitoring, the vendor will implement those standards in a timely and complete fashion. The vendor should not underestimate the importance of this monitoring to the customer’s own business commitments.

    4. Under no circumstances will vendors terminate customer accounts for political statements, inappropriate language, statements of sexual nature, religious commentary, or statements critical of the vendor’s service, with exceptions for specific laws, e.g. hate speech, where they apply.
  • Article III: Vendors Own Their Interfaces

    1. Vendors are under no obligation to provide “open” or “standard” interfaces, other than as described above for data access and monitoring. APIs for modifying user experience, frameworks for building extensions or even complete applications for the vendor platform, or such technologies can be developed however the vendor sees fit. If a vendor chooses to require developers to write applications in a custom programming language with esoteric data storage algorithms and heavily piracy protected execution systems, so be it.

      If it seems that this completely abdicates the customer’s power in the business relationship, this is not so. As the “cloud” is a marketplace of technology infrastructures, platforms and applications, customers exercise their power by choosing where to spend their hard-earned money. A decision to select a platform vendor that locks you into proprietary Python libraries, for instance, is a choice to support such programming lock-in. On the other hand, insistence on portable virtual machine formats will drive the market towards a true commodity compute capacity model.

      The key reason for giving vendors such power is to maximize innovation. By restricting how technology gets developed or released, the market risks restricting the ways in which technologists can innovate. History shows that eventually the “open” market catches up to most innovations (or bypasses them altogether), and the pace at which this happens is greatly accelerated by open source. Nonetheless, forcing innovation through open source or any other single method runs the risk of weakening capitalist entrepreneurial risk taking.

    2. The customer, however, has the right to use any method legally possible to extend, replicate, leverage or better any given vendor technology. If a vendor provides a proprietary API for virtual machine management in their cloud, customers (aka “the community”, in this case) have every right to experiment with “home grown” implementations of alternative technologies using that same API. This is also true for replicating cloud platform functionality, or even complete applications–though, again, the right only extends to legal means.

      Possibly the best thing a cloud vendor can do to extend their community, and encourage innovation on their platform from community members, is to open their platform as much as possible. By making themselves the “reference platform” for their respective market space, an open vendor creates a petri dish of sorts for cultivating differentiating features and successes on their platform. Protective proprietary vendors are on their own.

These three articles serve as the baseline for customer, vendor and, as necessary, government relationships in the new network-based computing marketplace. No claim is made that this document is complete, or final. These articles may be changed or extended at any time, and additional articles can be declared, whether in response to new technologies or business models, or simply to reflect the business reality of the marketplace. It is also a community document, and others are encouraged to bend and shape it in their own venues.
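As a concrete illustration of Article I’s data-access requirement, the “bulk or record-by-record” export it demands amounts to draining a paginated vendor API. The sketch below is a generic pattern, not any real vendor’s API: the page shape, cursor field, and the in-memory stand-in client are all hypothetical.

```python
# Sketch of Article I.2: record-by-record export through a vendor API.
# The endpoint shape and cursor convention are hypothetical; real
# vendors will differ, but the drain-until-done loop is the same.

def export_all_records(fetch_page):
    """Drain a paginated export API. `fetch_page(cursor)` returns a
    (records, next_cursor) pair, with next_cursor=None on the last page."""
    cursor = None
    while True:
        records, cursor = fetch_page(cursor)
        yield from records
        if cursor is None:
            break

# Stand-in for a real HTTP client, so the sketch is self-contained:
# two pages of records keyed by cursor value.
PAGES = {None: ([{"id": 1}, {"id": 2}], "p2"), "p2": ([{"id": 3}], None)}

def fake_fetch_page(cursor):
    return PAGES[cursor]

backup = list(export_all_records(fake_fetch_page))
assert [r["id"] for r in backup] == [1, 2, 3]
```

The point of the article is that a customer should always be able to write this loop against their vendor, so that a complete local copy of their data is never more than one script away.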