Archive

Archive for the ‘cloud lock-in’ Category

Two Announcements to Pay Attention To This Week

November 12, 2008

I know I promised a post on how the network fits into cloud computing, but after a wonderful first part of my week, spent first catching up on reading and then one-on-one with my 4-year-old son, I’m finally digging into what’s happened in the last two days in the cloud-o-sphere. While the network post remains important to me, several announcements caught my eye, and I thought I’d run through two of them quickly and give you a sense of why they matter.

The first announcement came from Replicate Technologies, Rich Miller’s young company, which is focusing initially on virtualization configuration analysis. The Replicate Datacenter Analyzer (RDA) is a powerful analysis and management tool for evaluating the configuration and deployment of virtual machines in an enterprise data center environment. But it goes beyond evaluating the VMs themselves, to evaluating the server, network and storage configuration required to support features like vMotion.

Sound boring, and perhaps not cloud related? Well, if you read Rich’s blog in depth, you may find that he has a very interesting longer-term objective. Building on the success of RDA, Replicate aims to become a core element of a virtualized data center operations platform, eventually including hybrid cloud configurations and the like. While initially focused on individual servers, one excellent part of Rich’s vision is to manage the relationships between VMs in such a tool, so that operations taken on one server take into account its dependencies on other servers. Very cool.
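To make the dependency idea concrete, here is a minimal sketch of what dependency-aware operations could look like. Every name in it is a hypothetical illustration of the concept, not anything taken from the actual RDA product:

    # Hypothetical sketch of dependency-aware VM operations; none of these
    # names come from Replicate's actual product.
    from collections import defaultdict

    class VmDependencyGraph:
        """Records which VMs depend on which others (e.g., app tier -> DB tier)."""

        def __init__(self):
            self._requires = defaultdict(set)

        def add_dependency(self, vm, required_vm):
            self._requires[vm].add(required_vm)

        def safe_to_power_off(self, vm, running_vms):
            """A VM is only safe to power off if no running VM still needs it."""
            blockers = [v for v, reqs in self._requires.items()
                        if vm in reqs and v in running_vms]
            return len(blockers) == 0, blockers

    graph = VmDependencyGraph()
    graph.add_dependency("web-01", "db-01")  # web-01 needs db-01 to function
    ok, blockers = graph.safe_to_power_off("db-01", {"web-01", "db-01"})
    print(ok, blockers)  # False ['web-01']: powering off db-01 would break web-01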

Watch the introductory video here for the fastest impression of what Replicate has. If you manage virtual machines, sit up and take notice.

The other announcement that caught my eye was the new positioning and features introduced by my alma mater, Cassatt Corporation, this week. I’ve often argued that Cassatt is an excellent example of a private cloud infrastructure, and now they are actively promoting themselves as such (although they use the term “internal cloud”).

It’s about freaking time. With a mature, “burned in”, relatively technology-agnostic platform that has perhaps the easiest policy management user experience ever (though not necessarily the prettiest), Cassatt has always been one of my favorite infrastructure plays (though I admit some bias). They support an incredible array of hardware, virtualization and OS platforms, and provide the rare ability to manage not only virtual machines, but also bare-metal systems. You get automated power management, resource optimization, image management, and dynamic provisioning. For the latter, not only is server provisioning automated, but network provisioning is as well: deploying an image on a server triggers Cassatt to reprogram the switch ports the target server is connected to, so that they sit on the correct VLAN for the software about to be booted.
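To make that concrete, here is a rough sketch of the provisioning sequence as described; the classes and calls below are invented stand-ins, not Cassatt’s actual interfaces:

    # Hedged sketch of provision-time network reprogramming; all names here
    # are hypothetical stand-ins, not Cassatt's actual interfaces.
    IMAGE_VLANS = {"web-tier-image": 110, "db-tier-image": 120}  # assumed mapping

    class SwitchController:
        """Stand-in for whatever API actually programs the physical switches."""
        def set_port_vlan(self, switch, port, vlan_id):
            print(f"{switch} port {port} -> VLAN {vlan_id}")

    def provision(server, image, ports, switches):
        vlan = IMAGE_VLANS[image]
        for switch, port in ports:
            switches.set_port_vlan(switch, port, vlan)  # reprogram the network first...
        print(f"booting {image} on {server}")           # ...then boot the image

    provision("server-42", "web-tier-image",
              ports=[("switch-a", 7), ("switch-b", 7)],
              switches=SwitchController())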

The announcement talks a lot about Cassatt’s monitoring capabilities, and a service they provide around application profiling. I haven’t been briefed on these, but given their experience with server power management (a very “profiling focused” activity), I believe they could have some unique value-add there. What I remember from six months ago is that they introduced improved dynamic load allocation capabilities that could use just about any digital metric (technical or business oriented) to set upper and lower performance thresholds for scaling. Thus, you could use CPU utilization, transaction rates, user sessions or even market activity to determine the need for more or fewer servers for an application. Not too many others break away from the easy CPU/memory utilization stats to drive scaling.
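The core of such a policy is simple enough to sketch. The code below only illustrates the threshold idea, not Cassatt’s implementation; the metric fed in could be CPU utilization, transactions per second, or any other number you can measure:

    # Illustrative sketch of threshold-driven scaling on an arbitrary metric;
    # the general idea only, not Cassatt's actual implementation.
    def scale_decision(metric_value, lower, upper, current_servers, min_servers=1):
        """Return the server count implied by one sample of any numeric metric."""
        if metric_value > upper:
            return current_servers + 1   # above the high threshold: add capacity
        if metric_value < lower and current_servers > min_servers:
            return current_servers - 1   # below the low threshold: reclaim capacity
        return current_servers

    # The metric need not be CPU-based; here it is transactions per second.
    print(scale_decision(950, lower=200, upper=800, current_servers=4))  # 5
    print(scale_decision(120, lower=200, upper=800, current_servers=5))  # 4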

Now, having said all those nice things, I have to take Cassatt to task for one thing. Throughout the press release, Cassatt talks about Amazon- and Google-like infrastructure. However, Cassatt is doing nothing to replicate the APIs of either Amazon (which would be logical) or Google (which would make no sense at all). In other words, as announced, Cassatt is building on its own proprietary protocols and interfaces, with no ties to any external clouds or alternative cloud platforms. This is not a very “commodity cloud computing” friendly approach, and obviously I would like to see that changed. And, in truth, none of their direct competitors are doing so either (with the possible exception of the academic research project, EUCALYPTUS).

The short of it is that if you are looking at building a private cloud, don’t overlook Cassatt.

There was another announcement from Hyperic that I want to comment on, but I’m due to chat with a Hyperic executive soon, so I’ll reserve that post for later. The fall of 2008 remains a heady time for cloud computing, so expect many more of these types of posts in the coming weeks.

Amazon Enhances "The Proto-Cloud"

October 23, 2008

Big news today, as you’ve probably already seen. Amazon has announced a series of steps to greatly enhance the “production” nature of its already leading-edge cloud computing services, including (quoted directly from Jeff Barr’s post on the AWS blog):

  • Linux on Amazon EC2 is now in full production. The beta label is gone.
  • There’s now an SLA (Service Level Agreement) for EC2.
  • Microsoft Windows is now available in beta form on EC2.
  • Microsoft SQL Server is now available in beta form on EC2.
  • We plan to release an interactive AWS management console.
  • We plan to release new load balancing, automatic scaling, and cloud monitoring services.

There is already some great coverage of the announcement in the blog-o-sphere, so I won’t repeat the basics here. Suffice it to say:

  • Removing the beta label removes a barrier to S3/EC2 adoption for the most conservative of organizations.
  • The SLA is interestingly structured to allow for pockets of outages while still promoting global uptime. Make no mistake, though: some automation is required to make sure your systems find the working Amazon infrastructure when specific Availability Zones fail (see the sketch after this list).
  • Oh, wait, they took care of that as well…along with automatic scaling and load balancing.
  • Microsoft is quickly becoming a first-class player in AWS, which removes yet another barrier for M$FT-happy organizations.
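As a sketch of what that automation has to do, consider the following. The ec2 client object and its method names are hypothetical stand-ins for whichever AWS library you use, though DescribeAvailabilityZones and RunInstances are real operations in the EC2 API:

    # Sketch of zone-failover logic; the `ec2` client and its method names
    # are hypothetical, though the underlying DescribeAvailabilityZones and
    # RunInstances operations are real parts of the EC2 API.
    def launch_with_failover(ec2, image_id, preferred_zone):
        zones = ec2.describe_availability_zones()   # e.g. [{"name": ..., "state": ...}]
        healthy = [z["name"] for z in zones if z["state"] == "available"]
        if not healthy:
            raise RuntimeError("no healthy Availability Zones reported")
        # Use the preferred zone if it is healthy; otherwise fall back to another.
        zone = preferred_zone if preferred_zone in healthy else healthy[0]
        return ec2.run_instances(image_id=image_id, placement=zone)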

Instead, let me focus in this post on how all of this enhances Amazon’s status as the “reference platform” for infrastructure as a service (IaaS). In another post, I want to express my concern that Amazon runs the danger of becoming the “Wal-Mart” of cloud computing.

First, why is it that Amazon is leading the way so aggressively in terms of feature sets and service offerings for cloud computing? Why does every other cloud provider seem to be catching up to the services being offered by Amazon at any given time?

The answer to both questions is that Amazon has become the default standard for IaaS feature definition, despite having no user interface of its own (besides the command line and REST) and using “special” Linux images (the core Amazon Machine Images) that don’t provide root access, etc. The reason for its success in setting the standard here is simple: from the beginning, Amazon has focused on prioritizing feature delivery based on barriers to adoption of AWS, rather than on building the very best version of any given feature.

Here’s how I see it:

  • In the beginning, there was storage and network access. Enter S3.
  • Then there were virtual servers to do computational tasks. Enter EC2, but with only one server size.
  • Then there were significant complaints that the server size wasn’t big enough to handle real-world tasks. Enter additional server types (e.g. “Large”) and associated pricing.
  • Then there was the need for “queryable” data storage. Enter SimpleDB.
  • Somewhere in the preceding time frame, the need for messaging services was identified as a barrier. Enter Amazon Simple Queue Service.
  • Now people were beginning to do serious tasks with EC2/S3/etc., so the issues of geographic placement of data and workloads became more of a concern. (This placement was both for geographic fail over, and to address regulatory concerns.) Enter Availability Zones.
  • Soon after that, delivering content and data between the zones became a serious concern (especially with all of the web start-ups leveraging EC2/S3/etc.). Enter the announced AWS Content Delivery Service.
  • Throw in various partnership announcements along the way, including support for MySQL and Oracle.

By this point, hundreds of companies had “production” applications or jobs running on Amazon infrastructure, and it became time to decide how serious this was. In my not-so-humble opinion, the floundering economy, its effects on the Amazon retail business, and the predictions that cloud computing could benefit from a weakened economy all fed into the decision that it was time to remove the training wheels and leave “beta” status for good. Add an official SLA, remove the “beta” label, and “BAM!“, you suddenly have a new “production” business to offset the retail side of the house.

Given that everyone else was playing catchup to these features as they came out (mostly because competitors didn’t realize what they needed to do next, as they didn’t have the customer base to draw from), it is not surprising that Amazon now looks like they are miles ahead of any competitor when it comes to number of customers and (for cloud computing services) probably revenue.

How do you keep the competitors playing catchup? Add more features. How do you select which features to address next? Check with the customer base to see what their biggest concerns are. This time, the low hanging fruit was the management interface, monitoring, and automation. Oh, and that little Windows platform-thingy.

Now, I find it curious that they’ve pre-announced the monitoring and management stuff today. Amazon isn’t really in the habit of announcing a feature before it goes private-beta. However, I think there was some concern that they were becoming the “command-line lover’s cloud”, and that they had to show some interest in competing with the likes of VirtualCenter in the mind’s eye of system administrators. So, to undercut some perceived competitive advantages of folks like GoGrid and Slicehost, they are telling their prospects and customers, “just give us a second here and we will do you right”.

I think the AWS team has been brilliant, both in terms of marketing and in terms of technology planning and development. They remain the dominant team, in my opinion, though there are certainly plenty of viable alternatives out there that you should not be shy about using, both in conjunction with and in place of Amazon. Jeff Barr, Werner Vogels and others have proven that a business model that so many other IT organizations failed at miserably can be done extremely well. I just hope they don’t get too far ahead of themselves…as I’ll discuss separately.

Cloud Summit Executive: Making "Pay-As-You-Go" Customers Happy

October 15, 2008

I got to spend a few hours yesterday at the Cloud Summit Executive conference at the Computer History Museum in Mountain View, California. The Summit was unique, at least in the Bay Area, for its focus on the business side of cloud computing. The networking was quite good, with a variety of CxOs digging into what the opportunities and challenges are for both customers and providers. The feedback I got from those who attended the entire day was that the summit opened eyes and expanded minds.

Ken Oestreich has great coverage of the full summit.

I attended the afternoon keynotes, which started with a presentation from SAP CTO Vishal Sikka that seemed at first like the usual fluff about how global enterprise markets will be addressed by SaaS (read “SAP is the answer”). However, Vishal managed to weave in an important message about how the cloud will initially make some things worse, namely integration and data integrity. Elasticity, he noted, is the key selling point of the cloud right now, and it will be required to meet the needs of a heterogeneous world. Software vendors, pay attention to that.

For me, the highlight of the conference was a panel led by David Berlind of TechWeb, consisting of Art Wittmann of Information Week, Carolyn Lawson, CIO of the California Public Utilities Commission, and Anthony Hill, CIO of Golden Gate University. Ken covers the basic discussion quite well, but I took something away from the comments that he missed: both CIOs seemed to agree that the contractual agreement between cloud customer and cloud provider should be different from those normally seen in the managed hosting world. Instead of being service level agreement (SLA) driven, cloud agreements should base termination rights strictly on customer satisfaction.

This was eye-opening for me, as I had always focused on service level automation as the way to manage uptime and performance in the cloud. I just assumed that the business relationship between customer and provider would include distinct service level agreements. However, Hill was adamant that his best SaaS relationships were those that gave him both subscription (or pay-as-you-use) pricing and the right to terminate the agreement for any reason with 30 days’ notice.

Why would any vendor agree to this? As Hill points out, it’s because it gives the customer a feeling of control without removing any of the real barriers to termination that exist today, the most important of which is the cost of migrating off of the provider’s service. Carolyn generalized the benefits of concepts like this beautifully when she said something to the effect of:

“The cloud vendor community has to understand what [using an off premises cloud service] looks like to me — it feels like a free fall. I can’t touch things like I can in my own data center (e.g. AC, racks, power monitors, etc.), I can’t yell at people who report to me. Give me a sense of control; give me options if something goes wrong.”

In other words, base service success on customer satisfaction, and provide options if something goes wrong.

What a beautifully concise way to put a very heartfelt need of most cloud consumers: control over their company’s IT destiny. By providing different ways that a customer can handle unexpected situations, a cloud provider is signaling that it honors who the ultimate boss is in a cloud computing transaction: the customer.

Hill loved the 30-day termination notice clause he has with one of his vendors, and I can see why. Not because he expects to use it, but because it lets both him and the vendor know that Golden Gate University has the decision-making power, and the cloud vendor serves at the pleasure of Golden Gate University, not the other way around.

Let the Cloud Computing OS wars begin!

September 15, 2008

Today is a big day in the cloud computing world. VMworld is turning out to be a core cloud industry conference, where many of the biggest announcements of the year are taking place. Take, for instance, the announcement that VMware has created the vCloud initiative, an interesting-looking program that aims to build a partner community around cloud computing with VMware. (Thanks to the increasingly essential cloud news leader, On-Demand Enterprise, for this link and most others in this post.) This is huge, in that it signals a commitment by VMware to standardize cloud computing on VI3, and to provide an ecosystem for anyone looking to build a public, private or hybrid cloud.

The biggest news, however, is the bevy of press releases signaling that three of the bigger names in virtualization are each delivering a “cloud OS” platform using their technology at the core. Here are the three announcements:

  • VMware is announcing a comprehensive roadmap for a Virtual Datacenter Operating System (VDC-OS), consisting of technologies to allow enterprise data centers to virtualize and pool storage, network and servers to create a platform “where applications are automatically guaranteed the right quality of service at the lowest TCO by harnessing internal and external computing capacity.”

  • Citrix announces C3, “its strategy for cloud computing”, which appears to be a collection of products aimed at cloud providers and enterprises wishing to build their own clouds. Specific focus is on the virtualization platform, the deployment and management systems, orchestration, and, interestingly enough, wide area network (WAN) optimization. In the end, this looks very “Cloud OS”-like to me.

  • Virtual Iron and vmSight announce a partnership in which they plan to deliver “cloud infrastructure” to managed hosting providers and cloud providers. Included in this vision are Virtual Iron’s virtualization platform, virtualization management tools, and vmSight’s “end user experience assurance solution” technology to allow for “operating system independence, high-availability, resource optimization and power conservation, along with the ability to monitor and manage application performance and end user experience.” Again, sounds vaguely Cloud OS to me.

Three established vendors, three similar approaches to solving some real issues in the cloud, and three attacks on any entrenched interests in this space. All three focus on providing comprehensive management and infrastructure tools, including automated scaling and failover, and consistent execution environments to allow for image portability. The VMware and Citrix announcements go further, however, in announcing technologies to support “cloudbursting”, in which overflow processing needs in the data center are met by cloud providers on demand. VMware specifically calls out OVF as the standard that enables this in its release; OVF is not mentioned by Citrix, but they have done significant work in this space as well.
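For a sense of why OVF helps here: an OVF package is plain XML, so either side of a hybrid cloud can inspect what it is being asked to run before running it. The short sketch below simply lists the virtual systems declared in a package; the namespace URI is the OVF envelope schema’s, and the file name is invented:

    # Sketch: list the virtual systems declared in an OVF envelope. The
    # namespace URI is from the OVF envelope schema; the file name is made up.
    import xml.etree.ElementTree as ET

    OVF = "{http://schemas.dmtf.org/ovf/envelope/1}"

    def list_virtual_systems(ovf_path):
        root = ET.parse(ovf_path).getroot()
        return [vs.get(OVF + "id") for vs in root.findall(OVF + "VirtualSystem")]

    # print(list_virtual_systems("appliance.ovf"))  # e.g. ['web-vm', 'db-vm']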

Overall, VMware has made the most comprehensive announcement, and has a lot of existing products to back up its feature list. However, much of the work needed to tightly integrate these products appears yet to be done. I base this on the fact that they highlight the need for a “comprehensive roadmap”, though I could be wrong about this. They have also introduced a virtual distributed switch, which is a key component for migration between and within clouds. Citrix doesn’t mention such a thing, but of course the rumor is that Cisco will quite likely provide it. Whether such a switch will enable migration across networks, as VMware’s does (er, will?), remains to be seen, however (see VMware’s VDC-OS press release). Citrix does, however, have a decent stable of existing applications to support its current vision.

By the way, Sun is working feverishly on their own Cloud OS. No sign of Microsoft, yet…

The long and the short of it is that we have entered a new era, in which data centers will no longer simply be collections of servers, but will actually be computing units in and of themselves, often made up of similar computing units (e.g. containers) in a sort of fractal arrangement. Virtualization is key to making this happen (though server virtualization itself is not strictly necessary). So are powerful management tools, policy and workflow automation, data and compute load portability, and utility-type monitoring and metering systems.

I worry now about my alma mater, Cassatt, which has chosen to go it largely alone until today. Its technology is very mature and very applicable, and would form the basis of a hell of a cloud OS management platform. Here’s hoping there are some big announcements waiting in the wings, as the war begins to rage around them.

Update: No sooner do I express this concern than Ken posts an excellent analysis of the VMware announcement with Cassatt in mind. I think he misses the boat on the importance of OVF, but he is right that Cassatt has been doing this a lot longer than VMware has.

Update: The Cloud Computing Bill of Rights

Thanks to all who provided input on the first revision of the Cloud Computing Bill of Rights. The feedback has been incredible, including several eye-opening references, and some basic concepts that were missed the first time through. An updated “CCBOR” is below, but I first want to directly outline the changes, and credit those who provided input.

  1. Abhishek Kumar points out that government interference in data privacy and security rights needs to be explicitly acknowledged. I hear him loud and clear, though I think the customer can expect only that laws will remain within the constitutional (or doctrinal) bounds of their particular government, and that government retains the right to create law as it deems necessary within those parameters.

    What must also be acknowledged, however, is that customers have the right to know exactly what laws are in force for the cloud systems they choose to use. Does this mean that vendors should hire civil rights lawyers, or that the customer is on their own to figure that out? I honestly don’t know.

  2. Peter Laird’s “The Good, Bad, and the Ugly of SaaS Terms of Service, Licenses, and Contracts” is a must-read when it comes to data rights. It finds for enterprises what NPR observed the other night for individuals: that you have very few data privacy rights right now, that your provider probably has explicit provisions protecting itself while exposing you or your organization, and that the cloud exposes risks that enterprises avoid by owning their own clouds.

    This reinforces the notion that we must understand that privacy is not guaranteed in the cloud, no matter what your provider says. As Laird puts it:

    “…[A] customer should have an explicit and absolute right to data ownership regardless of how a contract is terminated.”

  3. Ian Osbourne asks “should there be a right to know where the data will be stored, and potentially a service level requirement to limit host countries?” I say absolutely! It will be impossible for customers to obey laws globally unless data is maintained in known jurisdictions. This was the catalyst for the “Follow the Law Computing” post. Good catch!

  4. John Marsh of GeekPAC links to his own emerging attempt at a Bill of Rights. In it, he points out a critical concept that I missed:

    “[Vendors] may not terminate [customer] account[s] for political statements, inappropriate language, statements of sexual nature, religious commentary, or statements critical of [the vendor’s] service, with exceptions for specific laws, eg. hate speech, where they apply.”

    Bravo, and noted.

  5. Unfortunately, the federal courts have handed down a series of rulings that challenge the ability of global citizens and businesses to do business securely and privately in the cloud. This Bill of Rights is already under grave attack.

Below is the complete text of the second revision of the Cloud Computing Bill of Rights. Let’s call the first “CCBOR 0.1” and this one “CCBOR 0.2”. I’ll update the original post to reflect the versioning.

One last note. My intention in presenting this post was not to become the authority on cloud computing consumer rights. It is, rather, the cornerstone of my Cloud Computing Architecture discussion, from which I now need to move on to the next point. I’m working on setting up a wiki for this “document”. Is there anyone out there who would particularly like to host it?

The Cloud Computing Bill of Rights (0.2)

In the course of technical history, there exist few critical innovations that forever change the way technical economies operate; forever changing the expectations that customers and vendors have of each other, and the architectures on which both rely for commerce. We, the parties entering into a new era driven by one such innovation–that of network based services, platforms and applications, known at the writing of this document as “cloud computing”–do hereby avow the following (mostly) inalienable rights:

  • Article I: Customers Own Their Data

    1. No vendor shall, in the course of its relationship with any customer, claim ownership of any data uploaded, created, generated, modified, hosted or in any other way associated with the customer’s intellectual property, engineering effort or media creativity. This also includes account configuration data, customer generated tags and categories, usage and traffic metrics, and any other form of analytics or metadata collection.

      Customer data is understood to include all data directly maintained by the customer, as well as that of the customer’s own customers. It is also understood to include all source code and data related to configuring and operating software directly developed by the customer, except for data expressly owned by the underlying infrastructure or platform provided by the vendor.

    2. Vendors shall always provide, at a minimum, API level access to all customer data as described above. This API level access will allow the customer to write software which, when executed against the API, allows access to any customer maintained data, either in bulk or record-by-record as needed. As standards and protocols are defined that allow for bulk or real-time movement of data between cloud vendors, each vendor will endeavor to implement such technologies, and will not attempt to stall such implementation in an attempt to lock in its customers. (For one illustration of the kind of customer-written export code this right enables, see the sketch following these articles.)

    3. Customers own their data, which in turn means they own responsibility for the data’s security and adherence to privacy laws and agreements. As with monitoring and data access APIs, vendors will endeavor to provide customers with the tools and services they need to meet their own customers’ expectations. However, customers are responsible for determining a vendor’s relevancy to specific requirements, and to provide backstops, auditing and even indemnification as required by agreements with their own customers.

      Ultimately, however, governments are responsible for the regulatory environments that define the limits of security and privacy laws. As governments can choose any legal requirement that works within the constraints of their own constitutions or doctrines, customers must be aware of what may or may not happen to their data in the jurisdictions in which data resides, is processed or is referenced. As constitutions vary from country to country, it may not even be required for governments to inform customers what specific actions are taken with or against their data. That laws exist that could put their data in jeopardy, however, is the minimum that governments must convey to the market.

      Customers (and their customers) must leverage the legislative mechanisms of any jurisdiction of concern to change those parameters.

      In order for enough trust to be built into the online cloud economy, however, governments should endeavor to build a legal framework that respects corporate and individual privacy, and overall data security. While national security is important, governments must be careful not to create an atmosphere in which the customers and vendors of the cloud distrust their ability to securely conduct business within the jurisdiction, either directly or indirectly.

    4. Because regulatory effects weigh so heavily on data usage, security and privacy, vendors shall, at a minimum, inform customers specifically where their data is housed. A better option would be to provide mechanisms by which users can choose where their data will be stored. Either way, vendors should also endeavor to work with customers to assure that their systems designs do not conflict with known legal or regulatory obstacles. This is assumed to apply to primary, backup and archived data instances.
  • Article II: Vendors and Customers Jointly Own System Service Levels

    1. Vendors own, and shall do everything in their power to meet, service level targets committed to with any given customer. All required effort and expense necessary to meet those explicit service levels will be spent freely and without additional expense to the customer. While the specific legally binding contracts or business agreements will spell out these requirements, it is noted here that these service level agreements are entered into expressly to protect both the customer’s and vendor’s business interests, and all decisions by the vendor will take both parties equally into account.

      Where no explicit service level agreement exists with a customer, the vendor will endeavor to meet any expressed service level targets provided in marketing literature or the like. At no time will it be acceptable for a vendor to declare a high level of service at a base price, only to later indicate that that level of service is only available at a higher premium price.

      It is perfectly acceptable, however, for a vendor to expressly sell a higher level of service at a higher price, as long as they make that clear at all points where a customer may evaluate or purchase the service.

    2. Ultimately, though, customers own their service level commitments to their own internal or external customers, and the customer understands that it is their responsibility to take into account possible failures by each vendor that they do business with.

      Customers relying on a single vendor to meet their own service level commitments enter into an implicit agreement to tie their own service level commitments to the vendor’s, and to live and die by the vendor’s own infrastructure reliability. Those customers who take their own commitments seriously will seek to build or obtain independent monitoring, failure recovery and disaster recovery systems.

    3. Where customer/vendor system integration is necessary, the vendor must offer options for monitoring the viability of that integration at as many architectural levels as required to allow the customer to meet their own service level commitments. Where standards exist for such monitoring, the vendor will implement those standards in a timely and complete fashion. The vendor should not underestimate the importance of this monitoring to the customer’s own business commitments.

    4. Under no circumstances will vendors terminate customer accounts for political statements, inappropriate language, statements of sexual nature, religious commentary, or statements critical of the vendor’s service, with exceptions for specific laws, e.g. hate speech, where they apply.
  • Article III: Vendors Own Their Interfaces

    1. Vendors are under no obligation to provide “open” or “standard” interfaces, other than as described above for data access and monitoring. APIs for modifying user experience, frameworks for building extensions or even complete applications for the vendor platform, or such technologies can be developed however the vendor sees fit. If a vendor chooses to require developers to write applications in a custom programming language with esoteric data storage algorithms and heavily piracy protected execution systems, so be it.

      If it seems that this completely abdicates the customer’s power in the business relationship, this is not so. As the “cloud” is a marketplace of technology infrastructures, platforms and applications, the customer exercises their power by choosing where to spend their hard earned money. A decision to select a platform vendor that locks you into proprietary Python libraries, for instance, is a choice to support such programming lock-in. On the other hand, insistence on portable virtual machine formats will drive the market towards a true commodity compute capacity model.

      The key reason for giving vendors such power is to maximize innovation. By restricting how technology gets developed or released, the market risks restricting the ways in which technologists can innovate. History shows that eventually the “open” market catches up to most innovations (or bypasses them altogether), and the pace at which this happens is greatly accelerated by open source. Nonetheless, forcing innovation through open source or any other single method runs the risk of weakening capitalist entrepreneurial risk taking.

    2. The customer, however, has the right to use any method legally possible to extend, replicate, leverage or better any given vendor technology. If a vendor provides a proprietary API for virtual machine management in their cloud, customers (aka “the community”, in this case) have every right to experiment with “home grown” implementations of alternative technologies using that same API. This is also true for replicating cloud platform functionality, or even complete applications–though, again, the right only extends to legal means.

      Possibly the best thing a cloud vendor can do to extend their community and encourage innovation on their platform from community members is to open their platform as much as possible. By making themselves the “reference platform” for their respective market space, an open vendor creates a petri dish of sorts for cultivating differentiating features and successes on their platform. Protective proprietary vendors are on their own.

These three articles serve as the baseline for customer, vendor and, as necessary, government relationships in the new network-based computing marketplace. No claim is made that this document is complete, or final. These articles may be changed or extended at any time, and additional articles can be declared, whether in response to new technologies or business models, or simply to reflect the business reality of the marketplace. It is also a community document, and others are encouraged to bend and shape it in their own venues.
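To ground Article I, section 2 in something concrete, here is an illustrative sketch of the kind of exporter a customer should be able to write against a vendor’s data API. The endpoints, response shapes and paging scheme are all invented for illustration; no particular vendor’s API is implied:

    # Illustrative only: a customer-written exporter exercising the bulk and
    # record-by-record access that Article I, section 2 calls for. The
    # endpoints and response shapes are invented; no real vendor API is implied.
    import json
    import urllib.request

    BASE = "https://api.example-cloud-vendor.com/v1"   # hypothetical vendor API

    def fetch_json(url, token):
        req = urllib.request.Request(url, headers={"Authorization": "Bearer " + token})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def export_bulk(token, out_path):
        """One call, whole dataset: what a bulk-export endpoint should allow."""
        with open(out_path, "w") as f:
            json.dump(fetch_json(BASE + "/export", token), f)

    def export_records(token):
        """Record-by-record iteration, for data too large to pull at once."""
        page = BASE + "/records?page=1"
        while page:
            body = fetch_json(page, token)
            yield from body["records"]
            page = body.get("next_page")    # a missing next_page ends the loop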

The Cloud Computing Bill of Rights

Update: Title and version number added before Cloud Computing Bill of Rights text below.

Before you architect your application systems for the cloud, you have to set some ground rules on what to expect from the cloud vendors you either directly or indirectly leverage. It is important that you walk into these relationships with certain expectations, in both the short and long term, and both those that protect you and those that protect the vendor.

This post is an attempt to capture many of the core rights that both customers and vendors of the cloud should come to expect, with the goal of setting that baseline for future Cloud Oriented Architecture discussions.

This is but a first pass, presented to the community for feedback, discussion, argument and, if deserved, derision. Your comments below will be greatly appreciated in any case.

The Cloud Computing Bill of Rights (0.1)

In the course of technical history, there exist few critical innovations that forever change the way technical economies operate; forever changing the expectations that customers and vendors have of each other, and the architectures on which both rely for commerce. We, the parties entering into a new era driven by one such innovation–that of network based services, platforms and applications, known at the writing of this document as “cloud computing”–do hereby avow the following (mostly) inalienable rights:

  • Article I: Customers Own Their Data

    1. No vendor shall, in the course of its relationship with any customer, claim ownership of any data uploaded, created, generated, modified, hosted or in any other way associated with the customer’s intellectual property, engineering effort or media creativity. This also includes account configuration data, customer generated tags and categories, usage and traffic metrics, and any other form of analytics or metadata collection.

      Customer data is understood to include all data directly maintained by the customer, as well as that of the customer’s own customers. It is also understood to include all source code and data related to configuring and operating software directly developed by the customer, except for data expressly owned by the underlying infrastructure or platform provided by the vendor.

    2. Vendors shall always provide, at a minimum, API level access to all customer data as described above. This API level access will allow the customer to write software which, when executed against the API, allows access to any customer maintained data, either in bulk or record-by-record as needed. As standards and protocols are defined that allow for bulk or real-time movement of data between cloud vendors, each vendor will endeavor to implement such technologies, and will not attempt to stall such implementation in an attempt to lock in its customers.

  • Article II: Vendors and Customers Jointly Own System Service Levels

    1. Vendors own, and shall do everything in their power to meet, service level targets committed to with any given customer. All required effort and expense necessary to meet those explicit service levels will be spent freely and without additional expense to the customer. While the specific legally binding contracts or business agreements will spell out these requirements, it is noted here that these service level agreements are entered into expressly to protect the customer’s business interests, and all decisions by the vendor will take this into account.

      Where no explicit service level agreement exists with a customer, the vendor will endeavor to meet any expressed service level targets provided in marketing literature or the like. At no time will it be acceptable for a vendor to declare a high level of service at a base price, only to later indicate that that level of service is only available at a higher premium price.

      It is perfectly acceptable, however, for a vendor to expressly sell a higher level of service at a higher price, as long as they make that clear at all points where a customer may evaluate or purchase the service.

    2. Ultimately, though, customers own their service level commitments to their own internal or external customers, and the customer understands that it is their responsibility to take into account possible failures by each vendor that they do business with.

      Customers relying on a single vendor to meet their own service level commitments enter into an implicit agreement to tie their own service level commitments to the vendor’s, and to live and die by the vendor’s own infrastructure reliability. Those customers who take their own commitments seriously will seek to build or obtain independent monitoring, failure recovery and disaster recovery systems.

    3. Where customer/vendor system integration is necessary, the vendor must offer options for monitoring the viability of that integration at as many architectural levels as required to allow the customer to meet their own service level commitments. Where standards exist for such monitoring, the vendor will implement those standards in a timely and complete fashion. The vendor should not underestimate the importance of this monitoring to the customer’s own business commitments.

  • Article III: Vendors Own Their Interfaces

    1. Vendors are under no obligation to provide “open” or “standard” interfaces, other than as described above for data access and monitoring. APIs for modifying user experience, frameworks for building extensions or even complete applications for the vendor platform, or such technologies can be developed however the vendor sees fit. If a vendor chooses to require developers to write applications in a custom programming language with esoteric data storage algorithms and heavily piracy protected execution systems, so be it.

      If it seems that this completely abdicates the customer’s power in the business relationship, this is not so. As the “cloud” is a marketplace of technology infrastructures, platforms and applications, the customer exercises their power by choosing where to spend their hard earned money. A decision to select a platform vendor that locks you into proprietary Python libraries, for instance, is a choice to support such programming lock-in. On the other hand, insistence on portable virtual machine formats will drive the market towards a true commodity compute capacity model.

      The key reason for giving vendors such power is to maximize innovation. By restricting how technology gets developed or released, the market risks restricting the ways in which technologists can innovate. History shows that eventually the “open” market catches up to most innovations (or bypasses them altogether), and the pace at which this happens is greatly accelerated by open source. Nonetheless, forcing innovation through open source or any other single method runs the risk of weakening capitalist entrepreneurial risk taking.

    2. The customer, however, has the right to use any method legally possible to extend, replicate, leverage or better any given vendor technology. If a vendor provides a proprietary API for virtual machine management in their cloud, customers (aka “the community”, in this case) have every right to experiment with “home grown” implementations of alternative technologies using that same API. This is also true for replicating cloud platform functionality, or even complete applications–though, again, the right only extends to legal means.

      Possibly the best thing a cloud vendor can do to extend their community and encourage innovation on their platform from community members is to open their platform as much as possible. By making themselves the “reference platform” for their respective market space, an open vendor creates a petri dish of sorts for cultivating differentiating features and successes on their platform. Protective proprietary vendors are on their own.

These three articles serve as the baseline for customer/vendor relationships in the new network-based computing marketplace. No claim is made that this document is complete, or final. These articles may be changed or extended at any time, and additional articles can be declared, whether in response to new technologies or business models, or simply to reflect the business reality of the marketplace. It is also a community document, and others are encouraged to bend and shape it in their own venues.

Comments, complaints or questions can be directed to the author through the comments section below.

Is Dell desperate, or just defensive?

Ugh…

How else do you react to the news that Dell is most of the way down the road toward trademarking the term “cloud computing”?

The only question I have is “why?” What do they gain from this assault on one of the most explosive marketplaces to find its way into technology since the Internet itself? I see two options (though there are probably more; let me know what you think):

  1. Dell thought at the time of the application that they could create technology and a brand around “cloud computing”, and that they would own the mindshare around the term. They even applied for (and got) the cloudcomputing.com domain. Of course, this aspiration was naive at best, and if this is the case, Dell should now kill the application, build a kick-ass site for Dell’s vision of cloud computing and call it a day.

  2. They were simply trying to protect the cloudcomputing.com domain by blocking others from getting cloudcomputing.net, cloudcomputing.info, etc. If this is the case, the trademark application is too harsh, and they should use other legal means to protect the domain.

Whatever the reason, kill the application, Dell. Spare yourself becoming the SCO of network computing.

Update: I note that Dell has even displayed the trademark on the term “Cloud Computing Solutions” on their web site, as can be seen in the image below:

[Image: screenshot of Dell’s web site showing “Cloud Computing Solutions” with a trademark symbol]
Update: Dell is apparently suggesting that the second reason I stated above (or a variation) is why they filed for the trademark. Kill the application, Dell, or make a public pledge that is stronger than “It is not our intention…”.