Archive for the ‘service level automation’ Category

The enterprise "barrier-to-exit" to cloud computing

December 2, 2008

An interesting discussion ensued on Twitter this weekend between George Reese of Valtira and me. George, who recently published some thought-provoking pieces on O’Reilly Broadcast about cloud security and is writing a book on cloud computing, argued strongly that the benefits gained from moving to the cloud outweigh any additional costs that may ensue. In fact, in one tweet he noted:

IT is a barrier to getting things done for most businesses; the Cloud reduces or eliminates that barrier.

I reacted strongly to that statement. I don’t buy that IT is that bad in all cases (though some certainly is), nor do I buy that simply eliminating a barrier to getting something done makes the thing worthwhile. Besides, the barrier being removed isn’t strictly financial; it is corporate IT policy. I could build a kick-butt home entertainment system for my house for $50,000; that doesn’t mean it’s the right thing to do.

However, as the conversation unfolded, it became clear that George and I were coming at the problem from two different angles. George was talking about the many SMB organizations that really can’t justify the cost of building their own IT infrastructure, but have been faced with three choices: build it anyway, turn to (expensive and often rigid) managed hosting, or put a server in a colo space somewhere (and maintain that server themselves). Not very happy choices.

Enter the cloud. Now these same businesses can simply grab capacity on demand, start and stop billing at their leisure, and get truly world-class power, virtualization and networking infrastructure without having to put an ounce of thought into it. Yes, it costs more than simply running a server would, but once you add in the infrastructure, managed hosting fees and colo leases, the cloud almost always looks like the better deal. At least that’s what George claims his numbers show, and I’m willing to accept that. It makes sense to me.

I, on the other hand, was thinking of medium to large enterprises that already own significant data center infrastructure, with sunk costs in power, cooling and assorted equipment. For this class of business, those investments are already made whether or not workloads move to the cloud, so the relevant comparison is cloud pricing against the marginal cost of running on infrastructure that is already bought and paid for. That comparison often tips the balance, and it becomes much cheaper to use existing infrastructure (though with some automation) to deliver fixed-capacity loads. As I discussed recently, the cloud generally only gets interesting for loads that are not running 24×7.
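
To make that concrete, here is a minimal back-of-the-envelope sketch of the break-even arithmetic. Every number in it is an illustrative assumption of mine, not a figure from George’s analysis or any vendor’s price list:

```python
# Back-of-the-envelope math for the argument above. Every number here is an
# illustrative assumption, not a figure from the Twitter discussion or any
# vendor's rate card.

HOURS_PER_MONTH = 730

# Enterprise with sunk data center costs: power, cooling and space are
# already paid for, so a workload's marginal in-house cost is low and flat.
owned_monthly_cost = 150.0    # $/month, amortized server + marginal overhead

cloud_hourly_rate = 0.40      # $/instance-hour, illustrative on-demand price

def monthly_cloud_cost(utilization: float) -> float:
    """Rent capacity only for the fraction of the month it is actually busy."""
    return cloud_hourly_rate * HOURS_PER_MONTH * utilization

# Below this utilization the cloud wins; above it, sunk-cost hardware wins.
break_even = owned_monthly_cost / (cloud_hourly_rate * HOURS_PER_MONTH)
print(f"break-even utilization: {break_even:.0%}")    # ~51% with these numbers

for util in (0.10, 0.50, 1.00):    # bursty, moderate, and 24x7 loads
    winner = "cloud" if monthly_cloud_cost(util) < owned_monthly_cost else "owned"
    print(f"{util:.0%} utilized: cloud ${monthly_cloud_cost(util):.2f} "
          f"vs owned ${owned_monthly_cost:.2f} -> {winner}")
```

With these made-up numbers, a load that is busy roughly half the time is the tipping point: 24×7 loads stay in-house, and bursty loads go to the cloud.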

(George actually notes a class of applications that sadly are also good candidates, though they shouldn’t necessarily be: applications that IT just can’t or won’t get to on behalf of a business unit. George claims his business makes good money meeting the needs of marketing organizations that have this problem. Just make sure the ROI is really worth it before taking this option, however.)

This existing investment in infrastructure therefore acts almost as a “barrier-to-exit” for these enterprises when they consider moving to the cloud. It seems highly ironic, and perhaps somewhat unique, that the trail to certain parts of the cloud computing market will be blazed not by organizations with multiple data centers and thousands upon thousands of servers, but by the little mom-and-pop shop that used to own a couple of servers in a colo somewhere, finally shut them down, and turned to Amazon. How cool is that?

The good news, as I hinted at earlier, is that there is technology that can be justified financially on its own, through capital equipment and energy savings, and that in turn can “grease the skids” for cloud adoption in the future. Ask the folks at 3tera. They’ll tell you that their cloud infrastructure allows an enterprise to optimize infrastructure usage while enabling workload portability (though not portability of running workloads) between cloud providers running their stuff. VMware introduced its vCloud initiative specifically to make enterprises aware of the work it is doing to allow workload portability across data centers running its stuff. Cisco (my employer) is addressing the problem as well. In fact, there are several great products out there that can give you cloud technology in your enterprise data center, opening the door to cloud adoption now (with things like cloudbursting) and in the future.

If you aren’t considering how to “cloud enable” your entire infrastructure today, you ought to be getting nervous. Your competitors probably are looking closely at these technologies, and when the time is right, their barrier-to-exit will be lower than yours. Then, the true costs of moving an existing data center infrastructure to the cloud will become painfully obvious.

Many thanks to George for the excellent discussion. Twitter is becoming a great venue for cloud discussions.

Cloud Summit Executive: Making "Pay-As-You-Go" Customers Happy

October 15, 2008

I got to spend a few hours yesterday at the Cloud Summit Executive conference at the Computer History Museum in Mountain View, California. The Summit was unique, at least in the Bay Area, for its focus on the business side of cloud computing. The networking was quite good, with a variety of CxOs digging into the opportunities and challenges for both customers and providers. The feedback I got from those who attended the entire day was that the summit opened eyes and expanded minds.

Ken Oestreich has great coverage of the full summit.

I attended the afternoon keynotes, which started with a presentation from SAP CTO Vishal Sikka which seemed at first like the usual fluff about how global enterprise markets will be addressed by SaaS (read “SAP is the answer”). However, Vishal managed to weave in an important message about how the cloud will initially make some things worse, namely integration and data integrity. Elasticity, he noted, is the key selling point of the cloud right now, and it will be required to meet the needs of a heterogeneous world. Software vendors, pay attention to that.

For me, the highlight of the conference was a panel led by David Berlind of TechWeb, consisting of Art Wittmann of InformationWeek, Carolyn Lawson, CIO of the California Public Utilities Commission, and Anthony Hill, CIO of Golden Gate University. Ken covers the basic discussion quite well, but I took something away from the comments that he missed: both CIOs seemed to agree that the contractual agreement between cloud customer and cloud provider should be different from those normally seen in the managed hosting world. Instead of being service level agreement (SLA) driven, cloud agreements should base termination rights strictly on customer satisfaction.

This was eye-opening for me, as I had always focused on service level automation as the way to manage uptime and performance in the cloud. I had simply assumed that the business relationship between customer and provider would include distinct service level agreements. However, Hill was adamant that his best SaaS relationships were those that gave him both subscription (or pay-as-you-use) pricing and the right to terminate the agreement for any reason with 30 days’ notice.

Why would any vendor agree to this? As Hill points out, it’s because it gives the feeling of control to the customer, without giving up any of the real barriers to termination that the customer has today; the most important of which is the cost of migrating off of the provider’s service. Carolyn generalized the benefits of concepts like this beautifully when she said something to the effect of

“The cloud vendor community has to understand what [using an off premises cloud service] looks like to me — it feels like a free fall. I can’t touch things like I can in my own data center (e.g. AC, racks, power monitors, etc.), I can’t yell at people who report to me. Give me a sense of control; give me options if something goes wrong.”

In other words, base service success on customer satisfaction, and provide options if something goes wrong.

What a beautifully concise way to put a very heartfelt need of most cloud consumers: control over their company’s IT destiny. By providing different ways for a customer to handle unexpected situations, a cloud provider signals that it honors who the ultimate boss is in a cloud computing transaction: the customer.

Hill loved the 30-day termination notice clause he has with one of his vendors, and I can see why. Not because he expects to use it, but because it lets both him and the vendor know that Golden Gate University has the decision-making power, and that the cloud vendor serves at the pleasure of Golden Gate University, not the other way around.

VMWare’s Most Important Cloud Research? It Might Not Be Technology

I was kind of aimlessly wandering around my Google Reader feeds the other day when I came across an interview with Carl Eschenbach, VMware’s executive vice president of worldwide field operations, titled “Q&A: VMware’s Eschenbach Outlines Channel Opportunities In The Virtual Cloud”. (Thanks to Rich Miller of Telematique for the link.) I started reading the article expecting it to be all about how to sell vCloud, but throughout the piece it was painfully clear that a hybrid cloud concept will cause some real disruption in the VMware channel.

The core problem is this:

  1. Today, VMware solution providers enjoy tremendous margins selling not only VMware products, but also the associated services (often 5 to 7 times as much revenue in services as in software) and the server, storage and networking hardware required to support a virtualized data center.

  2. However, vCloud introduces the concept of offloading some of that computing to a capacity service provider, in a relationship where the solution provider acts merely as a middleman for the initial transaction.

  3. Ostensibly, the solution provider then gets a one-time fee, but is squeezed out of recurring revenue for the service.

In other words, VMWare’s channel is not necessarily pumped about the advent of cloud computing.

To Eschenbach’s credit, he acknowledges that this could be the case:

We think there’s a potential. And we’re doing some studies right now with some of our larger solution providers, looking at whether there’s a possibility that they not only sell VMware SKUs into the enterprise, but if that enterprise customer wants to take advantage of cloud computing from a service provider that our VARs, our resellers, actually sell the service providers’ SKUs. So, not only are they selling into the enterprise data center, but now if that customer wants to take advantage of additional capacity that exists outside the four walls of the data center, why couldn’t our solution providers, our VIP resellers, resell a SKU that Verizon (NYSE:VZ) or Savvis or SunGard or BT is offering into that customer. So they can have the capability of selling into the enterprise cloud and the service provider cloud on two different SKUs and still maintain the relationship with the customer.

In a follow up question, Eschenbach declares:

[I]t’s not a lot different from a solution provider today selling into an account a VMware license that’s perpetual. Now, if you’re selling a perpetual license and you’re moving away from that and [your customer is] buying capacity on demand from the cloud, every time they need to do that, if they have an arrangement through a VAR or a solution provider to get access to that capacity, and they’re buying the SKU from them, they’re still engaged.

Does anyone else get the feeling that Eschenbach is talking about turning solution providers into cloud capacity brokerages? And that such a solution provider would act as a very inefficient brokerage, choosing the services that provide it with the best margins and locking customers into those providers, instead of the service that gives the customer the most bang for the buck on any given day? Doesn’t this create an even better opportunity for pure, independent cloud brokerages to sell terms and pricing that favor the customer?

I think VMware may have a real issue on its hands, in which maintaining its amazing ecosystem of implementation partners may give way to more direct partnerships with specific cloud brokerages (for capacity) and system integrators (for consultative advice on optimizing between private and commercial capacity). The traditional infrastructure VAR gets left in the dust.

Part of the problem is that traditional IT service needs are often “apples and oranges” compared to cloud computing needs. Serving traditional IT allows for specialization based on region and industry, and the business opportunity is on-site implementation of a particular service or application system. Everyone has to do it that way, so every business that goes digital (and eventually they all do) needs these services in full.

The cloud now dilutes that opportunity. If the hardware doesn’t run on site, there is no opportunity to sell installation services. If the software is purchased as SaaS, there is no opportunity to sell instances of turnkey systems and the services to install and configure that software. If the operations are handled largely by a faceless organization in a cloud capacity provider, there is no opportunity to sell system administration or runbook services for that capacity. If revenue is largely recurring, there is no significant one-time “payday” for selling someone else’s capacity.

So the big money opportunity for service providers in the cloud is strategic, with just a small amount of tactical work to go around.

One possible exception, however, is system management software and hardware. In this case, I believe customers need to consider owning their own service level automation systems, in order to monitor the condition of all software they have running anywhere, whether behind or outside their own firewalls. There is a turnkey opportunity here, and I know many of the cloud infrastructure providers are talking appliances these days for exactly that purpose. Installing and configuring those appliances will take specific expertise that should grow in demand over the next decade.

Unless innovative vendors such as RightScale and CohesiveFT kill that opportunity, too.

I know I’ve seen references by others to this channel problem. (In fact, Eschenbach’s interview also raised red flags for Alessandro Perilli of virtualization.info.) On the other hand, others are optimistic that it creates opportunity. So maybe I’m just being paranoid. However, if I were a solution provider with my wagon hitched to VMware’s star, I’d be thinking really hard about what my company will look like five years from now. And if I were a customer, I’d be looking closely at how I will be acquiring compute capacity in the same time frame.

Cisco’s Nexus 1000v and the Cloud: Is it really a big deal?

September 17, 2008

Yesterday, the big announcements at VMworld 2008 were about cloud OSes. Today, the big news seemed to be Maritz’s keynote (where he apparently laid out an amazing vision of what VMware thinks it can achieve in the coming year) and the long-rumored Cisco virtual switch.

The latter looks to be better than I had hoped for functionally, though perhaps a little more locked in to VMware than I’d like. There is an explanation for that lock-in, however, so it may not be so bad; see below.

I’ve already explained why I love the Nexus concept so much. Today, Cisco and VMware jointly announced the Nexus 1000V virtual machine access switch, a fully VI-compatible software switch that…well, I’ll let Cisco’s data sheet explain it:

“The Cisco Nexus™ 1000V virtual machine access switch is an intelligent software switch implementation for VMware ESX environments. Running inside of the VMware ESX hypervisor, the Cisco Nexus 1000V supports Cisco® VN-Link server virtualization technology, providing

  • Policy-based virtual machine (VM) connectivity
  • Mobile VM security and network policy, and
  • Non-disruptive operational model for your server virtualization and networking teams.

When server virtualization is deployed in the data center, virtual servers typically are not managed the same way as physical servers. Server virtualization is treated as a special deployment, leading to longer deployment time with a greater degree of coordination among server, network, storage, and security administrators. But with the Cisco Nexus 1000V you can have a consistent networking feature set and provisioning process all the way from the VM to the access, aggregation, and core switches. Your virtual servers can use the same network configuration, security policy, tools, and operational models as physical servers. Virtualization administrators can leverage predefined network policy that follows the nomadic VM and focus on virtual machine administration. This comprehensive set of capabilities helps you to deploy server virtualization faster and realize its benefits sooner.”

In other words, the 1000V is a completely equal player in a Cisco fabric, and can fully leverage all of the skill sets and policy management available in Cisco’s other switches. Think “my sys admins can do what they do best, and my network admins can do what they do best”. Furthermore, it supports VN-Link, which allows VMware systems running on Cisco fabric to VMotion without losing any network or security configuration. Read that last sentence again.

(I wrote some time ago about network administrators facing the most change from this whole pooled-resource thing; this feature seals the deal. Those static network maps they used to hang on the wall, showing exactly what system was connected to what switch port with what IP address, are now almost entirely obsolete.)

I love that feature. I will love it even more if it functions in its entirety in the vCloud concept that VMWare is pitching, and all indications are that it will. So, to tell the story here as simply as possible:

  • You create a group of VMs for a distributed application in VConsole
  • You assign network security and policy via Cisco tools, using the same interface as on the physical switches
  • You configure VMWare to allow VMs for the application to get capacity from an external vendor–one of dozens supporting vCloud
  • When an unexpected peak hits, your VM cluster grabs additional capacity as required in the external cloud, without losing network policy and security configurations.

Cloud computing nirvana.
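
For the code-minded, here is a rough sketch of that story. To be clear, none of the classes below correspond to real VMware or Cisco APIs; the port-profile object, capacity pools and placement logic are hypothetical stand-ins meant only to illustrate the idea that policy attaches to the VM and follows it, even when it bursts to an external cloud:

```python
# A rough, hypothetical sketch of the story above. Nothing here is a real
# VMware or Cisco API; it only models the idea that network policy is defined
# once, attaches to the VM, and follows it wherever it runs, even when an
# unexpected peak forces a burst to an external vCloud-style provider.

from dataclasses import dataclass

@dataclass(frozen=True)
class PortProfile:
    """Stand-in for a Nexus 1000V-style profile: the policy, not the port."""
    name: str
    vlan: int
    acls: tuple = ()

@dataclass
class VM:
    name: str
    profile: PortProfile   # policy travels with the VM, not with the host

class CapacityPool:
    def __init__(self, name: str, capacity: int):
        self.name, self.capacity, self.vms = name, capacity, []

    def has_room(self) -> bool:
        return len(self.vms) < self.capacity

    def place(self, vm: VM) -> None:
        # Wherever the VM lands, its VLAN and ACLs land with it.
        self.vms.append(vm)
        print(f"{vm.name} -> {self.name}: vlan {vm.profile.vlan}, "
              f"{len(vm.profile.acls)} ACL(s) preserved")

def place_with_burst(vm: VM, local: CapacityPool, cloud: CapacityPool) -> None:
    """Prefer the local data center; burst to the external cloud when full."""
    (local if local.has_room() else cloud).place(vm)

web = PortProfile("web-tier", vlan=100, acls=("permit tcp any any eq 80",))
local = CapacityPool("local-dc", capacity=2)
cloud = CapacityPool("external-vcloud", capacity=10)

for i in range(4):   # peak load: four VMs, local room for only two
    place_with_burst(VM(f"web-{i}", web), local, cloud)
```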

Now, there are some disappointments, as I hinted above. First, the switch is not stackable, as originally hoped, though the interconnectivity of VN-Link probably overrides that. (Is VN-Link just another way to “stack” switches? Networking is not my strong point.)

Update: In the comments below, Omar Sultan of Cisco notes that the switches are, in fact, “virtually stackable”, meaning they can be distributed across multiple physical systems, creating a single network domain for a cluster of machines. I understand that just enough to be dangerous, so I’ll stop there.

More importantly, I was initially kind of ticked off that Cisco partnered so closely with VMware without being careful to note that it would be releasing similar technologies with Citrix and Red Hat at a minimum. But, as I thought about it, Citrix hitched its wagon to 3TERA, and 3TERA owns every aspect of the logical infrastructure an application runs on. In AppLogic, you have to use 3TERA’s network representation, load balancers, and so on as part of your application infrastructure definition, and 3TERA maps those to real resources as it sees fit. For network connections, it relies on a “Logical Connection Manager (LCM)”:

“The logical connection manager implements a key service that abstracts intercomponent communications. It enables AppLogic to define all interactions between components of an application in terms of point-to-point logical connections between virtual appliances. The interactions are controlled and tunneled across physical networks, allowing AppLogic to enforce interaction protocols, detect security breaches and migrate live TCP connections from one IP network to another transparently.”

(from the AppLogic Grid Operating System Technical Overview: System Services)

Thus, there is no concept of a virtual switch, per se, in AppLogic. A quick look at their site shows no other partners in the virtual networking or load balancing space (though Nirvanix is a virtual storage partner), so perhaps Cisco simply hasn’t been given the opportunity or the hooks to participate in the Xen/3TERA Cloud OS.

(If anyone at 3TERA would like to clarify, I would be extremely grateful. If Cisco should be partnering here, I would be happy to add some pressure to them to do so.)

As for Red Hat, I honestly don’t know anything about their VMM, so I can’t guess at why Cisco didn’t do anything there…although my gut tells me that I won’t be waiting long to hear about a partnership between those two.

This switch makes VMWare VMs equal players in the data center network, and that alone is going to disrupt a lot of traditional IT practices. While I was at Cassatt, I remember a colleague predicting that absolutely everything would run in a VM by the end of this decade. That still seems a little aggressive to me, but a lot less so than it did yesterday.

Elements of a Cloud Oriented Architecture

In my post, The Principles of Cloud Oriented Architectures, I introduced you to the concept of a software system architecture designed with “the cloud” in mind:

“…I offer you a series of posts…describing in depth my research into what it takes to deliver a systems architecture with the following traits:

  1. It partially or entirely incorporates the clouds for at least one layer of the Infrastructure/Platform/Application stack.
  2. Is focused on consumers of cloud technologies, not the requirements of those delivering cloud infrastructures, either public or private (or even dark).
  3. Takes into account a variety of technical, economic and even political factors that affect systems running in the “cloud”.
  4. Is focused at least as much on the operational aspects of the system as on the design and development aspects.

The idea here is not to introduce an entirely new paradigm–that’s the last thing we need given the complexity of the task ahead of us. Nor is it to replace the basic principles of SOA or any other software architecture. Rather, the focus of this series is on how to best prepare for the new set of requirements before us.”

I followed that up with a post (well, two really) that set out to define what our expectations of “the cloud” ought to be. The idea behind the Cloud Computing Bill of Rights was not to lay out a policy platform–though I am flattered that some would like to use it as the basis of one–but rather to set out some guidelines about what cloud computing customers should anticipate in their architectures. In this continuing “COA principles” series, I intend to lay out what can be done to leverage what vendors deliver, and design around what they fail to deliver.

With that basic framework laid out, the next step is to break down which technology elements need to be considered when engineering for the cloud. This post will cover only a list of such elements as I understand them today (feel free to use the comments below to add your own insights); future posts will provide a more thorough analysis of individual elements and/or related groups of elements. The series is really very “stream of consciousness”, so don’t expect too much structure or continuity.

When considering what elements matter in a Cloud Oriented Architecture, we consider first that we are talking about distributed systems. Simply utilizing Salesforce.com to do your Customer Relationship Management doesn’t require an architecture; integrating it with your SAP billing systems does. As your SAP systems most likely don’t run in Salesforce.com data centers, the latter is a distributed systems problem.

Most distributed systems problems have just a few basic elements. For example:

  • Distribution of responsibilities among component parts

  • Dependency management between those component parts

  • Scalability and reliability

    • Of the system as a whole
    • Of each component
  • Data Access and Management

  • Communication and Networking

  • Monitoring and Systems Management

However, because cloud computing involves leveraging services and systems entirely outside of the architect’s control, several additional issues must be considered. Again, for example:

  • How are the responsibilities of a complex distributed system best managed when the services being consumed are relatively fixed in the tasks they can perform?

  • How are the cloud customer’s own SLA commitments best addressed when the ability to monitor and manage components of the system may be below the standards required for the task?

  • How are the economics of the cloud best leveraged?

    • How can a company gain the most work for the least amount of money?
    • How can a company leverage the cloud marketplace for not just cost savings, but also increased availability and system performance?

In an attempt to address the more cloud-specific distributed systems architecture issues, I’ve come up with the following list of elements to be addressed in a typical Cloud Oriented Architecture:

  • Service Fluidity – How does the system best allow for static redeployment and/or “live motion” of component pieces within and across hardware, facility and network boundaries? Specific issues to consider here include:

    • Distributed application architecture, or how is the system designed to manage component dependencies while allowing the system to dynamically find each component as required? (Hint: this problem has been studied thoroughly by such practices as SOA, EDA, etc.)
    • Network resiliency, or how does the system respond to changes in network location, including changes in IP addressing, routing and security?
  • Monitoring – How is the behavior and effectiveness of the system measured and tracked both to meet existing SLAs, as well as to allow developers to improve the overall system in the future? Issues to be considered here include:

    • Load monitoring, or how do you measure system load when system components are managed by multiple vendors with little or no formal agreement on how to share such data with the customer or with each other?
    • Cost monitoring, or how does the customer get an accurate accounting of the costs associated with running the system from their point of view?
  • Management – How does the customer configure and maintain the overall system based on current and ongoing technical and business requirements? Examples of what needs to be considered here includes:

    • Cost, or what adjustments can be made to the system capacity or deployment to provide the required amount of service capacity at the lowest cost possible? This includes ways to manage the efficiency of computation, networking and storage.
    • Scalability, or how does the system itself allow changes to capacity to meet required workloads? These changes can happen:
      • vertically (e.g. get a bigger box for existing components–physically or virtually)
      • horizontally (e.g. add or remove additional instances of one or more components as required)
      • from a network latency perspective (adjust the ways in which the system accesses the network in order to increase overall system performance)
    • Availability, or how does the system react to the failure of any one component, or of any group of components (e.g. when an entire vendor cloud goes offline)? (A minimal failover sketch follows this list.)
  • Compliance – How does the overall system meet organizational, industry and legislative regulatory requirements–again, despite being made up of components from a variety of vendors who may themselves provide computing in a variety of legal jurisdictions?
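
To ground the “Cost” and “Availability” elements above, here is a minimal sketch of cost-aware dispatch with failover across vendor clouds. The provider names, prices and health check are hypothetical stand-ins of mine; a real implementation would probe each vendor’s actual monitoring interfaces:

```python
# A minimal sketch of the "Cost" and "Availability" elements above: dispatch
# each workload to the cheapest provider that currently looks healthy, and
# fail over when an entire vendor cloud goes offline. Provider names, prices
# and the health check are hypothetical stand-ins, not real services.

import random

class Provider:
    def __init__(self, name: str, hourly_cost: float):
        self.name, self.hourly_cost = name, hourly_cost

    def healthy(self) -> bool:
        # Stand-in for the customer-owned, independent monitoring argued for
        # above; a real check would probe the vendor and your own components.
        return random.random() > 0.2

    def run(self, workload: str) -> str:
        return f"{workload} running on {self.name} at ${self.hourly_cost}/hr"

def dispatch(workload: str, providers: list) -> str:
    """Cheapest-healthy-first placement with failover across vendor clouds."""
    for p in sorted(providers, key=lambda p: p.hourly_cost):
        if p.healthy():
            return p.run(workload)
    raise RuntimeError("all providers down; invoke the disaster recovery plan")

clouds = [Provider("vendor-a", 0.40),
          Provider("vendor-b", 0.55),
          Provider("vendor-c", 0.90)]
print(dispatch("nightly-batch", clouds))
```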

Now comes the fun of breaking these down a bit, and talking about specific technologies and practices that can address them. Please, give me your feedback (or write up your criticism on your own blog, but link here so I can find you). Point me towards references to other ways to think about the problem. I look forward to the conversation.

Update: The Cloud Computing Bill of Rights

Thanks to all that provided input on the first revision of the Cloud Computing Bill of Rights. The feedback has been incredible, including several eye opening references, and some basic concepts that were missed the first time through. An updated “CCBOR” is below, but I first want to directly outline the changes, and credit those that provided input.

  1. Abhishek Kumar points out that government interference in data privacy and security rights needs to be explicitly acknowledged. I hear him loud and clear, though I think the customer can expect only that laws will remain within the constitutional (or doctrinal) bounds of their particular government, and that government retains the right to create law as it deems necessary within those parameters.

    What must also be acknowledged, however, is that customers have the right to know exactly what laws are in force for the cloud systems they choose to use. Does this mean that vendors should hire civil rights lawyers, or that the customer is on their own to figure that out? I honestly don’t know.

  2. Peter Laird’s “The Good, Bad, and the Ugly of SaaS Terms of Service, Licenses, and Contracts” is a must-read when it comes to data rights. It finds for enterprises what NPR observed the other night for individuals: that you have very few data privacy rights right now, that your provider probably has explicit provisions protecting itself while exposing you or your organization, and that the cloud introduces risks that enterprises avoid by owning their own clouds.

    This reinforces the notion that we must understand that privacy is not guaranteed in the cloud, no matter what your provider says. As Laird puts it:

    “…[A] customer should have an explicit and absolute right to data ownership regardless of how a contract is terminated.”

  3. Ian Osbourne asks “should there be a right to know where the data will be stored, and potentially a service level requirement to limit host countries?” I say absolutely! It will be impossible for customers to obey laws globally unless data is maintained in known jurisdictions. This was the catalyst for the “Follow the Law Computing” post. Good catch!

  4. John Marsh of GeekPAC links to his own emerging attempt at a Bill of Rights. In it, he points out a critical concept that I missed:

    “[Vendors] may not terminate [customer] account[s] for political statements, inappropriate language, statements of sexual nature, religious commentary, or statements critical of [the vendor’s] service, with exceptions for specific laws, eg. hate speech, where they apply.”

    Bravo, and noted.

  5. Unfortunately, the federal courts have handed down a series of rulings that challenge the ability of global citizens and businesses to do business securely and privately in the cloud. This Bill of Rights is already under grave attack.

Below is the complete text of the second revision of the Cloud Computing Bill of Rights. Let’s call the first “CCBOR 0.1” and this one “CCBOR 0.2”. I’ll update the original post to reflect the versioning.

One last note. My intention in presenting this post was not to become the authority on cloud computing consumer rights. It is, rather, the cornerstone of my Cloud Computing Architecture discussion, and I need to move on to the next point in that series. I’m working on setting up a wiki for this “document”. Is there anyone out there in particular who would like to host it?

The Cloud Computing Bill of Rights (0.2)

In the course of technical history, there exist few critical innovations that forever change the way technical economies operate; forever changing the expectations that customers and vendors have of each other, and the architectures on which both rely for commerce. We, the parties entering into a new era driven by one such innovation–that of network based services, platforms and applications, known at the writing of this document as “cloud computing”–do hereby avow the following (mostly) inalienable rights:

  • Article I: Customers Own Their Data

    1. No vendor shall, in the course of its relationship with any customer, claim ownership of any data uploaded, created, generated, modified, hosted or in any other way associated with the customer’s intellectual property, engineering effort or media creativity. This also includes account configuration data, customer generated tags and categories, usage and traffic metrics, and any other form of analytics or meta data collection.

      Customer data is understood to include all data directly maintained by the customer, as well as that of the customer’s own customers. It is also understood to include all source code and data related to configuring and operating software directly developed by the customer, except for data expressly owned by the underlying infrastructure or platform provided by the vendor.

    2. Vendors shall always provide, at a minimum, API level access to all customer data as described above. This API level access will allow the customer to write software which, when executed against the API, allows access to any customer maintained data, either in bulk or record-by-record as needed. As standards and protocols are defined that allow for bulk or real-time movement of data between cloud vendors, each vendor will endeavor to implement such technologies, and will not attempt to stall such implementation in an attempt to lock in its customers.

    3. Customers own their data, which in turn means they own responsibility for the data’s security and adherence to privacy laws and agreements. As with monitoring and data access APIs, vendors will endeavor to provide customers with the tools and services they need to meet their own customers’ expectations. However, customers are responsible for determining a vendor’s relevancy to specific requirements, and to provide backstops, auditing and even indemnification as required by agreements with their own customers.

      Ultimately, however, governments are responsible for the regulatory environments that define the limits of security and privacy laws. As governments can choose any legal requirement that works within the constraints of their own constitutions or doctrines, customers must be aware of what may or may not happen to their data in the jurisdictions in which that data resides, is processed or is referenced. As constitutions vary from country to country, it may not even be required for governments to inform customers what specific actions are taken with or against their data. That laws exist that could put customer data in jeopardy, however, is the minimum that governments must convey to the market.

      Customers (and their customers) must leverage the legislative mechanisms of any jurisdiction of concern to change those parameters.

      In order for enough trust to be built into the online cloud economy, however, governments should endeavor to build a legal framework that respects corporate and individual privacy, and overall data security. While national security is important, governments must be careful not to create an atmosphere in which the customers and vendors of the cloud distrust their ability to securely conduct business within the jurisdiction, either directly or indirectly.

    4. Because regulatory effects weigh so heavily on data usage, security and privacy, vendors shall, at a minimum, inform customers specifically where their data is housed. A better option would be to provide mechanisms by which users can choose where their data will be stored. Either way, vendors should also endeavor to work with customers to assure that their systems designs do not conflict with known legal or regulatory obstacles. This is assumed to apply to primary, backup and archived data instances.
  • Article II: Vendors and Customers Jointly Own System Service Levels

    1. Vendors own, and shall do everything in their power to meet, service level targets committed to with any given customer. All required effort and expense necessary to meet those explicit service levels will be spent freely and without additional expense to the customer. While the specific legally binding contracts or business agreements will spell out these requirements, it is noted here that these service level agreements are entered into expressly to protect both the customer’s and vendor’s business interests, and all decisions by the vendor will take both parties equally into account.

      Where no explicit service level agreement exists with a customer, the vendor will endeavor to meet any expressed service level targets provided in marketing literature or the like. At no time will it be acceptable for a vendor to declare a high level of service at a base price, only to later indicate that that level of service is only available at a higher premium price.

      It is perfectly acceptable, however, for a vendor to expressly sell a higher level of service at a higher price, as long as they make that clear at all points where a customer may evaluate or purchase the service.

    2. Ultimately, though, customers own their service level commitments to their own internal or external customers, and the customer understands that it is their responsibility to take into account possible failures by each vendor that they do business with.

      Customers relying on a single vendor to meet their own service level commitments enter into an implicit agreement to tie their own service level commitments to the vendor’s, and to live and die by the vendor’s own infrastructure reliability. Those customers who take their own commitments seriously will seek to build or obtain independent monitoring, failure recovery and disaster recovery systems.

    3. Where customer/vendor system integration is necessary, the vendor must offer options for monitoring the viability of that integration at as many architectural levels as required to allow the customer to meet their own service level commitments. Where standards exist for such monitoring, the vendor will implement those standards in a timely and complete fashion. The vendor should not underestimate the importance of this monitoring to the customer’s own business commitments.

    4. Under no circumstances will vendors terminate customer accounts for political statements, inappropriate language, statements of sexual nature, religious commentary, or statements critical of the vendor’s service, with exceptions for specific laws, e.g. hate speech, where they apply.
  • Article III: Vendors Own Their Interfaces

    1. Vendors are under no obligation to provide “open” or “standard” interfaces, other than as described above for data access and monitoring. APIs for modifying user experience, frameworks for building extensions or even complete applications for the vendor platform, or such technologies can be developed however the vendor sees fit. If a vendor chooses to require developers to write applications in a custom programming language with esoteric data storage algorithms and heavily piracy protected execution systems, so be it.

      If it seems that this leaves the customer powerless in the business relationship, it does not. As the “cloud” is a marketplace of technology infrastructures, platforms and applications, the customer exercises their power by choosing where to spend their hard-earned money. A decision to select a platform vendor that locks you into proprietary Python libraries, for instance, is a choice to support such programming lock-in. On the other hand, insistence on portable virtual machine formats will drive the market towards a true commodity compute capacity model.

      The key reason for giving vendors such power is to maximize innovation. By restricting how technology gets developed or released, the market risks restricting the ways in which technologists can innovate. History shows that eventually the “open” market catches up to most innovations (or bypasses them altogether), and the pace at which this happens is greatly accelerated by open source. Nonetheless, forcing innovation through open source or any other single method runs the risk of weakening capitalist entrepreneurial risk taking.

    2. The customer, however, has the right to use any method legally possible to extend, replicate, leverage or better any given vendor technology. If a vendor provides a proprietary API for virtual machine management in their cloud, customers (aka “the community”, in this case) have every right to experiment with “home grown” implementations of alternative technologies using that same API. This is also true for replicating cloud platform functionality, or even complete applications–though, again, the right only extends to legal means.

      Possibly the best thing a cloud vendor can do to extend its community, and to encourage innovation on its platform from community members, is to open that platform as much as possible. By making itself the “reference platform” for its respective market space, an open vendor creates a petri dish of sorts for cultivating differentiating features and successes on its platform. Protective, proprietary vendors are on their own.

These three articles serve as the baseline for customer, vendor and, as necessary, government relationships in the new network-based computing marketplace. No claim is made that this document is complete, or final. These articles may be changed or extended at any time, and additional articles can be declared, whether in response to new technologies or business models, or simply to reflect the business reality of the marketplace. It is also a community document, and others are encouraged to bend and shape it in their own venues.
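
As a closing thought experiment (not part of the Bill itself): Article I, Section 2 requires API-level access to all customer data, in bulk or record by record. Here is a minimal sketch of what a conforming export client might look like from the customer’s side; the base URL, paging scheme and field names are entirely hypothetical, and no real vendor’s API is implied:

```python
# A thought experiment, not a spec: what the API-level access required by
# Article I, Section 2 might look like from the customer's side. The base
# URL, paging scheme and field names below are entirely hypothetical.

import json
import urllib.request

BASE = "https://api.example-cloud-vendor.com/v1"   # hypothetical endpoint

def fetch(path: str) -> dict:
    with urllib.request.urlopen(f"{BASE}{path}") as resp:
        return json.load(resp)

def export_records(dataset: str):
    """Record-by-record export via paging, so no data is held hostage."""
    page = 0
    while True:
        batch = fetch(f"/data/{dataset}?page={page}")   # hypothetical paging
        yield from batch["records"]
        if not batch.get("next_page"):
            break
        page += 1

def export_bulk(dataset: str, out_path: str) -> None:
    """Bulk export of everything Article I says the customer owns,
    written to a local archive the customer controls."""
    with open(out_path, "w") as f:
        json.dump(list(export_records(dataset)), f)
```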

The Cloud Computing Bill of Rights

Update: Title and version number added before Cloud Computing Bill of Rights text below.

Before you architect your application systems for the cloud, you have to set some ground rules on what to expect from the cloud vendors you either directly or indirectly leverage. It is important that you walk into these relationships with certain expectations, in both the short and long term, and both those that protect you and those that protect the vendor.

This post is an attempt to capture many of the core rights that both customers and vendors of the cloud should come to expect, with the goal of setting that baseline for future Cloud Oriented Architecture discussions.

This is but a first pass, presented to the community for feedback, discussion, argument and–if deserved–derision. Your comments below will be greatly appreciated in any case.

The Cloud Computing Bill of Rights (0.1)

In the course of technical history, there exist few critical innovations that forever change the way technical economies operate; forever changing the expectations that customers and vendors have of each other, and the architectures on which both rely for commerce. We, the parties entering into a new era driven by one such innovation–that of network based services, platforms and applications, known at the writing of this document as “cloud computing”–do hereby avow the following (mostly) inalienable rights:

  • Article I: Customers Own Their Data

    1. No vendor shall, in the course of its relationship with any customer, claim ownership of any data uploaded, created, generated, modified, hosted or in any other way associated with the customer’s intellectual property, engineering effort or media creativity. This also includes account configuration data, customer generated tags and categories, usage and traffic metrics, and any other form of analytics or meta data collection.

      Customer data is understood to include all data directly maintained by the customer, as well as that of the customer’s own customers. It is also understood to include all source code and data related to configuring and operating software directly developed by the customer, except for data expressly owned by the underlying infrastructure or platform provided by the vendor.

    2. Vendors shall always provide, at a minimum, API level access to all customer data as described above. This API level access will allow the customer to write software which, when executed against the API, allows access to any customer maintained data, either in bulk or record-by-record as needed. As standards and protocols are defined that allow for bulk or real-time movement of data between cloud vendors, each vendor will endeavor to implement such technologies, and will not attempt to stall such implementation in an attempt to lock in its customers.

  • Article II: Vendors and Customers Jointly Own System Service Levels

    1. Vendors own, and shall do everything in their power to meet, service level targets committed to with any given customer. All required effort and expense necessary to meet those explicit service levels will be spent freely and without additional expense to the customer. While the specific legally binding contracts or business agreements will spell out these requirements, it is noted here that these service level agreements are entered into expressly to protect the customer’s business interests, and all decisions by the vendor will take this into account.

      Where no explicit service level agreement exists with a customer, the vendor will endeavor to meet any expressed service level targets provided in marketing literature or the like. At no time will it be acceptable for a vendor to declare a high level of service at a base price, only to later indicate that that level of service is only available at a higher premium price.

      It is perfectly acceptable, however, for a vendor to expressly sell a higher level of service at a higher price, as long as they make that clear at all points where a customer may evaluate or purchase the service.

    2. Ultimately, though, customers own their service level commitments to their own internal or external customers, and the customer understands that it is their responsibility to take into account possible failures by each vendor that they do business with.

      Customers relying on a single vendor to meet their own service level commitments enter into an implicit agreement to tie their own service level commitments to the vendor’s, and to live and die by the vendor’s own infrastructure reliability. Those customers who take their own commitments seriously will seek to build or obtain independent monitoring, failure recovery and disaster recovery systems.

    3. Where customer/vendor system integration is necessary, the vendor must offer options for monitoring the viability of that integration at as many architectural levels as required to allow the customer to meet their own service level commitments. Where standards exist for such monitoring, the vendor will implement those standards in a timely and complete fashion. The vendor should not underestimate the importance of this monitoring to the customer’s own business commitments.

  • Article III: Vendors Own Their Interfaces

    1. Vendors are under no obligation to provide “open” or “standard” interfaces, other than as described above for data access and monitoring. APIs for modifying user experience, frameworks for building extensions or even complete applications for the vendor platform, or such technologies can be developed however the vendor sees fit. If a vendor chooses to require developers to write applications in a custom programming language with esoteric data storage algorithms and heavily piracy protected execution systems, so be it.

      If it seems that this leaves the customer powerless in the business relationship, it does not. As the “cloud” is a marketplace of technology infrastructures, platforms and applications, the customer exercises their power by choosing where to spend their hard-earned money. A decision to select a platform vendor that locks you into proprietary Python libraries, for instance, is a choice to support such programming lock-in. On the other hand, insistence on portable virtual machine formats will drive the market towards a true commodity compute capacity model.

      The key reason for giving vendors such power is to maximize innovation. By restricting how technology gets developed or released, the market risks restricting the ways in which technologists can innovate. History shows that eventually the “open” market catches up to most innovations (or bypasses them altogether), and the pace at which this happens is greatly accelerated by open source. Nonetheless, forcing innovation through open source or any other single method runs the risk of weakening capitalist entrepreneurial risk taking.

    2. The customer, however, has the right to use any method legally possible to extend, replicate, leverage or better any given vendor technology. If a vendor provides a proprietary API for virtual machine management in their cloud, customers (aka “the community”, in this case) have every right to experiment with “home grown” implementations of alternative technologies using that same API. This is also true for replicating cloud platform functionality, or even complete applications–though, again, the right only extends to legal means.

      Possibly the best thing a cloud vendor can do to extend its community, and to encourage innovation on its platform from community members, is to open that platform as much as possible. By making itself the “reference platform” for its respective market space, an open vendor creates a petri dish of sorts for cultivating differentiating features and successes on its platform. Protective, proprietary vendors are on their own.

These three articles serve as the baseline for customer/vendor relationships in the new network-based computing marketplace. No claim is made that this document is complete, or final. These articles may be changed or extended at any time, and additional articles can be declared, whether in response to new technologies or business models, or simply to reflect the business reality of the marketplace. It is also a community document, and others are encouraged to bend and shape it in their own venues.

Comments, complaints or questions can be directed to the author through the comments section below.