
Archive for the ‘cloud security’ Category

The enterprise "barrier-to-exit" to cloud computing

December 2, 2008

An interesting discussion unfolded on Twitter this weekend between myself and George Reese of Valtira. George–who recently published some thought-provoking posts on O’Reilly Broadcast about cloud security, and is writing a book on cloud computing–argued strongly that the benefits gained from moving to the cloud outweigh any additional costs that may result. In fact, in one tweet he noted:

IT is a barrier to getting things done for most businesses; the Cloud reduces or eliminates that barrier.

I reacted strongly to that statement; I don’t buy that IT is that bad in all cases (though some certainly is), nor do I buy that simply eliminating a barrier to getting something done makes it worthwhile. Besides, the barrier being removed isn’t strictly financial; it is corporate IT policy. I can build a kick-butt home entertainment system for my house for $50,000; that doesn’t mean it’s the right thing to do.

However, as the conversation unfolded, it became clear that George and I were coming at the problem from two different angles. George was talking about many SMB organizations, which really can’t justify the cost of building their own IT infrastructure, but have been faced with a choice of doing just that, turning to (expensive and often rigid) managed hosting, or putting a server in a colo space somewhere (and maintaining that server). Not very happy choices.

Enter the cloud. Now these same businesses can simply grab capacity on demand, start and stop billing at their leisure, and get truly world-class power, virtualization and networking infrastructure without having to put an ounce of thought into it. Yeah, it costs more than simply running a server would, but when you add in the infrastructure, managed hosting fees and colo leases, the cloud almost always looks like the better deal. At least that’s what George claims his numbers show, and I’m willing to accept that. It makes sense to me.

I, on the other hand, was thinking of medium to large enterprises that already own significant data center infrastructure, and already have sunk costs in power, cooling and assorted facilities. For this class of business, those sunk costs must be weighed alongside server acquisition and operation costs when comparing against the cost of getting the same services from the cloud. In this case, those investments often tip the balance, and it becomes much cheaper to use existing infrastructure (though with some automation) to deliver fixed-capacity loads. As I discussed recently, the cloud generally only gets interesting for loads that are not running 24X7.
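(Just to make the math concrete, here’s the kind of back-of-the-envelope comparison I’m talking about, as a quick Python sketch. Every number in it is a hypothetical placeholder, not anyone’s real pricing; plug in your own.)

```python
# Back-of-the-envelope sketch of the sunk-cost argument above.
# All numbers are hypothetical placeholders -- substitute your own.

HOURS_PER_MONTH = 730

def on_prem_monthly_cost(server_capex=3000, amortization_months=36,
                         power_cooling_per_month=75, admin_per_month=50):
    """Marginal monthly cost of one more server in a data center you already own."""
    return server_capex / amortization_months + power_cooling_per_month + admin_per_month

def cloud_monthly_cost(hourly_rate=0.40, utilization_hours=HOURS_PER_MONTH):
    """Monthly cost of renting an equivalent on-demand instance."""
    return hourly_rate * utilization_hours

print(f"on-prem, 24x7 (marginal): ${on_prem_monthly_cost():.0f}/month")                     # ~$208
print(f"cloud, 24x7:              ${cloud_monthly_cost():.0f}/month")                       # ~$292
print(f"cloud, bursty (200 hrs):  ${cloud_monthly_cost(utilization_hours=200):.0f}/month")  # ~$80
```

With these (made-up) figures, the steady 24X7 load stays cheaper on existing infrastructure, while the bursty load flips in the cloud’s favor, which is exactly the point.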

(George actually notes a class of applications that sadly are also good candidates, though they shouldn’t necessarily be: applications that IT just can’t or won’t get to on behalf of a business unit. George claims his business makes good money meeting the needs of marketing organizations that have this problem. Just make sure the ROI is really worth it before taking this option, however.)

This existing investment in infrastructure therefore acts almost as a “barrier-to-exit” for these enterprises when considering moving to the cloud. It seems to me highly ironic, and perhaps somewhat unique, that certain trails in the cloud computing market will be blazed not by organizations with multiple data centers and thousands upon thousands of servers, but by the little mom-and-pop shop that used to own a couple of servers in a colo somewhere and finally shut them down and turned to Amazon. How cool is that?

The good news, as I hinted at earlier, is that there is technology that can be rationalized financially–through capital equipment and energy savings–which in turn can “grease the skids” for cloud adoption in the future. Ask the guys at 3tera. They’ll tell you that their cloud infrastructure allows an enterprise to optimize infrastructure usage while enabling workload portability (though not portability of running workloads) between cloud providers running their stuff. VMWare introduced their vCloud initiative specifically to make enterprises aware of the work they are doing to allow workload portability across data centers running their stuff. Cisco (my employer) is addressing the problem. In fact, there are several great products out there that can give you cloud technology in your enterprise data center and open the door to cloud adoption now (with things like cloudbursting) and in the future.

If you aren’t considering how to “cloud enable” your entire infrastructure today, you ought to be getting nervous. Your competitors probably are looking closely at these technologies, and when the time is right, their barrier-to-exit will be lower than yours. Then, the true costs of moving an existing data center infrastructure to the cloud will become painfully obvious.

Many thanks to George for the excellent discussion. Twitter is becoming a great venue for cloud discussions.

Cisco’s Nexus 1000v and the Cloud: Is it really a big deal?

September 17, 2008

Yesterday, the big announcements at VMWorld 2008 were about Cloud OSes. Today, the big news seemed to be Maritz’s keynote (where he apparently laid out an amazing vision of what VMWare thinks they can achieve in the coming year), and the long rumored Cisco virtual switch.

The latter looks to be better than I had hoped for functionally, though perhaps a little more locked in to VMWare than I’d like. There is an explanation for that lock-in, however, so it may not be so bad…see below.

I’ve already explained why I love the Nexus concept so much. Today, Cisco and VMWare jointly announced the Nexus 1000v virtual machine access switch, a fully VI compatible software switch that…well, I’ll let Cisco’s data sheet explain it:

“The Cisco Nexus™ 1000V virtual machine access switch is an intelligent software switch implementation for VMware ESX environments. Running inside of the VMware ESX hypervisor, the Cisco Nexus 1000V supports Cisco® VN-Link server virtualization technology, providing

  • Policy-based virtual machine (VM) connectivity
  • Mobile VM security and network policy, and
  • Non-disruptive operational model for your server virtualization and networking teams.

When server virtualization is deployed in the data center, virtual servers typically are not managed the same way as physical servers. Server virtualization is treated as a special deployment, leading to longer deployment time with a greater degree of coordination among server, network, storage, and security administrators. But with the Cisco Nexus 1000V you can have a consistent networking feature set and provisioning process all the way from the VM to the access, aggregation, and core switches. Your virtual servers can use the same network configuration, security policy, tools, and operational models as physical servers. Virtualization administrators can leverage predefined network policy that follows the nomadic VM and focus on virtual machine administration. This comprehensive set of capabilities helps you to deploy server virtualization faster and realize its benefits sooner.”

In other words, the 1000v is a completely equal player in a Cisco fabric, and can completely leverage all of the skill sets and policy management available in its other switches. Think “my sys admins can do what they do best, and my network admins can do what they do best”. Furthermore, it supports VN-Link, which allows VMWare systems running on Cisco fabric to VMotion without losing any network or security configuration. Read that last sentence again.

(I wrote some time ago about network administrators facing the most change from this whole pooled-resource thing–this feature seals the deal. Those static network maps they used to hang on the wall, showing exactly which system was connected to which switch port with which IP address, are now almost entirely obsolete.)

I love that feature. I will love it even more if it functions in its entirety in the vCloud concept that VMWare is pitching, and all indications are that it will. So, to tell the story here as simply as possible:

  • You create a group of VMs for a distributed application in VConsole
  • You assign network security and policy via Cisco tools, using the same interface as on the physical switches
  • You configure VMWare to allow VMs for the application to get capacity from an external vendor–one of dozens supporting vCloud
  • When an unexpected peak hits, your VM cluster grabs additional capacity as required in the external cloud, without losing network policy and security configurations.

Cloud computing nirvana.
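To make that story a little more concrete, here is a purely hypothetical sketch of the flow. None of the classes below correspond to actual VMware or Cisco APIs; the point is simply that the network policy object travels with the VMs when they burst.

```python
# Hypothetical sketch of the cloudbursting flow described above.
# None of these classes correspond to real VMware or Cisco interfaces.

class NetworkPolicy:
    def __init__(self, vlan, acl, qos_profile):
        self.vlan, self.acl, self.qos_profile = vlan, acl, qos_profile

class VMCluster:
    def __init__(self, name, policy, local_capacity):
        self.name = name
        self.policy = policy              # defined once, travels with the VMs
        self.local_capacity = local_capacity
        self.burst_provider = None

    def enable_bursting(self, provider):
        self.burst_provider = provider    # an external vCloud-style capacity vendor

    def handle_load(self, demand):
        local = min(demand, self.local_capacity)
        overflow = demand - local
        if overflow > 0 and self.burst_provider:
            # VMs started remotely inherit the same policy object
            print(f"{self.name}: bursting {overflow} VMs to {self.burst_provider} "
                  f"with VLAN {self.policy.vlan} and ACL {self.policy.acl}")
        return local, max(overflow, 0)

app = VMCluster("web-tier",
                NetworkPolicy(vlan=120, acl="web-dmz", qos_profile="gold"),
                local_capacity=20)
app.enable_bursting("external-vcloud-partner")
app.handle_load(demand=35)   # unexpected peak: 20 VMs local, 15 burst externally
```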

Now, there are some disappointments, as I hinted above. First, the switch is not stackable, as originally hoped, though the interconnectivity of VN-Link probably overrides that. (Is VN-Link just another way to “stack” switches? Networking is not my strong point.)

Update: In the comments below, Omar Sultan of Cisco notes that the switches are, in fact, “virtually stackable”, meaning they can be distributed across multiple physical systems, creating a single network domain for a cluster of machines. I understand that just enough to be dangerous, so I’ll stop there.

More importantly, I was initially kind of ticked off that Cisco partnered so closely with VMWare without being careful to note that they would be releasing similar technologies with Citrix and Red Hat at a minimum. But, as I thought about it, Citrix hitched its wagon to 3TERA, and 3TERA owns every aspect of the logical infrastructure an application runs on. In AppLogic, you have to use their network representation, load balancers, and so on as a part of your application infrastructure definition, and 3TERA maps those to real resources as it sees fit. For network connections, it relies on a “Logical Connection Manager (LCM)”:

“The logical connection manager implements a key service that abstracts intercomponent communications. It enables AppLogic to define all interactions between components of an application in terms of point-to-point logical connections between virtual appliances. The interactions are controlled and tunneled across physical networks, allowing AppLogic to enforce interaction protocols, detect security breaches and migrate live TCP connections from one IP network to another transparently.”

(from the AppLogic Grid Operating System Technical Overview: System Services)

Thus, there is no concept of a virtual switch, per se, in AppLogic. A quick look at their site shows no other partners in the virtual networking or load balancing space (though Nirvanix is a virtual storage partner), so perhaps Cisco simply hasn’t been given the opportunity or the hooks to participate in the Xen/3TERA Cloud OS.

(If anyone at 3TERA would like to clarify, I would be extremely grateful. If Cisco should be partnering here, I would be happy to add some pressure to them to do so.)

As for Red Hat, I honestly don’t know anything about their VMM, so I can’t guess at why Cisco didn’t do anything there…although my gut tells me that I won’t be waiting long to hear about a partnership between those two.

This switch makes VMWare VMs equal players in the data center network, and that alone is going to disrupt a lot of traditional IT practices. While I was at Cassatt, I remember a colleague predicting that absolutely everything would run in a VM by the end of this decade. That still seems a little aggressive to me, but a lot less so than it did yesterday.

Cloud Computing and the Constitution

September 8, 2008

A few weeks ago, Mark Rasch of SecurityFocus wrote an article for The Register in which he described in detail the deterioration of legal protections that individuals and enterprises have come to expect from online services that house their data. I’ll let you read the article to get the whole story of Stephen Warshak vs. United States of America, but suffice to say the case opened Rasch’s eyes (and mine) to a series of laws and court decisions that I believe seriously weaken the case for storing your data in the cloud in the United States:

  • The Stored Communications Act, which was used to allow the FBI to access Warshak’s email communications without a warrant, his consent, or any form of notification.

  • The appeals court decisions in the case that argue:

    1. Even if the Stored Communications Act is unconstitutional, Warshak cannot block introduction of the evidence as “the cops reasonably relied on it”
    2. Regardless of that outcome, the court could not determine if “emails potentially seized by the government without a warrant would be subject to any expectation of privacy”
  • The Supreme Court decision in Smith v. Maryland, in which the court argued that people generally gave up an expectation of privacy with regards to their phone records simply through the act of dialing their phone–which potentially translates to removing privacy expectation on any data sent to and accessible by a third party.

Rasch notes that in cloud computing, because most terms of service and license agreements are written to give the providers some right of access in various circumstances, all data stored at a provider is subject to the same legal treatment.

This is a serious flaw in the constitutional protections against illegal search and seizure, in my opinion, and may be a reason why US data centers will lose out completely on the cloud computing opportunity. Think about it. Why the heck would I commit my sensitive corporate data to the cloud if the government can argue that a) doing so removes my protections against search and seizure, and b) all expectations of privacy are further removed should my terms of service allow anyone other than myself or my organization to access the data? Especially when I can maintain both privileges simply by storing and processing my data on my own premises?

Couple this with the fact that the Patriot Act is keeping many foreign organizations from even considering US-based cloud storage or processing, and you see how it becomes nearly impossible to guarantee to the world market the same security for data outside the firewall as can be guaranteed inside.

It is my belief that this is the number one issue that darkens the otherwise bright future of cloud computing in the United States. Simple technical security of data, communications and facilities is a solvable problem. Portability of data, processing and services across applications, organizations or geographies is also technically solvable. But, if the US government chooses to destroy all sense of constitutional protection of assets in the cloud, there will be no technology that can save US-based clouds for critical security sensitive applications.

It may be too late to do the right thing here; to declare a cloud storage or processing facility the equivalent of a rented office space or an apartment building–leased spaces where all constitutional protection against illegal search and seizure remain in full strength. When I was younger and rented an apartment, I had every right to expect law enforcement wishing to access my personal spaces would be required to obtain a warrant and present it to me as they began their search. The same, in my opinion, should apply to data I store in the cloud. I should rest assured that the data will not be accessed without the same stringent requirements for a search warrant and notification.

Still, there are a few things individuals and companies can do today that appear likely to thwart attempts to secretly access private data.

  1. Encrypt your data before sending it to your cloud provider, and under no circumstances provide your provider with the keys to that encryption. This means that the worst a provider can be required to do is hand over the encrypted files. You may even be able to argue that your expectations of privacy were maintained, as you handed over no accessible information to the provider, simply ones and zeros. (See the sketch after this list for what that looks like in practice.)

  2. Require that your provider modify their EULA/ToS to disavow ANY right to directly access your data or associated metadata for any reason. The exception might be file lengths, etc., required to run the hardware and management software, but certainly no core content or metadata that might reveal the relevant details about that content. This would also weaken the government’s case that you gave up privacy expectations when you handed your data to that particular cloud provider.

  3. Store your data and do your processing outside of the United States. It kills me to say that, but you may be forced into that corner.
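For the first recommendation, here is a minimal sketch of what client-side encryption might look like, using the Python cryptography package’s Fernet recipe. The upload call is a stand-in for whatever API your provider actually exposes; the important part is that the key never leaves your premises.

```python
# Minimal sketch of recommendation 1: encrypt locally, keep the key locally.
# Requires the third-party "cryptography" package; upload_to_provider() is a
# hypothetical stand-in for your provider's real API.

from cryptography.fernet import Fernet

def encrypt_for_upload(plaintext: bytes, key_path: str = "local.key") -> bytes:
    key = Fernet.generate_key()
    with open(key_path, "wb") as f:        # the key stays on your own premises
        f.write(key)
    return Fernet(key).encrypt(plaintext)

def decrypt_after_download(ciphertext: bytes, key_path: str = "local.key") -> bytes:
    with open(key_path, "rb") as f:
        key = f.read()
    return Fernet(key).decrypt(ciphertext)

blob = encrypt_for_upload(b"quarterly financials")
# upload_to_provider(blob)   # hypothetical call -- the provider only ever sees ciphertext
assert decrypt_after_download(blob) == b"quarterly financials"
```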

If others have looked at this issue and see different approaches (both political and technical) to solving this (IMHO) crisis, I’d love to hear them. I have to admit I’m a little down on the cloud right now (at least on US-based cloud services) because of the legal and constitutional issues that have yet to be worked out in a cloud consumer’s favor.

Oh, and this issue isn’t even close to being on the radar screen of either of the major presidential candidates at this point. I’m beginning to consider what it would take to get it into their faces. Anyone have Lawrence Lessig’s number handy?

Are We Overselling the Cloud to Ourselves?

I was doing some casual reading tonight (which is all I have time to do lately, it seems), when I came across this post from Thomas Wailgum of CIO.com on InfoWorld. (Ain’t syndication grand?) The majority of the post is commentary from Gartner about the relative infancy of SaaS ERP solutions compared to their on-premises brethren. Interesting in and of itself, but not normally worthy of a post here.

However, on the second page, I came across the following quote:

Other inhibitors to more widespread SaaS ERP adoption, Ganly contends, relate to total cost of ownership (TCO). TCO of “SaaS ERP suites likely will be significant and may not compare favorably with on-premises solutions,” she adds. This problem applies to vendors as well. SaaS vendors “often have unrealistic expectations of their operating costs,” she writes. “The multitenant architecture needed for SaaS ERP suites results in high internal efforts and costs for the initial setup and the ongoing maintenance and upgrade of the system.”

Security has also been an issue with SaaS ERP offerings, “especially with regard to financial data and privacy concerns,” Ganly writes. “Vendors must prove to organizations that are considering SaaS ERP adoption that their security and privacy concerns are unfounded through super low-cost or no-cost, proof-of-concept trials, encouraging early adoption through value pricing and getting early adopters to share their success stories.”

[Emphasis mine.]

It occurs to me that this is a really good point to consider when looking at the economics of the cloud computing market. For SaaS vendors, cost-of-sales is still high, as the sale is (and always will be) a hybrid of the traditional enterprise sales model: high investment in building customer relationships, proving technical and business feasibility, and navigating corporate politics, though likely with fewer of the “big meeting” costs found in traditional relationship sales.

Thus, the “economies of scale” from data center operations may be vastly overshadowed by cost of acquiring customers.

However, a “pure infrastructure” play (such as poster child Amazon) eliminates most of the cost-of-sales if it can prove a low barrier to entry and significant flexibility of use. Most customers discover Amazon, get set up for free, then pay nominal charges to figure out for themselves how to use the platform. There is no real data lock-in, as the storage services are essentially device storage (as opposed to specific data schemas), so the cost of choosing not to move forward with a pure IaaS vendor is relatively low.

There are few CxO-level relationships between AWS and their customers (though I don’t doubt there are several with, say, financial services behemoths with deep pockets and an interest in influencing AWS).

The point is, when most technologists think of the cloud, they think of something like Amazon, not something like DemandERP. But much of the value of the cloud comes from getting the resources you need in (usually) an on-demand model. If the price and experience can’t be both superior to on-premises ERP for the customer and profitable for the vendor…well, as the kids say these days, “fail”.

I worry that many of the boutique IaaS vendors are also going to fall into the same trap–not understanding how the cost of acquiring customers to a specialized platform or service will wipe out the economies-of-scale savings of multitenancy. There will be a lot of churn out there in the coming years, and a lot of wispy corpses floating in the clouds. Caveat emptor.
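Here is a toy bit of arithmetic to illustrate the trap; all of the numbers below are invented purely for illustration.

```python
# Toy arithmetic for the point above: customer-acquisition cost can swamp
# any economies of scale in the data center. Hypothetical numbers only.

def payback_months(acquisition_cost, monthly_subscription, monthly_infra_cost):
    """Months before a customer's gross margin covers the cost of acquiring them."""
    margin = monthly_subscription - monthly_infra_cost
    return float("inf") if margin <= 0 else acquisition_cost / margin

# A SaaS ERP sale with an enterprise-style sales cycle...
print(payback_months(acquisition_cost=50_000, monthly_subscription=4_000,
                     monthly_infra_cost=1_200))   # ~17.9 months

# ...versus a self-service infrastructure customer who found the service on their own.
print(payback_months(acquisition_cost=50, monthly_subscription=300,
                     monthly_infra_cost=120))     # ~0.3 months
```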

Oh, and to the point of the efficiency of Amazon’s model: Jeff Barr notes that he can’t find a failed startup that used EC2/S3 as its core infrastructure:

“One of the major value propositions of Amazon Web Services is the utility pricing plan. That is, you only pay for what you use, and the cost is very low. Sometimes it feels like I am just saying that: not because there is any doubt that it’s true; rather because it’s difficult to produce metrics to back up assertions that “low cost utility pricing” is truly a game changer.

Then it hit me… Looking at the list of Start-Up Project presentations on Slideshare’s site, I realized that not a single one of these companies is “off the air”; that is, they all are still in business. In the Startup world that is nothing short of amazing—especially in this economy. (Some of the decks on Slideshare’s site are not from last year’s startup events; however even those other companies appear to be alive and well.) Amazon can’t take all the credit for this track record; however it does seem to be a solid data point that validates the value proposition.”

That is amazing, if it holds true.

The Principles of a Cloud Oriented Architecture

The market is hot. The technologies are appearing fast and furious. The tools you need are out there, but they are young, often untested, and of unpredictable reliability. You’ve researched the economics, and you know now that cloud computing is a) here to stay, and b) offers economic advantages that–if realized–could stretch your IT budget and quite possibly catapult your career.

Now what?

What is often overlooked in the gleeful rush to cloud computing is the difficulty of molding the early technologies in the space into a truly bulletproof (or even bullet-resistant) business infrastructure. You see it all over the Internet: the push and pull between innovation and reliability, the concerns about security, monitoring and control, even the constant confusion over what cloud computing entails, which technologies to select for a given problem, and how to create an enterprise-class business system out of those technologies.

The truth is, cloud computing doesn’t launch our technical architectures into the future. It is, at its heart, an economic model that drives the parameters around how you acquire, pay for and scale the infrastructure architectures you already know. It’s not a question of changing the problems that must be solved when utilizing data centers, just a change to the division of responsibilities among yourself, your organization, your cloud providers and the Internet itself.

To this end, I offer you a series of posts (perhaps moving to a WIKI in the near future) describing in depth my research into what it takes to deliver a systems architecture with the following traits:

  1. It partially or entirely incorporates the cloud for at least one layer of the Infrastructure/Platform/Application stack.
  2. It is focused on consumers of cloud technologies, not the requirements of those delivering cloud infrastructures, whether public or private (or even dark).
  3. It accounts for the variety of technical, economic and even political factors that affect systems running in the “cloud”.
  4. It is focused at least as much on the operational aspects of the system as on the design and development aspects.

The idea here is not to introduce an entirely new paradigm–that’s the last thing we need given the complexity of the task ahead of us. Nor is it to replace the basic principles of SOA or any other software architecture. Rather, the focus of this series is on how to best prepare for the new set of requirements before us.

Think about it. We already deal (or try to deal) with a world in which we don’t entirely have control over every aspect of the world our applications live in. If we are software developers, we rely on others to build our servers, configure our networks, provide us storage and weld them all together into a cohesive unit. System administrators are, in large enterprises anyway, specializing in OS/application stacks, networking, storage or system management. (Increasingly you can add facilities and traditional utilities to this list.)

Even when we outsource to others–shifting responsibility for management of parts or all of our IT infrastructure to a vendor–the vendor doesn’t have control over significant elements of the end-to-end operations of our applications; namely, the Internet itself. But with outsourcing, we typically turn over entire, intact architecture stacks, with a few, very well bounded integration points to manage (if any) between outsourced systems and locally maintained systems.

The cloud is going to mess this up. I say this not just because the business relationship is different from outsourcing, but also because what you are “turning over” can be a *part* of a system stack. Smugmug outsources storage and job processing, but not the web experience that relies on both. Applications that run entirely on EC2/S3 outsource the entire infrastructure, but not the application development, or even the application system management. (This is why RightScale, Hyperic and others are finding some traction with AWS customers.)

To prepare for a cloud oriented architecture, one must understand where the responsibilities lie. So, I’ll give you a teaser of what is to come with the short-short version of where I see those responsibilities landing (subject to change as I talk to others, including yourselves if you choose to comment on this post):

  • The enterprise has responsibility for the following:
    • Defining the business solution to be solved, the use cases that define that solution, and the functional requirements to deliver those use cases
    • Evaluating the selection of technical and economic approaches for delivering those functional requirements, and selecting the best combination of the two. (In other words, the best combination may not contain either the best technical or best economic selection, but will outweigh any other combination of the two.)
    • Owning the service level agreements with the business for the delivery of those use cases. This is critically important. More on this below.
  • The cloud provider has responsibility for the following:
    • Delivering what they promised you (or the market) that they would deliver. No more, no less.
    • Providing you with transparent and honest billing and support services.
  • The Internet itself is only responsible for providing you with an open, survivable infrastructure for interconnecting the networks you need to run your applications and/or services. There are no promises here about reliability or scalability or even availability. It should be considered a technical wilderness, and treated accordingly.

Now, about SLAs. Your cloud provider does not own your SLAs, you do. They may provide some SLAs that support your own, but they are not to be blamed if you fail to achieve the SLAs demanded of you. If your applications or services fail because the cloud failed, you failed. Given that, don’t “outsource” your SLAs, at least not logically. Own them.

In fact, I would argue that the single most important function of a cloud-centric IT shop, after getting required business functionality up and running in the first place, is monitoring and actively managing that functionality; switching vendors, if necessary, to continue service at required levels. The one big piece of IT-specific software that should always run in IT data centers, in my opinion, is the NOC infrastructure. (Although, perhaps in this context it’s more of a Cloud Operations Center, but I hate the resulting acronym for obvious reasons.)
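To make the “own your SLAs” point concrete, here is a minimal sketch of the kind of probe such an operations center might run; the endpoint, window and threshold are placeholders, not a recommendation of specific values.

```python
# A minimal sketch of "owning your SLA": probe your own service endpoints from
# your own operations center and compare measured availability against the SLO
# you owe the business. URL, window and threshold are placeholders.

import urllib.request
from collections import deque

SLO_AVAILABILITY = 0.999    # what the business holds *you* to
WINDOW = 1000               # keep the last N probe results

class SLAMonitor:
    def __init__(self, url):
        self.url = url
        self.results = deque(maxlen=WINDOW)

    def probe(self) -> bool:
        try:
            with urllib.request.urlopen(self.url, timeout=5) as resp:
                ok = resp.status == 200
        except Exception:
            ok = False              # any failure counts against availability
        self.results.append(ok)
        return ok

    def availability(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 1.0

    def breach(self) -> bool:
        return self.availability() < SLO_AVAILABILITY

monitor = SLAMonitor("https://app.example.com/health")   # hypothetical endpoint
monitor.probe()
if monitor.breach():
    print("SLO at risk -- time to shift load to the secondary provider")
```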

I’ll focus more on these responsibilities in future posts. All posts in this series will be tagged “coa principles”. Please feel free to provide me feedback in the comments, contact me to review your thoughts on this topic, or simply send me links that you think I should be aware of. I am also working to find other bloggers who wish to take ownership of parts of this primer (cloud security, for example), so let me know if you are interested there as well.

I am excited about this. This body of knowledge (or at least the faint traces of it) has been rattling around inside my head for some time, and it feels good to finally be sharing it with you.

Cloud Outages, and Why *You* Have To Design For Failure

I haven’t posted for a while because I have been thinking…a lot…about cloud computing, inevitable data center outages, and what it means to application architectures. Try as I might to put the problem on the cloud providers, I keep coming back to one bare fact; the cloud is going to expose a lot of the shortcomings of today’s distributed architectures, and this time it’s up to us to make things right.

It all started with some highly informative posts from the Data Center Knowledge blog chronicling outages at major hosting companies, and failures that helped online companies learn important lessons about scaling, etc. As I read these posts, the thought that struck me was, “Well, of course. These things are inevitable. Who could possibly predict every negative influence on an application, much less a data center?” I’ve been in enough enterprise IT shops to know that even the very best are prepared for something unexpected to happen. In fact, what defines the best shops is that they assume failure and prepare for it.

Then came the stories of disgruntled employees locking down critical information systems or punching the emergency power kill switch on their way out the door. Whether or not you are using the cloud, human psychology being what it is, we have to live every day with immaturity or even just plain insanity.

Yet, each time one of the big name cloud vendors has an outage–Google had one, as did Amazon a few times, including this weekend–there are a bunch of IT guys crying out, “Well, there you go. The cloud is not ready for production.”

Baloney, I say. (Well, I actually use different vocabulary, but you get the drift.) Truth is, the cloud is just exposing people’s unreasonable expectations for what a distributed, disparate computing environment provides. The idea that some capacity vendor is going to give you 100% uptime for years on end–whether they promised it or not–is just delusional. Getting angry at your vendor for an isolated incident or pooh-poohing the market in general just demonstrates a lack of understanding of the reality of networked applications and infrastructure.

If you are building an application for the Internet–much less the cloud–you are building a distributed software system. A distributed system, by definition, relies on a network for communication. Some years ago, Peter Deutsch and others at Sun postulated a series of fallacies that tend to be the pitfalls all distributed systems developers run into at one time or another in their careers. Hell, I still have to check my work against these each and every time I design a distributed system.

Key among these is the delusion that the network is reliable. It isn’t, it never has been, and it never will be. For network applications, great design is defined by the application or application system’s ability to weather undesirable states. There are a variety of techniques for achieving this, such as redundancy and caching, but I will dive into those in more depth in a later post. (A great source for these concepts is http://highscalability.com.)
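As a taste of those techniques, here is a hedged sketch of two of them: retrying transient network failures with backoff, and falling back to a (possibly stale) cache when the remote dependency stays down. The remote call itself is left as a placeholder.

```python
# Sketch of two of the techniques mentioned above: retry transient network
# failures with exponential backoff, and fall back to a (possibly stale) cache
# when the remote dependency stays down. fetch_remote is a caller-supplied
# placeholder for the real network call.

import random
import time

_cache = {}

def fetch_with_fallback(key, fetch_remote, retries=3, base_delay=0.5):
    for attempt in range(retries):
        try:
            value = fetch_remote(key)
            _cache[key] = value            # refresh the cache on every success
            return value
        except OSError:                    # the network is *not* reliable
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
    if key in _cache:
        return _cache[key]                 # degrade gracefully with stale data
    raise RuntimeError(f"no fresh or cached value for {key!r}")
```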

Some of the true pioneers in the cloud realized this early. Phil Wainwright notes that Alan Williamson of Mediafed made what appears to be a prescient decision to split their processing load between two cloud providers, Amazon EC2/S3 and FlexiScale. Even Amazon themselves use caching to mitigate S3 outages on their retail sites (see bottom of linked post for their statement).

Michael Hickins notes in his E-Piphanies blog that this may be an amazing opportunity for some skilled entrepreneurs to broker failure resistance in the cloud. I agree, but I think good distributed system hygiene begins at home. I think the best statement is a comment I saw on ReadWriteWeb:

“People rankled about 5 hours of downtime should try providing the same level of service. In my experience, it’s much easier to write-off your own mistakes (and most organizations do), than it is to understand someone else’s — even when they’re doing a better job than you would.”

Amen, brother.

So, in a near future post I’ll go into some depth about what you can do to utilize a “cloud oriented architecture”. Until then, remember: Only you can prevent distributed application failures.

"Follow the law" computing

A few days ago, Nick Carr worked his usual magic in analyzing Bill Thompson’s keen observation that every element of “the cloud” eventually boils down to a physical element in a physical location with real geopolitical and legal influences. This problem was first brought to my attention in a blog post by Leslie Poston noting that the Canadian government has refused to allow public IT projects to use US-based hosting environments for fear of security breaches authorized via the Patriot Act. Nick added another example with the following:

Right before the manuscript of The Big Switch was shipped off to the printer (“manuscript” and “shipped off” are being used metaphorically here), I made one last edit, adding a paragraph about France’s decision to ban government ministers from using Blackberrys since the messages sent by the popular devices are routinely stored on servers sitting in data centers in the US and the UK. “The risks of interception are real,” a French intelligence official explained at the time.

I hadn’t thought too much about the political consequences of the cloud since first reading Nick’s book, but these stories triggered a vision that I just can’t shake.

Let me explain. First, some setup…

One of the really cool visions that Bill Coleman used to talk about with respect to cloud computing was the concept of “follow the moon”; in other words, moving running applications globally over the course of an earth day to where processing power is cheapest–on the dark side of the planet. The idea was originally about operational costs in general, but these days Cassatt and others focus this vision around electricity costs.

The concept of “moving” servers around the world was greatly enhanced by the live motion technologies offered by all of the major virtualization infrastructure players (e.g. VMotion). With these technologies (as you all probably know by now), moving a server from one piece of hardware to another is as simple as clicking a button. Today, most of that convenience is limited to within a single network, but with upcoming SLAuto federation architectures and standards, that inter-LAN motion will be greatly simplified over the coming years.

(It should be noted that “moving” software running on bare metal is possible, but it requires “rebooting” the server image on another physical box.)

The key piece of the puzzle is automation. Whether simple runbook-style automation (automating human-centric processes) or all-out SLAuto, automation allows for optimized decision making across hundreds, thousands or even tens of thousands of virtual machines. Today, most SLAuto is blissfully unaware of runtime cost factors, such as cost of electricity or cost of network bandwidth, but once the elementary SLAuto solutions are firmly established, this is naturally the next frontier to address.
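Here is a toy illustration of what such a cost-aware placement decision might look like once automation does become aware of those factors; the sites, prices and latency numbers are all made up.

```python
# A toy "follow the moon" placement decision: pick the data center where power
# is cheapest right now, subject to a latency ceiling. Sites, prices and
# latencies are hypothetical; a real SLAuto engine would feed in live tariffs
# and telemetry.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    power_cost_kwh: float   # current spot price, USD/kWh
    latency_ms: float       # measured latency to the user population

def choose_site(sites, max_latency_ms=150):
    eligible = [s for s in sites if s.latency_ms <= max_latency_ms]
    return min(eligible, key=lambda s: s.power_cost_kwh) if eligible else None

sites = [
    Site("us-west",    power_cost_kwh=0.11, latency_ms=40),
    Site("eu-central", power_cost_kwh=0.16, latency_ms=95),
    Site("apac-night", power_cost_kwh=0.06, latency_ms=140),  # dark side of the planet
]
target = choose_site(sites)
print(f"migrate workload to {target.name}")   # apac-night wins on price tonight
```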

But hold on…

As the articles I noted earlier suggest, early cloud computing users have discovered a hitch in the giddy-up: the borders and politics of the world DO matter when it comes to IT legislation.

If law will in fact have such an influence on cloud computing dynamics, it occurs to me that a new cost factor might outshine simple operations when it comes to choosing where to run systems; namely, legality itself. As businesses seek to optimize business processes to deliver the most competitive advantage at the lowest costs, it is quite likely that they will seek out ways to leverage legal loopholes around the world to get around barriers in any one country.

Now, this is just pie-in-the-sky thinking on my part, and there are 1000 holes here, but I think it’s worth going through the exercise of thinking this out. The problem is complicated, as there are different laws that apply to data and to the processing being done on that data (as well as, in some jurisdictions, to the record keeping about both). However, there are technical solutions available today for both data and processing that could allow a company to mix and match the geographies that give them the best legal leverage for the services they wish to offer:

  • Database Sharding/Replication

    Conceptually, the simplest way to keep from violating any one jurisdiction’s data storage or privacy laws is to not put the data in that jurisdiction. This would be hard to do, if not for some really cool database sharding frameworks being released to the community these days. (A rough sketch of the routing idea follows this list.)

    Furthermore, replicate the data in multiple jurisdictions, but use the best-case instance of that data for processing happening in a given jurisdiction. In fact, by replicating a single data exchange into multiple jurisdictions at once, it becomes possible to move VMs from place to place without losing (read-only, at least) access to that data.

  • VMotion/LiveMotion

    From a processing perspective, once you solve legal access to the data from each jurisdiction, you can move your complete processing state from place to place as processing requires, without missing a beat. In fact, with networks getting as fast as they are, transfer times at the heart of the Internet may be almost as fast as on a LAN, and those times are usually measured in the low hundreds of milliseconds.

    So, run your registration process in the USA, your banking steps in Switzerland, and your gambling algorithms in the Bahamas. Or, market your child-focused alternative reality game in the US, but collect personal information exclusively on servers in Madagascar. It may still be technically illegal from a US perspective, but who do they prosecute?
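Here is a rough sketch of that jurisdiction-aware routing idea, with made-up jurisdictions, policies and shard endpoints; a real implementation would obviously need real legal review behind the policy table.

```python
# Rough sketch of jurisdiction-aware shard routing, per the bullets above.
# Jurisdictions, policy rules and shard endpoints are hypothetical placeholders.

SHARDS = {
    "US": "db-us.example.internal",
    "EU": "db-eu.example.internal",
    "CH": "db-ch.example.internal",
}

# Which jurisdictions may hold a given class of data (a made-up policy table).
PLACEMENT_POLICY = {
    "registration": ["US", "EU"],
    "banking":      ["CH"],
    "gameplay":     ["US", "EU", "CH"],
}

def shards_for(record_class):
    """Every shard that may legally store this class of data; write to all of them."""
    return [SHARDS[j] for j in PLACEMENT_POLICY[record_class]]

def read_replica(record_class, processing_jurisdiction):
    """Prefer a replica in the jurisdiction where the VM currently runs."""
    allowed = PLACEMENT_POLICY[record_class]
    if processing_jurisdiction in allowed:
        return SHARDS[processing_jurisdiction]
    return SHARDS[allowed[0]]      # otherwise fall back to any legal copy

print(shards_for("banking"))                  # ['db-ch.example.internal']
print(read_replica("registration", "EU"))     # 'db-eu.example.internal'
```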

Again, I know there are a million roadblocks here, but I also know both the corporate world and underworld have proven themselves determined and ingenious technologists when it comes to these kinds of problems.

As Leslie noted, our legislators must understand the economic impact of a law meant for a physical world on an online reality. As Nick noted, we seem to be treading into that mythical territory marked on maps with the words “Here Be Dragons”, and the dragons are stirring.