Archive

Archive for January, 2008

One Step To Prepare For Cloud Computing

January 29, 2008 Leave a comment

Some of you may be wondering why I am making such a big stink about software architecture on a blog about service level automation (SLAuto). Well, as Todd Biske points out, “the relationships (and potentially collisions) between the worlds of enterprise system management, business process management, web service management, business activity monitoring, and business intelligence” are easier to resolve if the appropriate access to metrics is provided for a software service. For SLAuto, this means the more feedback you can provide from the service, process, data and infrastructure levels of your software architecture, the easier it is to automate service level compliance.

Let’s look at a few examples for each level:

  • Service/Application: From the end user’s perspective, this is what service levels are all about. Key metrics such as transaction rates (how many orders/hour, etc.), response times, error rates, and availability are what the end users of a service (e.g. consumers, business stakeholders, etc.) really care about.
  • Business Process: Business process metrics can warn the SLAuto environment about cross-service issues, business rule violations or other extraordinary conditions in the process cycle that would warrant capacity changes at the BPM or service levels.
  • Data Storage/Management: Primarily, this layer can inform the SLAuto system about storage needs and storage provisioning, which in turn is critical to automated deployment of applications into a dynamic environment.
  • Infrastructure: This is the most common form of metric used to make SLAuto decisions today. Such metrics as CPU utilization, memory utilization and I/O rates are commonly used in both virtualized and non-virtualized automated environments.

As noted, digital measurement of these data points can feed an SLAuto policy engine to trigger capacity adjustment, failure recovery or other applicable actions as necessary to remain within defined service thresholds. While most of the technology required to support SLAuto is available, the truth is that the monitoring/metrics side of things is the most uncharted territory. As an action item, I ask all of you to take Todd’s words of wisdom into account and design not only for functionality, but also for manageability. This will aid you greatly in the quest to build fluid systems that can best take advantage of utility infrastructure today.
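
To make that a little more concrete, here is a minimal sketch in Python of the kind of threshold evaluation an SLAuto policy engine might perform over metrics gathered from these layers. The metric names, thresholds and action names are my own illustrative assumptions, not any particular product’s API:

```python
# A minimal sketch of SLAuto-style policy evaluation. Metric names,
# thresholds and actions are illustrative assumptions only.

THRESHOLDS = {
    "response_time_ms": {"max": 500,  "action": "add_capacity"},
    "error_rate_pct":   {"max": 1.0,  "action": "recover_failed_node"},
    "cpu_utilization":  {"max": 0.85, "action": "add_capacity"},
    "storage_free_gb":  {"min": 50,   "action": "provision_storage"},
}

def evaluate(metrics: dict) -> list[str]:
    """Compare current metrics against service-level thresholds and
    return the corrective actions the automation should take."""
    actions = []
    for name, value in metrics.items():
        rule = THRESHOLDS.get(name)
        if rule is None:
            continue  # no policy defined for this metric
        if "max" in rule and value > rule["max"]:
            actions.append(rule["action"])
        if "min" in rule and value < rule["min"]:
            actions.append(rule["action"])
    return actions

# Example: metrics collected from the service, infrastructure and data layers
sample = {"response_time_ms": 620, "cpu_utilization": 0.72, "storage_free_gb": 30}
print(evaluate(sample))  # ['add_capacity', 'provision_storage']
```

The point is not the specific rules, but that each architectural layer contributes measurements the policy engine can act on without human intervention.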

It’s the labor, baby…

January 28, 2008 2 comments

I’m getting ready to go back to work on Wednesday, so I decided today (while Owen is at school and Mia has Emery) to get caught up on some of the blog chatter out there. First, read Nick Carr’s interview with GRIDToday. Damn it, I wish this were the sentiment he communicated in “The Big Switch”, not the “it’s all going to hell” tone the book actually conveyed.

Second, Google Alerts, as always, is an excellent source, and I found an interesting contrarian viewpoint about cloud computing from Robin Harris (apparently a storage marketing consultant). Robin argues there are two myths that are propelling “cloud computing” as a buzz phrase, but that private data centers will never go away in any real quantity.

Daniel Lemire responds with a short-but-sweet post that points out the main problem with Robin’s thinking: he assumes that hardware is the issue, and ignores the cost of labor required to support that hardware. (Daniel also makes a point about latency being the real issue in making cloud computing work, not bandwidth, but I won’t address that argument here, especially with Cisco’s announcement today.)

The cost of labor, combined with real economies of scale, is the true core of the economics of cloud computing. Take this quote from Nick Carr’s GRIDToday interview:

If you look at the big trends in big-company IT right now, you see this move toward a much more consolidated, networked, virtualized infrastructure; a fairly rapid shift of compressing the number of datacenters you run, the number of computers you run. Ultimately … if you can virtualize your own IT infrastructure and make it much more efficient by consolidating it, at some point it becomes natural to start to think about how you can gain even more advantages and more cost savings by beginning to consolidate across companies rather than just within companies.

Where does labor come into play in that quote? Well, consider “compressing the number of datacenters you run”, and add to that the announcement that the Google datacenter in Lenoir, North Carolina will hire a mere 200 workers (up to four times as many as the announced Microsoft and Yahoo datacenters). This is a datacenter that will handle traffic for millions of people and organizations worldwide. If, as Robin implies, corporations will take advantage of the same clustering, storage and network technologies that the Googles and Microsofts of the world leverage, then certainly the labor required to support those datacenters will go down.

The rub here is that, once corporations experience these new economies of scale, they will begin to look for ways to push the savings as far as possible. Now the “consolidat[ion] across companies rather than just within companies” takes hold, and companies begin to shut down their own datacenters and rely on the compute utility grid. It’s already happening with small business, as Nick, I and many others have pointed out. Check out Don MacAskill’s SmugMug blog if you don’t believe me. Or GigaOM’s coverage of Standout Jobs. It may take decades, as Nick notes, but big business will eventually catch on. (Certainly those startups that turn into big businesses using the cloud will drive some of these economics.)

One more objection to Robin’s post. To argue that “networks are cheap” is a fallacy, he notes that networks still lag behind processors, memory, bus speeds and so on in raw speed. Unfortunately, that misses the point entirely. All that is needed are network speeds at which functions complete in a time that is acceptable for human users and economically viable for system communications; that requirement is independent of the network’s speed relative to other components. For example, my choice of Google Analytics to monitor blog traffic depends solely on my satisfaction with the speed of the conversation. I don’t care how fast Google’s hardware is, and all evidence seems to point to the fact that their individual systems and storage aren’t exceptionally fast at all.

Data propagation and software fluidity

January 24, 2008 Leave a comment

Jon Udell has an interesting post commenting on Jeff Jonas’ explanation of Out-bound Record-level Accountability in Information Sharing Systems. The central thesis of Jeff’s post is that tracking who specifically received a given datum is very expensive yet highly necessary in many applications. The example given is that of a user who wishes to no longer receive email from a site they have an account with, or from any of the other sites that the original site shared that preference with. How does the original site know who to contact? The high cost is a result of the difficulty in tracking to whom data has been forwarded.

Jon replies very simply that a “publish” model, much like blogging, might be the answer. “Data blogging”, coined by fellow blogger Galvin Carr, refers essentially to the problem of syndication, but Udell projects that to a much wider arena of data types. As he notes, there is much evidence out there that “push” models are generally only applicable to edge systems calling “inward”. “Publish and subscribe”-style pull models are far easier to implement when running “outward” from the cloud to edge systems (as well as, generally, within the cloud, a.k.a. event-driven architectures).

There are two valuable results of this approach:

  1. The originating system can require users of data to subscribe with a unique identity, and each “pull” of published data could be tracked (if necessary) to identify who is up to date and who isn’t.
  2. For software fluidity purposes, it further decouples the originating system from its subscribers, meaning both the subscribers and the originating system can be “moved” from physical environment to physical environment with no loss of communication. The worst that could happen is that the originating publisher’s DNS name changes in the course of a move, but redirects and other techniques could mitigate even that issue.

I am commenting on this, of course, largely for the second item. Access to data, services and even edge devices must be very loosely coupled to work in a cloud computing world. This is one great example of how you could architect for that eventuality, in my opinion.
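
As a rough illustration of the first point above, here is a minimal Python sketch of a publisher that requires subscribers to register an identity and records each pull of published data. The Publisher class and its methods are hypothetical, not drawn from Jeff’s or Jon’s posts:

```python
from datetime import datetime, timezone

class Publisher:
    """Minimal pull-based publisher that tracks which subscriber has
    fetched which published records. Purely illustrative."""

    def __init__(self):
        self.records = []       # published data, in publication order
        self.last_pulled = {}   # subscriber_id -> index of last record pulled

    def subscribe(self, subscriber_id: str):
        # Each consumer must register a unique identity before pulling.
        self.last_pulled.setdefault(subscriber_id, 0)

    def publish(self, record: dict):
        self.records.append({**record, "published_at": datetime.now(timezone.utc)})

    def pull(self, subscriber_id: str) -> list[dict]:
        # Return everything the subscriber has not yet seen, and note how
        # far they have read so the publisher knows who is up to date.
        if subscriber_id not in self.last_pulled:
            raise KeyError("unknown subscriber; call subscribe() first")
        start = self.last_pulled[subscriber_id]
        new_records = self.records[start:]
        self.last_pulled[subscriber_id] = len(self.records)
        return new_records

    def laggards(self) -> list[str]:
        # Which subscribers have not yet pulled the latest data?
        return [s for s, idx in self.last_pulled.items() if idx < len(self.records)]

# Example usage
pub = Publisher()
pub.subscribe("site-a")
pub.publish({"user": "jane@example.com", "email_opt_out": True})
pub.pull("site-a")       # site-a is now up to date
print(pub.laggards())    # []
```

Because consumers come to the publisher rather than the publisher pushing to them, answering “who has the latest preference change and who doesn’t” becomes a local lookup instead of an expensive audit of every forward.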

Children of the Net: Why Our Descendants Will Love The Cloud

January 23, 2008 2 comments

Our children–or perhaps our grandchildren–won’t remember a time when there was a PC on every desk, or when you had to go to Fry’s Electronics to buy a shrink-wrapped copy of your favorite game. This, as Nick notes frequently in The Big Switch, is one of the real parallels between what our ancestors went through with electrification and what we have yet to go through with compute utilities. Heck, I already find it hard to remember when I didn’t have access to the World Wide Web, and in what year all of that changed. Also, I’m frankly already taking the availability of services from the cloud for granted.

My Dad used to tell me stories of when he lived in a house in Scotland with only a few lights and no other electrical appliances, no indoor plumbing and no telephone. I can’t imagine living like that, but it was just about 50-60 years ago. Those born in the latter half of the twentieth century (in an industrialized country) are perhaps the first who will live an entire lifetime without ever experiencing a home that lacks multiple sockets in every room. It is almost unimaginable what life was like for our ancestors pre-electrification.

There will likely be both positive and negative consequences that come from any innovation, but the innovator’s descendants won’t remember things any other way. In the end, once basic needs are taken care of, all humankind cares about is lifestyle anyway, so the view of how “good” an era is remains largely driven by how well those needs are taken care of. One of those basic needs is the need to create/learn/adapt, but another is the need for predictability of outcome. This constant battle between the yearning for freedom and the yearning for control is what makes human culture evolve in brilliantly intricate ways.

I for one hold out hope that our descendants will be increasingly satisfied with their lifestyles, which–in the end–is probably what we all want to see happen. Will those lifestyles be better or worse from our perspective as ancestors? Who knows…but it won’t really matter, now, will it?

Of course, one of the biggest challenges to humanity is meeting even the basic needs of its entire population. To date, the species has failed to achieve this–the study of economics is largely targeted at understanding why. Cloud computing could, as Nick suggests, actually make it more difficult for some groups of people to meet their basic needs, but I would argue that this would be counterproductive to the rest of society.

At the core of my argument is the fact that so much of online business is predicated on massive numbers of people being able to afford a given product. Nick argues that life in the newspaper world shows us the future of most creative enterprises; the ease with which the masses can create and find content makes it difficult to sell advertising to support newspapers, so the papers struggle. But if huge numbers of people are out of work, with no one valuing their talents and experience, that will lead to less consumer spending. Less consumer spending will lead to less advertising, which will in turn lead to less income for “the cloud” (i.e. those companies making money from advertising in the cloud). It’s a vicious feedback cycle for online properties/services, and one I think will fail to come to pass.

The alternative is that the best of the talent out there continue to find ways to get paid, while the masses are still encouraged to participate. Newspaper journalists are already finding opportunities online, though perhaps at a slower pace than some would like. I believe that ventures such as funnyordie.com and even YouTube will create economic opportunities for videographers and filmmakers to rise above the noise. Musicians are already experimenting with alternative online promotion and sales tools that will change the way we find, buy and consume music. Yes, the long tail will flourish, but the head of the tail will continue to make bank.

The result of this is simply a shifting of the economic landscape, not a wholesale collapse into a black hole. Yeah, the wealth gap thing is a big deal (see Nick’s book), but I believe that the rich are going to start investing some of that money back into the system when the new distribution mechanisms of the online world mature–and that should create jobs, fund creative talent and create a new world in which those that adapt thrive, and those that don’t struggle.

Did I mention I think the utility computing market is a complex adaptive system?

Evidence of pending doom and imminent salvation…

January 21, 2008 Leave a comment

Two news items that broke just as I went offline for the birth of my daughter provide further evidence of the importance of service level automation and image portability between vendors:

  • Joyent, one of the most ambitious new “capacity on demand” managed hosting services, has experienced a multi-day outage affecting two of their prime storage services. No failover path was available to users of the services, and there is no mention of functionality or services to assist customers with moving–temporarily or permanently–to another vendor’s service. Odds are high that some of these customers have lost access to key data, or are flying without substantial backups to key systems. Any decision to move to a different service (as Twitter will, according to the post) is on the customer’s own dime.

    A prime example of the dangers of vendor lock-in that Simon and I have been warning you about…

  • Oracle has announced its intention to build and sell “Grid 2.0” technology that will target–yes, you heard right–service level automation. Welcome to the SLAuto game, boys. I hope you’re ready to talk standards for image and policy portability, as well as policy platform interoperability. Otherwise, you’re just creating a new DB grid “silo”, and not helping anyone in the long run. Please, feel free to educate me if you think otherwise…

These events show the caution that users of cloud services must employ. Be ready to take on increased integration responsibilities as you deploy more and more elements of your datacenter to the cloud, automate more of the management of those elements, and find the product landscape one in which there (still) is no silver bullet. You may not be writing apps, but you sure as heck will be writing the orchestration that will tie the apps you employ into a cohesive business process ecosystem. You may also find yourself writing backup integration again, just in case you experience “Joyent 2.0”…
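
For what it’s worth, here is a bare-bones sketch of what that do-it-yourself backup integration might look like. The CloudStore interface is hypothetical and simply stands in for whatever storage API each vendor actually exposes:

```python
# A rough sketch of do-it-yourself backup integration between two providers.
# CloudStore is a hypothetical interface; real code would wrap each vendor's
# own storage API behind something like it.

class CloudStore:
    def list_keys(self) -> list[str]: ...
    def get(self, key: str) -> bytes: ...
    def put(self, key: str, data: bytes) -> None: ...

def replicate(primary: CloudStore, secondary: CloudStore) -> int:
    """Copy every object from the primary provider to a secondary one,
    so an outage at the primary doesn't leave you without your data."""
    copied = 0
    for key in primary.list_keys():
        secondary.put(key, primary.get(key))
        copied += 1
    return copied
```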

Off Topic: Introducing Emery Anne Urquhart

January 18, 2008 3 comments

Emery Anne Urquhart
Born 1/16/2008
7lb, 12 oz
19 3/4″
Mom and baby are fine. Dad is scared silly, however…
Categories: Uncategorized

Off Topic: The next two weeks…

January 15, 2008 1 comment

Just a quick note about what I will be doing for the next two weeks, starting tomorrow morning. At 5:30AM sharp tomorrow, my wife and I will arrive at the hospital for the birth of my daughter, my second child. I will post pictures and/or video when they are available. (Also off topic, I know, but I’m just too proud…)

Once “Baby Girl” has arrived, I will be splitting my time between caring for my son, caring for my wife and baby girl, and caring for all three. In other words, the blogging will suffer a bit. Once we get settled in a bit, I’ll start posting again. That may be a few days or a few weeks. Please be patient.

On a vaguely related note, I finally got Feedburner set up for my site, and I was happy to find so many of you were regular subscribers. I hope many of these are mutual subscriptions where I also follow your work, but if you’d like to let me know where and what you post, please post a comment here and I’ll check it out.

In the meantime, for utility computing related topics, stay in touch with Nicholas Carr, Simon Wardley and Anne Zelenka. (Anne’s post on GigaOM is especially good, and one that I seriously wish I wrote myself. She captured much of what I would want to say about the effect of utility computing on the middle class, and placed Nick’s book in exactly the right context.)

Categories: James Urquhart, personal