Archive for February, 2008

Fun with Simon

February 29, 2008

Simon Wardley created a couple of posts this week that make for good smiles. The first is his maturity model for cloud computing:

[Image: Simon Wardley’s maturity model for cloud computing]

This one I agree with. Very funny, but funny because it reflects truth.

The second is a post on open source computing. I completely disagree with the concept that open source can keep up with closed source in terms of innovation (Anne Zelenka makes a great argument here), and that closed source is bad for ducks (see Simon’s post).

However, I do believe that standardization spreads faster with open source than with closed source. For what it’s worth, I would also like to see a major utility computing platform release its technology to open source. (Well, at least the components that are required for portability.) I just wonder why any of them would without pressure from the market.

My equations would reflect the “Schrödinger’s Cat” aspects of closed source products prior to the introduction of accepted standards:

open source == kindness to ducks
closed source == ambivalence towards ducks; could go either way
🙂

FriendFeed

February 29, 2008

I just wanted to let everyone know that my entire time-wasting life–er, online research–can be found at http://friendfeed.com/jamesurquhart. I love this site, but wonder how the heck they are going to make any money. There are no ads or anything.

If you are on FriendFeed, subscribe to my feed. Several other big-name bloggers are also there, which makes it very cool for understanding what they are reading or commenting on.

Categories: Uncategorized

Enterprise Architecture, Business Continuity and Integrating the Cloud

February 28, 2008

(Update: Throughout the original version of this post, I had misspelled Mr. Vambenepe’s name. This is now corrected.)

William Vambenepe, a product architect at Oracle focusing on enterprise management of applications and middleware, pointed me to a blog post by David Linthicum on IntelligentEnterprise that makes the case for why enterprise architects must plan for SaaS. In a very high-level but well-reasoned post, Linthicum highlights why SaaS systems should be considered a part of enterprise architectures, not tangential to them.

As Vambenepe points out, perhaps the most interesting observation from Linthicum is the following:

Third, get in the mindset of SaaS-delivered systems being enterprise applications, knowing they have to be managed as such. In many instances, enterprise architects are in a state of denial when it comes to SaaS, despite the fact that these SaaS-delivered systems are becoming mission-critical. If you don’t believe that, just see what happens if Salesforce.com has an outage.

I don’t want to simply repeat Vambenepe’s excellent analysis, with which I absolutely agree. So let me just add something about SLAuto.

Take a look at Vambenepe’s immediate response:

I very much agree with this view and the resulting requirements for us vendors of IT management tools.

Now add the comments from Microsoft’s Gabriel Morgan that I discussed a couple of weeks ago.

Take for example Microsoft Word. Product Features such as Import/Export, Mail Merge, Rich Editing, HTML support, Charts and Graphs and Templates are the types of features that Customer 1.0 values most in a product. SaaS Products are much different because Customer 2.0 demands it. Not only must a product include traditional product features, it must also include operational features such as Configure Service, Manage Service SLA, Manage Add-On Features, Monitor Service Usage Statistics, Self-Service Incident Resolution as well.

Gabriel’s point boiled down to the following equation:

Service Offering = (Product Features) + (Operational Features)

which I find to be entirely in agreement with Linthicum and Vambenepe.

As I am wont to do, let me push “Operational Features” as far as I think they can go.

In the end, what customers want from any service–software, infrastructure or otherwise–is control over the balance of quality, cost and time-to-market. Quality is measured through specific metrics, typically called service level metrics. Service level agreements (SLAs) are commitments to maintain service level metrics within commonly agreed boundaries and rules. Ultimately, all of these “operational features” are about allowing the end user to either

  1. define the service level metrics and/or their boundaries (e.g. define the SLA), or
  2. define how the system should respond if a metric fails to meet the SLA.

Item “2” is SLAuto.
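
To make those two items concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the names, the metric, the thresholds are mine, not any vendor’s API); it only shows the shape of item 1 (the customer defines the metric and its boundary) and item 2 (the customer defines the automated response when that boundary is violated):

# Minimal SLAuto sketch -- purely illustrative, not any vendor's API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ServiceLevelMetric:
    name: str       # e.g. "response_time_ms"
    value: float    # latest observed measurement

@dataclass
class SLA:
    metric_name: str
    upper_bound: float   # item 1: the boundary the customer defines

def violated(sla: SLA, metric: ServiceLevelMetric) -> bool:
    return metric.name == sla.metric_name and metric.value > sla.upper_bound

# Item 2 (SLAuto): the customer also defines what happens on violation.
def enforce(sla: SLA, metric: ServiceLevelMetric, respond: Callable[[], None]) -> None:
    if violated(sla, metric):
        respond()   # e.g. add capacity, fail over, open an incident

# Example: a 500 ms response-time SLA and an observed 730 ms measurement.
sla = SLA(metric_name="response_time_ms", upper_bound=500.0)
observed = ServiceLevelMetric(name="response_time_ms", value=730.0)
enforce(sla, observed, respond=lambda: print("SLA violated: scale out"))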

I would argue that what you don’t want is a closed-loop SLAuto offering from any of your vendors. In fact, I propose right here, right now, a standard (and, I am sure Simon Wardley would argue, open source) protocol or set of protocols for the following:

  1. Defining service level metrics (probably already exists?)
  2. Defining SLA bounds and rules (may also exist?)
  3. Defining alerts or complex events that indicate that an SLA was violated

Vendors could then use these protocols to build Operational Features that support a distributed SLAuto fabric, where the ultimate control over what to do in severe SLA violations can be controlled and managed outside of any individual provider’s infrastructure, preferably at a site of the customer’s choosing. This “customer advocate” SLAuto system would then coordinate with all of the customer’s other business systems’ individual SLAuto to become the automated enforcer of business continuity. In the end, that is the most fundamental role of IT, whether it is distributed or centralized, in any modern, information driven business.
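
To give a feel for what I mean, here is a rough, purely hypothetical sketch: every field name below is my own invention, not part of any existing standard. It suggests the three kinds of definitions such protocols might carry, plus the sort of customer-side handler a distributed SLAuto fabric would hang off of:

# Hypothetical protocol payloads -- no such standard exists; this is only a
# guess at the shape of the three definitions proposed above.

metric_definition = {
    "metric": "order_throughput",
    "unit": "transactions/minute",
    "measured_by": "provider",
}

sla_definition = {
    "metric": "order_throughput",
    "minimum": 1200,                 # bound agreed by customer and provider
    "evaluation_window": "5m",
}

violation_alert = {
    "metric": "order_throughput",
    "observed": 850,
    "sla_minimum": 1200,
    "provider": "example-saas-vendor",
    "timestamp": "2008-02-28T14:05:00Z",
}

# A customer-controlled SLAuto coordinator subscribes to alerts from every
# provider and decides, outside any single provider's infrastructure, how to
# respond: shift load, invoke a backup provider, or escalate to a human.
def handle_alert(alert):
    if alert["observed"] < alert["sla_minimum"]:
        print("SLA violation from " + alert["provider"] + ": rerouting workload")

handle_alert(violation_alert)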

“Nice, James,” you say. “Very pretty ‘pie-in-the-sky’ stuff, but none of it exists today. So what are we supposed to do now?”

Implement SLAuto internally in your own data centers with your existing systems, that’s what. Integrate SLAuto for SaaS as you come to understand the Operational Feature APIs from your vendors; those vendors, your SLAuto vendor and/or your systems talent can then develop interfaces into your own SLAuto infrastructure.

Evolve towards nirvana; don’t try to reach it by taking tabs of vendor acid.

If you want more advice on how to do all of this, drop me a line (james dot urquhart at cassatt dot com) or comment below.

Jonathan Schwartz hints at a MySQL cloud

February 26, 2008

Robert Scoble interviewed Jonathan Schwartz today using his ultracool Nokia N95/Qik personal broadcasting package. During that interview, Jonathan made an interesting non-announcement. It seems, he notes, a natural fit for a data center expert like Sun to leverage its newly acquired, highly scalable database, MySQL, to build a MySQL cloud service.

I think this would be awesome, not only because it would force Oracle to consider getting into the same market (thus potentially creating a competitive commodity database service market), but also because it would open all kinds of possibilities for add-on capabilities that might not be economically feasible to develop in a traditional enterprise sales model.

Here’s my only suggestion to my former boss, Mr. Schwartz: buy Endeca. Not because they own the e-commerce search for most combination “bricks-and-mortar”/online retailers (they do), but because the technology has been developed in such a way that it can be used as tag-based search for just about any data source. (They don’t present it that way, but I got an in-depth demo, and that is what it is.) What I imagine as a competitive advantage for Sun/MySQL is a cost-per-byte data source with both SQL and tag-based or unstructured querying. Buy Endeca!

OK, soon we will have capacity, storage and databases in the cloud. Who wants to be first in the “System Management as a Service” game?

Categories: cloud computing

Comments on Paul Wallis: Cloud Computing

February 26, 2008

Paul Wallis has an excellent post tying the history of prior utility/cloud/grid computing attempts to the current hype. I’ve been trying to comment for a while, but haven’t been able to get comment submission to work until today. This is a reworking of that response, in case it doesn’t get through moderation for some reason.

Let me just say that, contrary to how Paul’s description of my position may sound to others, I am not blindly “pro-cloud”. In fact, I firmly recommend that existing enterprise data centers and applications think hard before going “outside” to a commercial capacity-on-demand provider. In most cases, it would actually be better for such enterprises to convert their own infrastructure to a utility computing model first, while the necessary technologies and businesses mature.

I also define the cloud broadly, to include SaaS, PaaS (e.g. force.com) and HaaS (e.g. Amazon, Mosso, etc.). SaaS is clearly in play today, HaaS is being experimented with, but PaaS may be the most interesting facet of the cloud in the long term.

That being said, Paul provides very valuable information in this post, and I for one very much appreciate the work put into it. It is very true that bandwidth is something to be nervous about (especially when Amazon charges as much as it does for bandwidth), and I have had some interesting discussions (such as the one Paul references) about how data integration will happen over the cloud. Finally, cloud lock-in is indeed something to be concerned about; as in, what happens if my first-choice provider sucks? Can I move my applications, data, etc. to someone else cheaply enough that it doesn’t put me out of business? Simon Wardley has a good post on that today.

Update: Er, two seconds and I could have confirmed the spelling of Paul’s last name. Sorry, Paul!

HPC in the Cloud

February 25, 2008

Check out Blue Collar Computing. High Performance Computing is one area that should really benefit from utility computing models. Imagine gaining access to the world’s most powerful computers (with reasonable assistance from experts on programming and deploying on those systems) at a price made affordable by paying only your “share” of resource usage costs.

Cool to see someone try this business model out for real.

Data Goes SLAuto at Oracle

February 21, 2008

Thanks to Steve Jones, check out this presentation from David Chappell, Oracle VP and CTO of SOA, titled “Next-Generation Grid Enabled SOA”. (A shorter written article can be found at SOA Magazine’s site.) Chappell outlines the work Oracle is doing to turn the traditional model of application scalability on its head: instead of keeping database resources fixed and scaling the applications/services horizontally, scale the database (using a cool complex adaptive systems approach) and alleviate much of the need to scale apps and services (except for CPU-bound services). For someone like me, that’s mind-blowing.

Add to that the fact that the data management functions are relatively homogeneous (though the infrastructure may not be) and aware of their resource utilization, and you can see why Oracle is claiming a certain amount of hardware-metric-based SLAuto.

(Hardware-metric-based SLAuto relies on measurements of hardware components, such as CPU utilization, memory utilization and so on. Software-metric-based SLAuto usually uses business metrics, such as transaction rates, active accounts, etc., to make scaling decisions.)
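
As a rough illustration of that distinction (the thresholds and names below are hypothetical and have nothing to do with Oracle’s actual mechanism), a hardware-metric trigger keys off infrastructure measurements while a software-metric trigger keys off business measurements:

# Illustrative only: two SLAuto scaling triggers distinguished by what they measure.

def hardware_metric_trigger(cpu_utilization, memory_utilization):
    # Scale out when infrastructure measurements cross thresholds.
    return cpu_utilization > 0.80 or memory_utilization > 0.90

def software_metric_trigger(transactions_per_sec, active_accounts):
    # Scale out when business measurements cross thresholds.
    return transactions_per_sec > 5000 or active_accounts > 100000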

The catch? Well, everything must be written to use the “Data Grid” if it’s to take advantage of these capabilities. Legacy applications need not apply. (Could be the deal killer for David’s “Not your MOM’s Bus” concept.)

It seems to me that if Oracle wants this approach to catch on, it should open source a reference implementation as soon as possible. I’m not an expert in the most recent data processing approaches, but it would seem to me that Map-Reduce approaches would be complementary to the Data Grid. However, Hadoop implementations would generally only be integrated with a data grid if there were an open source alternative. Otherwise, MySQL will continue to be the first choice. Open source would also speed up integration between the data grid and infrastructure automation such as Cassatt and its competitors.

Dave hints at a URL for more info on the Oracle site, but I can’t find it. If anyone tracks it down, I would appreciate any help I can get.