Archive

Archive for June, 2007

God in the Grid…

Utility computing must really be hitting the mainstream when it’s seen as the future of Christian fellowship.

By the way, I’ve been meaning to preorder Nicholas Carr’s book as well. He has a web site that explains the book’s premise. If you want to begin dreaming about a world where computing costs very little, and automation, virtual societies and electronic economies abound, there seems to be plenty of inspiration here.

I just hope it’s as good as it sounds…


Clarification on James McGovern’s recent comments

Man, it’s been a while since I posted. Sorry to my loyal fans, but my last post should explain where my head has been the past few weeks.

I wanted to take a moment to clear up some misunderstandings in James McGovern’s comment on my recent third-hand coverage of the Forrester Research 2007 IT Forum conference (via Ken Oestreich, who actually attended). I have huge respect for James’ pragmatic, get-to-the-point style, and he is definitely a leading edge player in both the Enterprise Architecture and Security spaces. However, James mistakenly credits the comment, “There are no IT projects anymore, only business projects”, to “Ken Oestreich of Forrester”. Ken is not an employee of Forrester Research (he is employed by Cassatt as Director of Product Marketing), and he was just reporting what he heard. I am not aware of the name of the Forrester analyst who made that statement.

Having said that, clearly this quote can be interpreted in a variety of ways. James seems to have read the quote as “IT is dead” or some such thing, whereas I think the quote is intended to show that IT projects are now founded on business need, not on the whims of IT professionals eager to introduce new technologies. As I read James’ blog, this seems right up his alley. However, I would agree that the statement is more hype than substance, hence the lack of a specific interpretation.

One other thing James says bugs me a bit. Perhaps I misinterpreted, but the following statement seems way off:

“Maybe he should acknowledge that the vast majority of CIOs aren’t even focusing on data centers as this has been commotitized (sic) and therefore pushed several layers down in the organization. Likewise, infrastructure stuff simply doesn’t allow an enterprise to either innovate nor sustain competitive advantage where as software development still has the potential for both.”

Really? I think if James were to ask his CIO what his number one expense line item was, it wouldn’t be research and development. Wonder why utility computing is a key initiative for CIOs in the coming years? Why is “Green Data Center” all over the press these days? I can tell you: operations (labor, facilities, infrastructure and utilities) accounts for the vast majority of enterprise IT budgets. Even those that “outsource” (domestically or otherwise) to Managed Hosting Providers are paying a princely sum for the service.

Add to that the “siloed” nature of most application deployments in data centers, and the incredible barrier to market agility that causes. A giant “bank of our continent” recently issued an RFP for utility computing (in the “turn our IT into a utility” sense), but cost savings wasn’t even the most important factor. The bank can only grow through (international) acquisition, and each acquisition has been burdensome largely due to the service level losses caused by integrating each IT organization and its infrastructure. Agility with guaranteed service levels is the bank’s number one operations priority.

That’s not to say that software isn’t also a priority. It is for all the reasons that James alludes to. It certainly is the quickest (only?) route to new revenue streams, and it can also lead to significant cost savings if done right. Hell, if we can get the cost of operations down, it will free up more funds for this important endeavor! But to say infrastructure has been pushed way down the list just doesn’t jibe with the pain we are finding in corporate IT today.

Let me make it clear that I have great respect for James McGovern, and I read his blog every day. I hope that we can continue a conversation about both the role of infrastructure innovation in the future of IT, and the relationship between application architectures and their deployment architectures in the data center.


Want to save gas? Stop leaving your car idling in the garage!

History doesn’t repeat, but it sometimes rhymes…

I remember the seventies, when gas prices skyrocketed (the first time) and there were suddenly all these tiny cars on the road. One member of my mom’s congregation even showed up one Sunday with this crazy little car that ran on a motorcycle engine. It was made by some new car company called Honda, and it was one of the first years that Civics were sold in America.

As a nation, we clamored to change our lifestyles–ditching heavy steel muscle cars for sporty (or utilitarian) little “economy cars”. Our approach to solving the energy crisis was to increase the efficiency with which our cars consumed energy. Note, however, it was not (by a long shot) to reduce the amount of driving we did.

Now flash forward to today, and take a look at the current energy crisis in America’s (and the world’s) data centers. Electricity is expensive, and growing more so (except for those lucky enough to have subsidised power). Add to that concerns about global climate change, and you’ve got company after company scrambling to be “green”.

Again, however, note that the target is not to do less computing than we did before. In fact, if anything, the demand is increasing for information technology and business automation. I believe pushing the automation envelope is going to take more computing power than we know.

So, like the automobile vendors of the seventies, today’s systems vendors are working hard to release “energy efficient” models of servers, laptops and desktops. They do this ostensibly to give us all a good feeling about what good stewards of our tiny planet we are, but in reality it’s all about saving money. None of this changes our worst behaviour, however: our tendency to leave as much capacity running as possible at all times, “just in case”.

Of course, the server that uses the least amount of power is the one that is turned off. That’s where Service Level Automation comes into the picture. As noted in the past, one of the key aspects of a good Service Level Automation platform is the capability of shutting down anything that isn’t serving an immediate business need. Traditionally, I’ve always talked about this in relation to scale-out applications–your SLA platform should shut down servers not needed to meet current demand in such applications. Now, however, I want to talk about three use cases where SLA enhances the day to day power consumption of all applications in the data center.
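To make the scale-out case concrete, here is a minimal sketch of the kind of decision such a platform makes; the `Server` class and `servers_to_power_off` function are purely illustrative assumptions, not any vendor’s API.

```python
# Hypothetical sketch: shut down whatever current demand doesn't need.
# Capacity units (e.g., requests/sec) and names are illustrative.

from dataclasses import dataclass

@dataclass
class Server:
    name: str
    capacity: int        # demand units this server can absorb
    powered_on: bool = True

def servers_to_power_off(servers, current_demand):
    """Return servers not needed to meet current demand."""
    running = [s for s in servers if s.powered_on]
    # Keep the largest servers first, so the most machines become surplus.
    running.sort(key=lambda s: s.capacity, reverse=True)
    kept_capacity, keep = 0, set()
    for s in running:
        if kept_capacity < current_demand:
            keep.add(s.name)
            kept_capacity += s.capacity
    return [s for s in running if s.name not in keep]

fleet = [Server("web1", 100), Server("web2", 100), Server("web3", 100)]
surplus = servers_to_power_off(fleet, current_demand=150)
print([s.name for s in surplus])  # one server is surplus at this demand
```

The point is simply that the shutdown decision is driven by measured demand, not by a human remembering to flip a switch.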

  1. Job-specific management. OK, think of every server you’ve touched in the last six months. How many of them served a short-term purpose (e.g. getting a software release out the door), but then sat unused for days at a stretch? I remember going days or even weeks between placing builds on staging servers in my previous life. Service Level Automation should be able to detect unused software payloads, and shut down that equipment until needed again by that or any other payload.
  2. Time-specific management. Almost every data center (especially development and test labs) has systems that are hit hard during some portion of the day, then remain idle for the remainder. SLA should provide the capability to not only schedule system shutdowns, but to actually look at the status of systems to determine which are the best candidates for shutdown. In other words, go beyond automating “blind” scheduled events to delivering intelligent management of system power cycles.
  3. Power emergency management. One of the great benefits of living in the San Francisco Bay area is the incredible ingenuity of our power utility in encouraging companies to conserve power and “be good neighbors” in a power emergency. PG&E offers rebates to companies willing to join Demand Response programs, where they agree to voluntarily reduce electric consumption to help the utility avoid the infamous “rolling blackout”.
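The second use case, in particular, is easy to sketch: pick shutdown candidates from observed idleness rather than from a calendar. The utilization samples and threshold below are illustrative assumptions.

```python
# Hypothetical sketch of time-specific management: flag hosts that are
# actually idle right now, not just ones a schedule says should be idle.

def shutdown_candidates(utilization_history, threshold=0.05, window=6):
    """Return hosts whose last `window` CPU-utilization samples all
    fall below `threshold` (fractions, 0.0-1.0)."""
    candidates = []
    for host, samples in utilization_history.items():
        recent = samples[-window:]
        if len(recent) == window and all(u < threshold for u in recent):
            candidates.append(host)
    return candidates

history = {
    "build01": [0.80, 0.02, 0.01, 0.00, 0.01, 0.02, 0.01],  # idle since the nightly build
    "db01":    [0.40, 0.35, 0.50, 0.42, 0.38, 0.45, 0.41],  # busy all day
}
print(shutdown_candidates(history))
```

A real platform would layer policy on top (don’t touch production databases, respect maintenance windows), but the core idea is measurement-driven shutdown.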

The Silicon Valley Leadership Group has recently been hosting a series of events around “Energy Efficient Data Centers”, one of which targeted how SLA could deliver on all three of the above. The response was tremendous–so much so that my employer has asked me to join a team building a simple targeted solution to these problems based on our already innovative SLA platform. I can’t say much more right now, but I certainly will communicate all that I can as soon as I can.

By the way, the first lesson I’ve learned from all of this is that power-measurement capabilities vary widely from data center to data center. Some companies can’t tell you anything more than their monthly bill, while others can show you power consumption over time at the individual server level. Part of the issue is that there are no “simple” power metering solutions at the server level…power controllers (e.g. iLO2) are just now starting to give management systems access to the power measurement tools on Intel and AMD boards. MPDUs have some good features, but they vary widely from vendor to vendor.
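To show why server-level metering matters, here is a sketch of the simple rollup it would enable: turning per-server watt samples into an energy view most shops can’t produce today. The readings and the sampling interval are made-up assumptions, not data from any real controller.

```python
# Illustrative only: once controllers expose watt readings, per-server
# samples can be rolled up into per-server and total energy figures.

def kwh_from_samples(samples_watts, interval_minutes=5):
    """Estimate energy (kWh) from evenly spaced power samples (watts)."""
    hours = interval_minutes / 60.0
    return sum(w * hours for w in samples_watts) / 1000.0

readings = {
    "app01": [220.0, 225.0, 80.0],    # dropped to idle power mid-window
    "app02": [310.0, 305.0, 300.0],   # steady load
}
per_server = {host: kwh_from_samples(w) for host, w in readings.items()}
print({h: round(kwh, 4) for h, kwh in per_server.items()})
```

With numbers like these per server, you can finally see which payloads are worth powering off, instead of guessing from the monthly bill.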

You can’t control what you can’t measure, so get on board, system vendors! Give us the tools we need to measure and manage those beautifully efficient next-generation servers. Heck, give us the tools we need to measure and manage all those older systems we have out there now. That would be more green by far than just squeezing another milliamp out of a MIP.