
Archive for October, 2007

Is your software ready for utility computing?

October 15, 2007 2 comments

I’ve been seeing more thoughts on the effect of utility computing on software architectures lately, and one very well stated argument comes from Alistair Croll, Vice President of Product Management and co-founder of Coradiant, a performance tool company. Though clearly self-serving, his message is simple: if you are going to pay by the cycle–or even just share cycles between applications–you’d better make sure your software takes as few cycles as possible to do its job well.

This is one of the unforeseen effects of “paying for what you use”, and I have to say it’s an effect that should scare the heck out of most enterprise IT departments. Although I would argue part of that fear should come from the exposure of lousy coding in most custom applications, the worst part is the lack of control most organizations will have over the lousy coding in the packaged applications they purchased and installed. Suddenly, algorithms matter again in all phases of software development, not just computing intensive steps.

The worst offenders here will probably be the user interface components: Java Swing, AJAX and even browser applications themselves. To the extent that these are hosted from centralized computing resources (and even most desktops fall into this category in some visionaries’ eyes), the incredible amount of constant cycling, polling and unnecessary redrawing will be painfully obvious in the next 10 years or so.
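The polling overhead can be sketched with a toy simulation (all class and method names here are mine for illustration, not from Swing or any AJAX toolkit): a client that polls burns a billable request on every tick, while an event-driven client only does work when something actually changed.

```python
class PollingClient:
    """Checks the server every tick, whether or not anything changed."""
    def __init__(self, server):
        self.server = server
        self.requests = 0

    def tick(self):
        self.requests += 1              # one round trip per tick, always
        return self.server.current_value()


class EventDrivenClient:
    """Does work only when the server pushes a change."""
    def __init__(self):
        self.requests = 0
        self.value = None

    def on_change(self, new_value):     # invoked by the server on real changes
        self.requests += 1
        self.value = new_value


class Server:
    def __init__(self):
        self.value = 0
        self.listeners = []

    def current_value(self):
        return self.value

    def set_value(self, v):
        self.value = v
        for listener in self.listeners:
            listener.on_change(v)


server = Server()
poller = PollingClient(server)
pusher = EventDrivenClient()
server.listeners.append(pusher)

# 100 ticks, but the value only changes twice.
for tick in range(100):
    if tick in (10, 60):
        server.set_value(tick)
    poller.tick()

print(poller.requests)  # 100 -- billable work on every tick
print(pusher.requests)  # 2   -- billable work only on real changes
```

In a pay-by-the-cycle world, those 98 wasted round trips show up directly on the invoice.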

I have always been a strong proponent for not over-engineering applications. If you can meet the business’s ongoing service levels with an architecture that costs “just enough” to implement, you were golden in my book. However, utility computing changes the mathematics here significantly, and that key phrase of “meet the business’s ongoing service levels” comes much more into play. Ongoing service levels now include optimizations to the cost of executing the software itself; something that could be masked in an underutilized, siloed-stack world.

The performance/optimization guys must be loving this, because they now have a product that should see an immediate increase in demand. If you are building a new business application today, you had better be:

  1. Building for a service-based, highly distributed, utility infrastructure world, and
  2. Making sure your software is as cheap to run as possible.

Number 2 above itself implies a few key things. Your software had better be:

  • as standards based as possible–making it possible for any computing provider to successfully deploy, integrate and monitor your application;
  • as simple to install, migrate and upgrade remotely as possible–to allow for cheap deployment into a competitive computing market;
  • as efficient to execute as possible–each function should take as few cycles as possible to do its job.
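On that last point, a hypothetical illustration of the same job done two ways (function names are mine): on a platform that bills by the cycle, the quadratic version costs real money for identical results.

```python
def count_frequencies_naive(words):
    """O(n^2): rescans the whole list once per word."""
    return {w: words.count(w) for w in words}


def count_frequencies_lean(words):
    """O(n): a single pass with a dictionary."""
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return counts


words = ["soa", "utility", "soa", "cycles", "utility", "soa"]

# Same answer, very different cycle counts as the input grows.
assert count_frequencies_naive(words) == count_frequencies_lean(words)
print(count_frequencies_lean(words))  # {'soa': 3, 'utility': 2, 'cycles': 1}
```

In a siloed, underutilized stack nobody notices the difference; metered by the cycle, it becomes a line item.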

The cost dynamics will be interesting to note, especially their effects on the agile, SOA, and ITIL movements. I will keep careful tabs on this, and will share my ongoing thoughts in future posts.


Links – 10/4/2007

A Classic Introduction to SOA (DanNorth.net): Thanks to Jack van Hoof, I was led to this brilliant article on modelling SOAs in business terms. (Check out the PDF, the graphics and layout make it an even more fun read.) Rather than spend a bunch of words “me too”-ing Jack and Dan, let me just say that this is exactly the technique I have always used to design service oriented architectures, ever since my days in the mid-90s designing early service oriented architectures at Forte Software.

Classic examples of where this led to better design were the frequent arguments that I would have with customers and partners about where to put the “hire” method in a distributed architecture. Most of the “object oriented architects” I worked with would immediately jump to the conclusion that the “hire” method should be on the Employee class. However, if you sat down and modelled the hiring process, the employee never hired himself or herself. What would happen is the hiring manager would send the information about the employee to the HR office, who would then receive more information, create a new employee file and declare the new employment to the tax authorities. Thus, the “hire” method needed to be on the HR service, with the call coming from the application (or service) initiating employment (i.e. the hiring manager in software form), passing the employee object (or a representation of that object) for processing.

Without exception, that approach led to better architectures than trying to map every method that had any relation to a class of objects directly on the class itself.
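The two designs from the hiring example can be sketched minimally (class and method names are mine, not from any particular framework):

```python
# The "object oriented" instinct: put hire() on the Employee class.
class EmployeeOO:
    def __init__(self, name):
        self.name = name
        self.hired = False

    def hire(self):               # but no employee ever hires himself
        self.hired = True


# Modelling the real process: the HR service performs the hire, and the
# hiring manager's application passes it a representation of the employee.
class Employee:
    def __init__(self, name):
        self.name = name


class HRService:
    def __init__(self):
        self.files = {}           # the HR office's employee files

    def hire(self, employee, requested_by):
        # Receive the information, create a new employee file, and (in
        # real life) declare the new employment to the tax authorities.
        self.files[employee.name] = {"hired_by": requested_by}
        return True


hr = HRService()
new_hire = Employee("Ada")
hr.hire(new_hire, requested_by="hiring-manager-app")
print("Ada" in hr.files)  # True -- the service, not the employee, owns the process
```

The second shape keeps the process logic with the party who actually performs it, which is exactly why it maps so cleanly onto a service boundary.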

Twilight of the CIO (RoughType: Nicholas Carr): Man, Nick is in rare form now that he is back from his blogging hiatus. His thesis here is that, with the advent of technologies that can be more easily managed outside of IT, and with IT departments doing less R&D and more shepherding outsourced and SaaS infrastructure, the need for the CIO role is diminishing–which I react to with mixed feelings.

On the one hand, there is no doubt that small and mid-sized non-high-tech businesses are going to have less need for a voice representing technical infrastructure issues on the executive board. There will still need to be managers (as the first comment to Nick’s post alludes to), but they will be a lot like the facilities guy in most businesses today–simply shepherding the services hired by the business.

(Perhaps the “centralized/decentralized pendulum” is swinging again, with decentralization this time actually resulting in business systems residing outside of IT entirely?)

On the other hand, I’m not seeing the “simplified” nature of technology happening yet in most mid- to large-sized businesses. Cassatt sells utility computing platform software–basically an operating system for your data center. Resources are pooled and distributed as needed to meet the business’s needs (as defined in SLAs assigned to software). We make it easy to cut tremendous amounts of waste, rigidity and manual labor out of the basic data centers. CIOs love this vision, and drive technical changes in the customers we work with. However, implementations still take a long time. Why? Because most existing infrastructures are about 10 years behind the desired state of the art the IT department is trying to achieve. Also because it’s not just a technical change, it’s a cultural change. (By the way, so is SaaS.)

I fear that the lack of technical leadership on the executive team will actually hinder adoption of these critical new technologies and other technologies only being thought of now, or in the future. What I think ultimately needs to happen is that high level technical critical thinking skills need to be taught to the rest of the line-of-business executives, so that interesting new technologies will drive interesting new business opportunities in the years to come.

This goes to Marc Andreessen’s recent post on how to prepare for a great career. Don’t rest on your technical skills, or your business skills, but work hard to develop both. (Marc is another blogger who has been on a streak lately…read his career series and learn from someone who knows a little about success.)