
Archive for the ‘datacenter migration’ Category

7 Businesses to Start in 2008

January 8, 2008

Rather than offer a list of predictions for 2008, I thought I’d have some fun suggesting some businesses that could make you money in 2008 or the few years following.

  1. SaaS/Enterprise data conversion practice: All those existing enterprise apps will need to have their data migrated to that trendy new SaaS tool, and should anyone actually decide they hate their first vendor, they’ll be spending that money again to convert to the next choice. Perhaps they’ll even get fed up and return to traditional enterprise software. Easy money.
  2. Enterprise Integration as a Service: No matter how much functionality a single SaaS vendor provides, it will never be enough. Integration will always be necessary, but where and how will it be delivered? Go for the gold with a browser-based integration option. Just figure out how to do it better/cheaper/faster than force.com, Microsoft, Google, Amazon, etc.
  3. SaaS meter consolidation service: Given the problem stated in 2 above, who wants 5 or 6 bills where it’s impossible to trace the cost of a transaction across vendors? Provide a single billing service that consolidates the charges from your stable of vendors and adds analytics to break down where costs and revenues come from (a rough sketch follows this list). Then get ready to defend yourself against the data ownership walls put up by those same vendors (see 4 below).
  4. SaaS/HaaS Customer litigation practice: Given the example of Scoble’s experience with Facebook, there are clearly a lot of sticky legal issues to be worked out about “who owns what”. Ride that gravy train with litigation expertise in data ownership, vendor contractual obligations and the role of code as law.
  5. SaaS industry (or SaaS customer) data ownership rights lobbyist: Given 4 above, each industry player is going to want their voice in Congress to protect/promote their interests. Drive the next set of legislation that screws up online equality and individual rights.
  6. Sys Admin retraining specialist: All those sys admins who will be out of work thanks to cloud computing are going to need to be retrained to monitor SLAs across external vendor properties, and to get good at waiting on hold for customer service representatives.
  7. Handset recycling services: The rate at which “specialized” hardware will evolve will raise the rate of obsolescence to a new high. Somebody is going to make a killing from all those barely used precious metals, silicon and LCD screens going to waste. Why not you?
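
To make idea 3 above a little more concrete, here is a rough Python sketch of the core of a meter consolidation service. Every name in it (the vendors, the usage fields, the rates) is invented for illustration; a real service would pull usage records from each vendor's billing feed or API and would need far sturdier plumbing.

    from collections import defaultdict

    # Hypothetical usage records pulled from several SaaS/HaaS vendors' billing feeds.
    # Vendor names, metrics and rates are illustrative only.
    usage_records = [
        {"vendor": "crm-saas",   "business_unit": "sales",   "metric": "api_calls", "qty": 120000, "unit_cost": 0.0001},
        {"vendor": "haas-cloud", "business_unit": "sales",   "metric": "cpu_hours", "qty": 340,    "unit_cost": 0.10},
        {"vendor": "haas-cloud", "business_unit": "support", "metric": "cpu_hours", "qty": 90,     "unit_cost": 0.10},
        {"vendor": "mail-saas",  "business_unit": "support", "metric": "mailboxes", "qty": 250,    "unit_cost": 2.00},
    ]

    def consolidate(records):
        """Roll vendor charges up into one bill, broken down by business unit and by vendor."""
        by_unit = defaultdict(float)
        by_vendor = defaultdict(float)
        for r in records:
            charge = r["qty"] * r["unit_cost"]
            by_unit[r["business_unit"]] += charge
            by_vendor[r["vendor"]] += charge
        return dict(by_unit), dict(by_vendor)

    by_unit, by_vendor = consolidate(usage_records)
    print("Charges by business unit:", by_unit)
    print("Charges by vendor:      ", by_vendor)

The hard part is not the arithmetic; it is getting the vendors to hand over usage data in a form you can line up, which is exactly where the ownership fight in idea 4 begins.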

Beating the Utility Computing Lockdown

November 5, 2007

If you haven’t seen it yet, there is an interesting little commotion going on in the utility computing blogosphere. Robert X. Cringely and Nick Carr, with the help of Ashlee Vance at The Register, are having fun picking apart the announcement that Google is contributing to the MySQL open source project. Cringely started the fun with a conspiracy theory that I think holds some weight, though, as the others point out, perhaps not as literally as he states it. In my opinion, Cringely, Carr and Vance accurately raise the question: “will you get locked into your choice of utility computing capacity vendor, whether you like it or not?”

I’ve discussed my concerns about vendor lock-in before, but I think it’s becoming increasingly clear that the early capacity vendors are out to lock you in to their solution as quickly and completely as possible. And I’m not just talking about pure server capacity (aka “HaaS“) vendors, such as Amazon or the bevy of managed hosting providers that have announced “utility computing” solutions lately. I’m talking about SaaS vendors, such as Salesforce.com, and PaaS vendors such as Ning.

Why is this a problem? I mean, after all, these companies are putting tremendous amounts of money into building the software and datacenter platforms necessary to deliver the utility computing vision. The problem, quite frankly, is that while lock-in can increase the profitability of the service provider, it is not always as beneficial for the customer. I’m not one to necessarily push the mantra “everything should be commodity”, but I do believe strongly that no one vendor will get it entirely right, and no one customer will always choose the right vendor for them the first time out.

With regard to vendor lock-in and “openness”, Ning is an interesting case in point; I noticed with interest last week Marc Andreessen’s announcements regarding Ning and the Open Social API. First, let me go on the record as saying that Open Social is a very cool integration standard. A killer app is going to come out of social networking platforms, and Open Social will let the lucky innovator spread the cheer across all participating networks and network platforms. That said, however, note that Marc announced nothing about sharing data across platforms. In social networking, the data is what keeps you on the platform, not the executables.

(Maybe I’m an old fogey now, but I think the reason I’ve never latched on to Facebook or MySpace is that I started with LinkedIn many years ago, and though most of my contacts there are professional, quite a few of my personal contacts are also captured there. Why start over somewhere else?)

In the HaaS world, software payloads (including required data) are the most valuable components to the consumer of capacity. Most HaaS vendors do little (or nothing) to ease the effort it takes to provision a server with the appropriate OS, your applications, data, any utilities or tools you want available, security software and so on, and that effort is exactly what makes switching expensive. There is little incentive for the HaaS world to ease transitions between vendors until a critical mass is reached and the pressure to commoditize breaks the lock-in barrier. All of the “savings” touted by these vendors will be limited to what they can save you over hosting it yourself in your existing environment.
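
To put that provisioning burden in concrete terms, here is a quick sketch of the per-server checklist that ends up being rebuilt for every capacity vendor. The step names and vendor labels below are made up for illustration; the point is that each step is typically scripted against one vendor's tooling and rarely carries over to the next.

    # Illustrative only: these steps stand in for the vendor-specific work that
    # creates the switching cost described above.
    PROVISIONING_STEPS = [
        "install base OS from the vendor's image/template format",
        "apply patches and kernel/network settings",
        "install application binaries and dependencies",
        "restore or replicate application data",
        "install monitoring, backup and security agents",
        "wire up vendor-specific storage mounts and firewall rules",
    ]

    def provision_server(vendor, hostname):
        """Walk the checklist for one server at one vendor (a stand-in for real scripts)."""
        for i, step in enumerate(PROVISIONING_STEPS, start=1):
            print("[%s/%s] step %d: %s" % (vendor, hostname, i, step))

    provision_server("haas-vendor-a", "app01")
    # Changing vendors means repeating, and usually rewriting, every step:
    provision_server("haas-vendor-b", "app01")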

SaaS also has data portability issues, which have been well documented elsewhere. Most companies that have purchased ERP and CRM services online can see this eventuality coming, though most if not all have yet to feel the pain.

Where am I going with all this? I want to reiterate my call for both server- and data-level portability standards in the utility computing world, with the goal of sparing customers the pain that lock-in can create. I want the expense of choosing a capacity or application vendor to be the time it takes to research the options, compare competitors and sign up for the service. If I have to completely re-provision my IT environment to change vendors, that becomes the overwhelming cost, and I will never be able to move.

Truth is, open standards don’t guarantee that users will flee one environment for another at the drop of a hat. Look at SQL as an example. When I worked for Forte Software many years ago, we had the ability to swap back-end RDBMS vendors without changing code, long before JDBC or Hibernate. The funny thing is, in six years of working with that product, not one customer changed databases just because the other guy was cheaper. I grant you that there were other costs to consider, but I really believe that the best vendors with the best service at the right price will keep loyal customers whether or not they implement lock-in features.
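
The same idea lives on in any layer that hides the vendor-specific driver behind a common interface. As an illustration (this is not how Forte did it), here is a Python sketch using SQLAlchemy, in which only the connection URL changes when the back-end RDBMS changes; the URLs and credentials are placeholders, and truly vendor-specific SQL would still need care.

    from sqlalchemy import create_engine, text

    # Only the URL differs between back ends; the query code below does not.
    MYSQL_URL    = "mysql+pymysql://app:secret@db-host/sales"
    POSTGRES_URL = "postgresql+psycopg2://app:secret@db-host/sales"

    def open_orders(db_url):
        """Run the same portable SQL against whichever vendor the URL points at."""
        engine = create_engine(db_url)
        with engine.connect() as conn:
            result = conn.execute(
                text("SELECT id, total FROM orders WHERE status = 'open'")
            )
            return result.fetchall()

    # Switching databases becomes a configuration change, not a code change:
    # orders = open_orders(MYSQL_URL)
    # orders = open_orders(POSTGRES_URL)

And yet, even with that freedom on the table, customers mostly stay put when the service is good, which is exactly the point about lock-in features being unnecessary.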

For HaaS needs, there are alternatives to going out of house for cheap capacity. Most notably, virtualization and automation with the right platforms could let you get those 10 cents/CPU-hour rates with the datacenter you already own. The secret is to use capital equipment more effectively and efficiently while reducing the operations expenses required to keep that equipment running. In other words, if you worry about how you will maintain control over your own data and applications in a HaaS/SaaS world, turn your own infrastructure into a SaaS.
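
A quick back-of-the-envelope check on that rate. The figures below (server price, lifetime, ops overhead, utilization) are assumptions picked purely to illustrate the math, not measured numbers, but they show that utilization is what moves an owned datacenter toward, or away from, the 10 cents/CPU-hour mark.

    def in_house_cpu_hour_cost(server_cost, years, cores, ops_overhead, utilization):
        """Effective cost per *used* CPU-hour for owned hardware (illustrative model)."""
        total_cost = server_cost * (1 + ops_overhead)     # capital plus lifetime ops spend
        total_core_hours = cores * years * 365 * 24       # hours the box is powered on
        used_core_hours = total_core_hours * utilization  # hours doing real work
        return total_cost / used_core_hours

    # Assumed: a $6,000 8-core server over 3 years, with lifetime operations costs
    # (power, cooling, admin) roughly equal to the purchase price.
    for util in (0.10, 0.40, 0.80):
        rate = in_house_cpu_hour_cost(6000, 3, 8, 1.0, util)
        print("utilization %.0f%%: $%.2f per CPU-hour" % (util * 100, rate))

Under these assumptions, 10% utilization puts the owned gear at more than five times the utility rate, while 80% utilization drops it under a dime per CPU-hour, which is the whole argument for virtualization and automation in the datacenter you already have.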

That’s not to say I never see a value for Amazon, Google, et al. Rather, I think the market should approach their offerings with caution, making sure the time and expense it takes to build a business technology platform doesn’t have to be spent again when a capacity partner fails to deliver. Once portability technologies are common and broadly supported, the time will come to rapidly shut down “private” corporate datacenters and move capacity to the computing “grid”. More on this process later.

Greasing the skids…Simplifying Datacenter Migration

January 25, 2007

Here are a couple of fun buzzwords that have created all kinds of interesting headaches in IT of late: “rationalization” and “consolidation”. I’m not talking about servers here…I’ve covered that somewhat earlier. Instead, I’m talking about datacenter rationalization and consolidation.

This is a huge trend amongst Fortune 500 companies. In my work, I keep hearing VPs of Operations/Infrastructure and the like saying things like “we are consolidating from [some large number of] datacenters to [some small number, usually 2 or 3] datacenters.” In the course of these migrations, they are rationalizing the need for each application that they must migrate from one datacenter to another.

The cost of these migrations can be staggering. “Fork-lifting” servers from one site to another incurs costs in packaging, shipping and replacing damaged goods (hardware in this case). Copying an installation from one datacenter to another involves the same issues: packaging (how to capture the application at the source site and unpack it at the destination site), shipping (costs around bandwidth use or physical shipping to move the application package between sites) and repair of damaged goods (fixing apps that “break” in the new infrastructure).

What if something could “grease the skids” of these moves–reduce the cost and pain of migrating code from one datacenter to another?

One approach is to package your software payloads as images that are portable between hardware, network and storage implementations. Now the cost of packaging the application is taken care of, the cost of shipping the package stays the same or gets cheaper, and the odds of the software failing to run are greatly reduced because it is already prepared for the different conditions of the new infrastructure.
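
One way to picture that separation is to keep the payload description generic and bind the site-specific pieces (network, storage) only at deploy time. The sketch below is a hypothetical data model, not any particular imaging product; the names and fields are invented.

    from dataclasses import dataclass

    @dataclass
    class GoldenImage:
        """Hardware- and site-agnostic payload: OS, application, reference to its data."""
        name: str
        os: str
        packages: list
        data_snapshot: str   # a replicated snapshot reference, not a site-specific path

    @dataclass
    class SiteBinding:
        """The things that legitimately differ between datacenters."""
        site: str
        vlan: str
        storage_pool: str
        ip_range: str

    def deploy(image, binding):
        """Combine one generic image with one site's bindings into a boot specification."""
        return {
            "image": image.name,
            "os": image.os,
            "packages": image.packages,
            "data": image.data_snapshot,
            "network": {"vlan": binding.vlan, "ip_range": binding.ip_range},
            "storage": binding.storage_pool,
            "site": binding.site,
        }

    billing_app = GoldenImage("billing-v12", "linux", ["billing-core", "jdk"], "snap-2007-01-20")
    chicago = SiteBinding("chicago-dc", "vlan-210", "san-pool-3", "10.20.0.0/16")
    dallas  = SiteBinding("dallas-dc",  "vlan-115", "san-pool-1", "10.45.0.0/16")

    # The same image deploys at either site; only the binding changes.
    print(deploy(billing_app, chicago))
    print(deploy(billing_app, dallas))

Moving the application between datacenters then costs only what it takes to ship the image and supply a new binding.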

Admittedly, the solution here is more related to decoupling software from hardware than Service Level Automation, per se. But a good Service Level Automation environment will act as an enabler for this kind of imaging, as it too has to solve the problem of creating generic “golden” images that can boot on a variety of hardware using a variety of network and storage configurations. In fact, I have run into several customers in the last couple of months that have a) recognized this advantage and b) rushed to get a POC going to prove it out.

Of course, if you can easily move software images between datacenters, simpler disaster recovery can’t be far behind…