Archive for October, 2008

Why I Think CohesiveFT’s VPN-Cubed Matters

October 28, 2008

You may have seen some news about CohesiveFT’s new product today–in large part thanks to the excellent online marketing push they made in the days preceding the announcement. (I had a great conversation with Patrick Kerpan, their CTO.) Normally, I would get a little suspicious about how big a deal such an announcement really is, but I have to say this one may be for real. And so do others, like Krishnan Subramanian of CloudAve.

CohesiveFT’s VPN-Cubed is targeting what I call “the last great frontier of the cloud”: networking. Specifically, it is focusing on a key problem–data security and control–in a unique way. The idea is that VPN-Cubed gives you software that allows you to create a VPN of sorts that is under your personal control, regardless of where the endpoints reside, on or off the cloud. Think of it as creating a private cloud network, capable of tying systems together across a plethora of cloud providers, as well as your own network.

The use case architecture is quite simple.


Diagram courtesy of CohesiveFT

VPN-Cubed Manager VMs are run in the network infrastructure that you wish to add to your cloud VPN. The manager then acts as a VPN gateway for the other VMs in that network, which can then communicate with other systems on the VPN via virtual NICs assigned to the VPN. I’ll stop there, because networking is not my thing, but I will say it is important to note that this is a portable VPN infrastructure, which you can run on any compatible cloud, and CohesiveFT’s business is to create images that will run on as many clouds as possible.

Patrick made a point of using the word “control” a lot in our conversation. I think this is where VPN-Cubed is a game changer. It is one of the first products I’ve seen that targets isolating your stuff in someone else’s cloud, protecting access and encryption in a way that leaves you in command–assuming it works as advertised…and I have no reason to suspect otherwise.

Now, will this work with PaaS? No. SaaS? No. But if you are managing your applications in the cloud, even a hybrid cloud, and are concerned about network security, VPN-Cubed is worth a look.

What are the negatives here? Well, first, I think VPN is a feature of a larger cloud networking story. VPN-Cubed is the first and only product of its kind in the market, but I have a feeling other network vendors looking at this problem will address it in a more comprehensive solution.

Still, CohesiveFT has something here: it’s simple, it is entirely under your control, and it serves a big immediate need. I think we’ll see a lot more about this product as word gets out.

Even Microsoft is Cautious About the Legal State of the Cloud, and More

October 27, 2008

Tucked into a backwater paragraph of this interesting interview with Microsoft corporate VP Amitabh Srivastava is a telling note about the prioritization and pacing of Azure’s rollout across the various data centers Microsoft owns worldwide:

“Also, for now, Azure services will be running in a single Microsoft data center (the Quincy, Wash. facility). Sometime next year, Microsoft will expand that to other U.S. data centers and eventually move overseas, though that brings with it its own set of geopolitical issues that Srivastava said that the company would just as soon wait to tackle.”

No kidding. Let’s not even get into the unique legal challenges that Microsoft faces in the EU (perhaps especially because they are proposing a Windows-only cloud offering?). Just figuring out how to lay out the technical and business policies around data storage and code execution will be a thrill for the be-all, end-all PaaS offering that is Azure.

(On a side note, perhaps it presents a unique opportunity for regulation-aware infrastructure?)

There was one positive note in this interview, however. Apparently Microsoft has non-.NET code running internally on Azure, and will offer those services sometime next year. Furthermore, services must meet a template today, but template-independent services are currently on the roadmap. Perhaps a move from PaaS to IaaS is also in store?

Categories: Uncategorized

Microsoft chooses the Azure PaaS to the Clouds

October 27, 2008

The Microsoft PDC2008 keynote presentation just concluded, and the team in Redmond announced Azure, a very full-featured cloud PaaS that allows almost the entire .NET stack to run in Microsoft’s data centers, or on-premises at your organization. (The keynote will be available on-demand on the Microsoft PDC site.)

I find myself impressed, underwhelmed and, in fact, a little disappointed. Here’s how that breaks down:

Impressed

  • This is clearly the most full-featured PaaS out there: service frameworks, a service bus, identity, database services (both relational and unstructured), and full-featured IDE integration. No one else is doing this much–not even Google.

  • I love the focus on hybrid implementations (both on-premises and “in the cloud”). Software plus Services can really pay off here, as you can see in the demonstrations given in the keynote.

  • The identity stuff is a key differentiator. Not your Live account, but whatever federated identity you are using.

Underwhelmed

  • They used an opportunity to announce a revolutionary change to Microsoft’s technology and business to demonstrate how all the things people have already been doing in .NET can be shoehorned into the cloud. Product recalls? Really?

  • It started to sound like they would dig deep into architecture and radical new opportunities, but in the end they just showed off an awful lot of “gluing” existing products together. *Yawn*

Disappointed

  • It’s PaaS. There is no Amazon-killer, no opportunity for the masses to leverage Microsoft data centers, no ability to deploy “raw” Windows applications into the cloud. Just a tool to force adoption of full-scale .NET development and Microsoft products. Good for Microsoft, but will it win any converts?

  • I wanted more from Ray. I wanted a peek into a future that I never considered; an understanding of where it was that Microsoft’s DNA was going to advance the science of the cloud, rather than just provide Microsoft’s spin on it. With the possible exception of identity, I’m not sure I saw any of that.

So, a good announcement overall, but pretty much within the bounds of expectations, perhaps even falling short in a couple of places. I can’t wait to see how the market reacts to all of this.

By the way, Azure is only open to PDC2008 participants at first. The floodgates will slowly be opened over the next several months–in fact, no upper bound was given.

Categories: Uncategorized

Is Amazon in Danger of Becoming the Walmart of the Cloud?

October 25, 2008

Update: Serious misspelling of Walmart throughout the post initially. If you are going to lean an argument heavily on the controversial actions of any entity, spell their name right. Mea culpa. Thanks to Thorsten von Eicken for the heads up.

Also, check out Thorsten’s comment below. Perhaps all is not as bleak as I paint it here for established partners…I’m not entirely convinced this is true for the smaller independent projects, however.


I grew up in the great state of Iowa. After attending college in St. Paul, Minnesota, I returned to my home state where I worked as a computer support technician for Cornell College, a small liberal arts college in Mount Vernon, Iowa. It was a great gig, with plenty of funny stories. Ask me over drinks sometime.

While I was in Mount Vernon, a great controversy was brewing–well, nationwide, really–amongst the rural towns and farm villages struggling to survive. You see, the tradition of the family farm was being devastated, and local downtowns were disappearing. Amidst this traumatic upheaval appeared a great beast, threatening to suck what little life was left out of small-town retail businesses.

The threat, in one word, was Walmart.

Walmart is, and was, a brilliant company, and their success in retail is astounding. In a Walmart, one can find almost any household item one needs under a single roof, in many cases including groceries and other basic staples. Their purchasing power drives prices so low that there is almost no way they can be undercut. If you have a Walmart in your area, you might find it the most logical place to go for just about anything you need for your home.

That, though, was the problem in rural America. If a Walmart showed up in your area, all the local household goods stores, clothing stores, electronics stores and so on were instantly the higher-priced, lower-selection option. Mom and Pop just couldn’t compete, and downtown businesses disappeared almost overnight. The great lifestyle that rural Americans led with such pride was an innocent bystander to the pursuit of volume discounts.

Many of the farm towns in Iowa were on the lookout then, circa 1990, for any sign that Walmart might be moving in. (They still are, I guess.) When a store was proposed just outside of Cedar Rapids, on the road to Mount Vernon, all heck broke loose. There was strong lobbying on both sides, and businesses went on a media campaign to paint Walmart as a community killer. The local business community remained in conflict and turmoil for years on end while the store’s location and development were negotiated.

(The concern about Walmart stores in the countryside is controversial. I will concede that not everyone objects to their building stores in rural areas. However, all of the retailers I knew in Mount Vernon did.)

If I remember correctly, Walmart backed off, but it’s been a long time. (Even now, they haven’t given up entirely.)

While I admire Amazon and the Amazon Web Services team immensely, I worry that their quest to be the ultimate cloud computing provider might force them into a role on the Internet similar to the one Walmart played in rural America. As they pursue the drive to bring more and better functionality to those that buy their capacity, the one-time book retailer finds itself adding more and more features, expanding its coverage farther and farther afield from core storage, network and compute capacity–pushing into the market territory of entrepreneurs who seized the opportunity to earn an income off the AWS community.

This week, Amazon may have crossed an invisible line.

With the announcement that they are adding not just a monitoring API, not just a monitoring console, but an actual interactive management user interface, with load balancing and automated scaling services, Amazon is for the first time creeping into the territory held firm by the partners that have benefited from Amazon’s amazing story. The Sun is expanding into the path of its satellites, so to speak.

The list of the potentially endangered includes innovative little projects like ElasticFox, Ylastic and Firefox S3, as well as major cloud players such as RightScale, Hyperic and EUCALYPTUS. These guys cut their teeth on Amazon’s platform, and have built decent businesses/projects serving the AWS community.

Not that they will all go away, mind you. RightScale and Hyperic, for example, support multiple clouds, and can even provide their services across disparate clouds. EUCALYPTUS was designed with multiple cloud simulations in mind. Furthermore, exactly what Amazon will and won’t do for these erstwhile partners remains unclear. It’s possible that this may work out well for everyone involved. Not likely, in my opinion, but possible.

Sure, these small shops can stay in business, but they now have to watch Amazon with a wary eye (if they weren’t already doing that). There is no doubt that their market has been penetrated, and they have to be concerned about Amazon doing to them what Microsoft did to Netscape.

Or Walmart did to rural America.

Amazon Enhances “The Proto-Cloud”

October 23, 2008

Big news today, as you’ve probably already seen. Amazon has announced a series of steps to greatly enhance the “production” nature of its already leading edge cloud computing services, including (quoted directly from Jeff Barr’s post on the AWS blog):

  • Linux on Amazon EC2 is now in full production. The beta label is gone.
  • There’s now an SLA (Service Level Agreement) for EC2.
  • Microsoft Windows is now available in beta form on EC2.
  • Microsoft SQL Server is now available in beta form on EC2.
  • We plan to release an interactive AWS management console.
  • We plan to release new load balancing, automatic scaling, and cloud monitoring services.

There is some great coverage of the announcement already in the blog-o-sphere, so I won’t repeat the basics here. Suffice it to say:

  • Removing the beta label removes a barrier to S3/EC2 adoption for the most conservative of organizations.
  • The SLA is interestingly structured: it allows for pockets of outages while promoting global uptime. Make no mistake, though, some automation is required to make sure your systems find the working Amazon infrastructure when specific Availability Zones fail (see the sketch after this list).
  • Oh, wait, they took care of that as well…along with automatic scaling and load balancing.
  • Microsoft is quickly becoming a first-class player in AWS, which removes yet another barrier for M$FT-happy organizations.
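
To illustrate the kind of automation I mean, here is a minimal sketch in Python using the boto library. The AMI ID and zone list are placeholders, credentials are assumed to come from the standard environment variables, and this is just one way to hunt for working infrastructure, not a definitive recipe:

```python
import boto
from boto.exception import EC2ResponseError

# Placeholder AMI ID and zone list -- substitute your own.
AMI_ID = "ami-12345678"
ZONES = ["us-east-1a", "us-east-1b", "us-east-1c"]

def launch_with_failover(conn, ami_id, zones):
    """Try each Availability Zone in turn until a launch succeeds."""
    for zone in zones:
        try:
            reservation = conn.run_instances(ami_id, placement=zone)
            return reservation.instances[0]
        except EC2ResponseError:
            continue  # zone down or at capacity; try the next one
    raise RuntimeError("no Availability Zone could satisfy the request")

conn = boto.connect_ec2()  # reads AWS keys from the environment
instance = launch_with_failover(conn, AMI_ID, ZONES)
print("launched %s in %s" % (instance.id, instance.placement))
```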

Instead, let me focus in this post on how all of this enhances Amazon’s status as the “reference platform” for infrastructure as a service (IaaS). In another post, I want to express my concern that Amazon runs the danger of becoming the “Walmart” of cloud computing.

First, why is it that Amazon is leading the way so aggressively in terms of feature sets and service offerings for cloud computing? Why does it seem that every other cloud provider is perpetually catching up to the services Amazon offers at any given time?

The answer in all cases is because Amazon has become the default standard for IaaS feature definition–this despite having no user interface of their own (besides command line and REST), and using “special” Linux images (the core Amazon Machine Images) that don’t provide root access, etc. The reason for the success in setting the standard here is simple: from the beginning, Amazon has focused on prioritizing feature delivery based on barriers to adoption of AWS, rather than on building the very best of any given feature.

Here’s how I see it:

  • In the beginning, there was storage and network access. Enter S3.
  • Then there were virtual servers to do computational tasks. Enter EC2, but with only one server size.
  • Then there were significant complaints that the server size wasn’t big enough to handle real-world tasks. Enter additional server types (e.g. “Large”) and associated pricing.
  • Then there was the need for “queryable” data storage. Enter SimpleDB.
  • Somewhere in the preceding time frame, the need for messaging services was identified as a barrier. Enter Amazon Simple Queue Service.
  • Now people were beginning to do serious tasks with EC2/S3/etc., so the issues of geographic placement of data and workloads became more of a concern. (This placement was both for geographic fail over, and to address regulatory concerns.) Enter Availability Zones.
  • Soon after that, delivering content and data between the zones became a serious concern (especially with all of the web start-ups leveraging EC2/S3/etc.). Enter the announced AWS Content Delivery Service.
  • Throw in there various partnership announcements, including support for MySQL and Oracle.

By this point, hundreds of companies had “production” applications or jobs running on Amazon infrastructure, and it became time to decide how serious this was. In my not-so-humble opinion, the floundering economy, its effects on the Amazon retail business, and the predictions that cloud computing could benefit from a weakened economy all fed into the decision that it was time to remove the training wheels and leave “beta” status for good. Add an official SLA, remove the “beta” label, and “BAM!“, you suddenly have a new “production” business to offset the retail side of the house.

Given that everyone else was playing catchup to these features as they came out (mostly because competitors didn’t realize what they needed to do next, as they didn’t have the customer base to draw from), it is not surprising that Amazon now looks like they are miles ahead of any competitor when it comes to number of customers and (for cloud computing services) probably revenue.

How do you keep the competitors playing catchup? Add more features. How do you select which features to address next? Check with the customer base to see what their biggest concerns are. This time, the low hanging fruit was the management interface, monitoring, and automation. Oh, and that little Windows platform-thingy.

Now, I find it curious that they’ve pre-announced the monitoring and management stuff today. Amazon isn’t really in the habit of announcing a feature before they go private-beta. However, I think there is some concern that they were becoming the “command-line lover’s cloud”, and had to show some interest in competing with the likes of VirtualCenter in the mind’s eye of system administrators. So, to undercut some perceived competitive advantages from folks like GoGrid and Slicehost, they tell their prospects and customers “just give us a second here and we will do you right”.

I think the AWS team has been brilliant, both in terms of marketing and in terms of technology planning and development. They remain the dominant team, in my opinion, though there are certainly plenty of viable alternatives out there that you should not be shy about using, both in conjunction with and in place of Amazon. Jeff Barr, Werner Vogels and others have proven that a business model that so many other IT organizations failed at miserably could be done extremely well. I just hope they don’t get too far ahead of themselves…as I’ll discuss separately.

Google AppEngine to Support Java and JavaScript–Soon?

October 21, 2008

Thanks to Luis Sala, I was alerted tonight to one of the really big rumors working its way through the “cloud-o-sphere”. Apparently, it was officially stated that Google AppEngine will be supporting Java, and (via Rhino) JavaScript at a recent Google event in India, with an official launch coming as soon as this week (that last part is pure rumor, however). I haven’t seen any hints of an announcement party as of yet, but it seems the Google staff are already talking about it, and feeding the rumor mill.

This is big for many, many reasons, not the least of which is the challenge that Google must face in supporting a language that is so challenged at running in a modular fashion. I will be extremely curious to see if the announcement includes OSGi (or, god help us, JSR-277) as a way to mitigate the difficulties in “mashing up” different applications, services and libraries in the same VM. Alternatively, like the Python implementation of AppEngine, it could just be that a restrictive set of Google libraries will be required, which in turn dictate which versions of core common libraries are used in any given application. The former would allow for a wider variety of applications, but the latter is far more likely, in my opinion.

Will any old Java-compatible library not considered “common” be importable, I wonder? How will they achieve the same balance of control and flexibility they achieved with Python? Somehow, I don’t find myself concerned that the end product will have any serious bugs, just that there will be hidden restrictions that most Java developers will find annoying at first.

At the very least, I think we can anticipate that the Java architecture on AppEngine will closely align with the Python architecture, meaning all data access will go through BigTable, and so on. So the same pros and cons will apply to AppEngine whether you use Java, JavaScript or Python. That still leaves an amazingly large market for Google to grab in the PaaS space.
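
For a concrete sense of what that constraint means, here is a minimal sketch of the existing Python datastore API that a Java offering would presumably have to mirror. The model and property names are my own invention, and the code only runs inside the AppEngine SDK:

```python
from google.appengine.ext import db

class GuestbookEntry(db.Model):
    """A trivial datastore model; every read and write below hits BigTable."""
    author = db.StringProperty()
    content = db.TextProperty()
    posted_at = db.DateTimeProperty(auto_now_add=True)

# Writes go through the datastore API -- there is no relational
# database to fall back on.
entry = GuestbookEntry(author="jane", content="Hello, cloud!")
entry.put()

# Queries are limited to what the datastore's indexes support.
recent = GuestbookEntry.all().order("-posted_at").fetch(10)
for e in recent:
    print("%s: %s" % (e.author, e.content))
```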

Hey, Google, if you do have a launch party, I would certainly love an invitation…

Categories: cloud computing

The Significance of the Cloud Proxy

October 19, 2008

There is a dirty little secret about the current spate of cloud computing storage offerings out there today. Yeah, they are a powerful alternative to buying your own disks and filers or SANs, but to use them–pretty much any of them–you need some development skillz. Not “deep and ugly” systems programming skills, mind you, but certainly these basics:

  • Ability to read a REST API specification
  • Ability to build a URL to meet said specification
  • In almost all cases, ability to write a script or program that hides building said URL from the humans using the storage service (a sketch follows this list)
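
To make that last bullet concrete, here is a minimal sketch in Python of what “building said URL” involves for a typical REST storage API: in this case, S3’s query-string authentication, which yields a signed, expiring GET URL. The bucket, key and credentials are placeholders:

```python
import base64
import hmac
import time
from hashlib import sha1
from urllib.parse import quote

# Placeholder credentials -- substitute your own AWS keys.
ACCESS_KEY = "AKIAEXAMPLEEXAMPLE"
SECRET_KEY = b"example-secret-key"

def signed_s3_url(bucket, key, expires_in=3600):
    """Build a pre-signed, expiring GET URL for an S3 object."""
    expires = int(time.time()) + expires_in
    # S3's query-string authentication signs exactly this string:
    string_to_sign = "GET\n\n\n%d\n/%s/%s" % (expires, bucket, key)
    digest = hmac.new(SECRET_KEY, string_to_sign.encode("utf-8"), sha1).digest()
    signature = quote(base64.b64encode(digest), safe="")
    return ("https://%s.s3.amazonaws.com/%s"
            "?AWSAccessKeyId=%s&Expires=%d&Signature=%s"
            % (bucket, key, ACCESS_KEY, expires, signature))

print(signed_s3_url("my-bucket", "backups/data.tar.gz"))
```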

This is fine if you are hiding the use of a cloud storage service behind the facade of some application or other. However, what if you want to make “infinite” storage available to the masses? How do you let your everyday Windows or Mac user simply connect their system to a cloud storage drive?

The easiest way for desktop OS users to access traditional shared storage today is by mounting a shared network drive, through whatever mechanism is provided by the OS’s basic file system interface. Once the drive is mounted, it behaves just like any other network or local drive, with the same commands to navigate, augment and utilize the file system. Why couldn’t getting access to storage in the cloud be just that easy?

Enter a new offering from Nirvanix, announced a couple of weeks ago. Nirvanix CloudNAS is basically a simple program that you install on the “pizza box” of your choice. However, what that software does is huge, in my opinion. Basically, it provides a simple proxy to the Nirvanix Storage Delivery Network, their brand name for their cloud computing storage service. This proxy offers three incredibly common and appropriate standard interfaces for desktop users to utilize: CIFS, NFS and FTP.

So now, with one of these pizza boxes installed inside your firewall, with the appropriate secure connections (I would hope) to the Nirvanix cloud through said firewall, any user can locate the “network drive”, mount it in their local file system, and start using it from ANY application that already recognizes the OS file system–which is anything that reads and/or writes files. No programming, no “learn this new interface”, just a shared drive mapped to your “N:” drive (or some such thing). The sketch below shows just how ordinary the result looks.
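
To underline how transparent that is, here is a trivial sketch: once the share is mounted, cloud storage is just ordinary file I/O. The mount point is hypothetical:

```python
import shutil

# Hypothetical mount point for the CloudNAS-backed share;
# on Windows this might be a mapped "N:" drive instead.
CLOUD_DRIVE = "/mnt/cloudnas/backups"

# Ordinary file operations -- the application neither knows nor cares
# that the bytes land in Nirvanix's Storage Delivery Network.
with open(CLOUD_DRIVE + "/note.txt", "w") as f:
    f.write("This file lives in the cloud.\n")

shutil.copy("quarterly-report.xls", CLOUD_DRIVE)
```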

This is a concept near and dear to my heart, as a similar approach towards content repository access is offered from my employer, Alfresco. Say you have 1000 MS Office users out there, and you want to carefully track certain documents which they are maintaining:

  • You could give them some sort of plug-in to install into each and every application, but that would cost you–especially if you have dozens of different applicable applications.

  • You could give the users a new web application that they would use to “upload” documents to the repository outside of the existing application interfaces, but that would probably lead to much confusion and misuse.

  • You could use Alfresco’s CIFS interface to mount a shared drive on each user’s desktop, with an agreed upon folder structure for the documents being edited, and simple instructions to maintain the document on the “Alfresco” drive.

In other words, Alfresco and Nirvanix are using CIFS, NFS and FTP (as well as WebDAV in Alfresco’s case) to “instantly” integrate any file system aware application with their respective offerings.

I loved this concept when Alfresco explained it to me, and I may love it even more in Nirvanix’s case. With one simple announcement, they’ve completely differentiated themselves from Amazon S3, in that this is now a data center friendly (not just a developer friendly) cloud storage service. Almost any user in the enterprise can quickly and easily be connected to the cloud, and near infinite storage is at their fingertips. Best of all, the software download is free.

(Truth be told, the reality is probably less sexy than the vision. Latency will certainly be a concern, as will the availability of Nirvanix services over the Internet. While I am positive about the vision, I would sure as heck do my due diligence before implementing a project that relied on CloudNAS.)

There are a lot of smart people thinking about cloud computing these days. I expect to see this “proxy” concept copied by several other vendors. I could see a Google AppEngine “proxy” that hosts the development bits and automated “push” of final bits to the Google cloud. I could see a “hybrid” cloud management proxy, perhaps from someone like RightScale or Cassatt or one of the new Cloud OS offerings, that manages application and service provisioning both intra- and extra-enterprise.

These are the innovations that I think are most exciting these days. Not a new API, or a new “do it yourself” network service, but integrations into the “traditional” IT technologies in ways that are as transparent as possible to end users and system administrators alike. Good luck to Nirvanix. I think they have something here.

Update: InformationWeek has an excellent article covering five “good deals” for the use of cloud storage, and three “risky propositions”. CloudNAS is listed as one of the good deals.

Categories: cloud computing, storage