Archive for the ‘social production’ Category

Is the Future of Global Services "Work From Home"?

September 12, 2008 Leave a comment

Software consulting is a heck of a fun gig. However, one of the downsides to this…well…lifestyle, really, is that the big money jobs almost always require a willingness to travel–a lot. There is good reason for this; consultants are expected to be deep experts on specific technologies or processes, and the market for each of those specifics is limited in any one city. However, nation-wide there is plenty of business in most mature markets.

I always loved the job of consulting, but the lifestyle beat me up pretty bad. Truth be told, I probably wouldn’t be married with two lovely kids today if I had stayed on the road. I’m just not good at maintaining distance relationships, and I had to get off the road to meet and spend time with the perfect woman before she would agree to marry me. (OK, enough of that schmaltz.)

Something intriguing occurred to me while researching cloud vendors for Alfresco, however. What if the “network centric” nature of the cloud actually creates an opportunity to change the lifestyle of software consulting? What if consultants didn’t have to travel for every billable hour, but could do a significant portion–if not all–of their work from a local office, or even from home?

First, think about the possibility. How should, for instance, vendor services be handled when the software is delivered in the cloud?

  • If most of the work of the consultant is assisting in planning and reviews, does every engagement need to be face to face, even if neither the hardware nor the network is owned by the client?
  • For longer term engagements, given the collaboration tools that are now (and will soon be showing up) on the Web, do teams really need to sit in the same building to be effective?
  • If the cost of travel (air and lodging) can be eliminated from the overall cost of using vendor services, would clients be more likely to use the service or less?

I honestly don’t know the answers to these questions. But I think the requirements for consulting services are significantly different in the cloud, especially when it comes to what you can do for your client when and from where. I’d be interested in what others think about that.

I do know that there are certain services that will always be face-to-face: workshop facilitation, for instance, or certain kinds of project reviews. However, open source has taught us a lot about how “network organized” teams can work, and I think more and more consulting will look like open source contribution and less like the “on-site guru” model. Then, maybe…just maybe…I can be a big time consultant and still tuck my kids into bed every night…

Yahoo goes Social with PaaS Offering

Well, no time to really expound on this, but I thought it was important to highlight: Yahoo! announced a PaaS offering at Web 2.0, and it is yet another interesting twist on a theme. The best overview I found is a video of Yahoo! CTO Ari Balogh’s keynote at Web 2.0.

What sets Yahoo!’s offering apart (at least in theory–it isn’t all delivered yet) is the focus on turning all of Yahoo’s properties, services and content into:

  1. An open, API-based, mash-up-ready smorgasbord of development opportunity, complete with a development environment and optional hosting in their infrastructure.
  2. A completely interconnected social network that differentiates itself by being a feature, not a destination. This, for me, is a wise move on Yahoo’s part, as no one else is willing to say their network is simply a part of the overall user experience of a destination, rather than a destination that users must consciously choose to navigate to in order to use its advantages.

I think Yahoo! is looking at an interesting play, though you have to wonder how they will steal developer mind-share from Microsoft and Google–that is, unless they become either Microsoft or Google…

More in the next few days when I have time. Till then, stop by my main page to check out what I am reading on a day-to-day basis, with some commentary. You can comment yourself on my page at FriendFeed.

Greg Linden on the Cloud

Greg Linden, of Geeking with Greg fame, was interviewed on Mix about his work in search personalization, recommendation engines and cloud computing. Most of the interview is only sort of interesting, but what really perked my ears up was Greg’s observation that anyone scaling a software environment to thousands or tens of thousands of servers will likely continue to run their own data centers, if only because they will want to tweak the hardware to meet their specific needs.

Initially, I thought of this as just another example of a class of data center that will not be quickly (if ever) moved to a third party capacity vendor. Based on examples like Kevin Burton’s fine-tuning of Spinn3r’s infrastructure using Solid State Drives (SSDs) instead of RAID and traditional disks, it even seems like there would be many such applications. Ta da! It is proven that there will always be private data centers!

Yet, the more I think about it, I wonder if I wouldn’t pay Google’s staff to run my Map/Reduce infrastructure, even if it used tens of thousands of servers. I mean, where is the economic boundary between when it is cheaper to purchase your computing from clouds that already have your needed expertise versus hiring staff with specialized skills to meet those same needs?
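The economic boundary question above can be made concrete with back-of-the-envelope arithmetic. Here is a toy break-even sketch; every cost figure in it is an invented placeholder for illustration, not a real quote from any vendor, and the cost model itself (linear hardware and power, a minimum staffing floor) is my own simplification:

```python
# Toy break-even arithmetic: run your own Map/Reduce cluster vs. rent one.
# All numbers are invented placeholders, not real prices.

def annual_self_hosted_cost(servers, server_cost=3000, amortize_years=3,
                            power_per_server=600, admins_per_1000=3,
                            admin_salary=120000):
    """Yearly cost of owning and staffing your own cluster."""
    hardware = servers * server_cost / amortize_years  # amortized purchase
    power = servers * power_per_server                 # power and cooling
    admins = max(2, servers / 1000 * admins_per_1000)  # staffing has a floor
    return hardware + power + admins * admin_salary

def annual_cloud_cost(servers, hourly_rate=0.30, hours=24 * 365):
    """Yearly cost of renting equivalent capacity by the hour."""
    return servers * hourly_rate * hours

# With these placeholder numbers, the fixed staffing floor makes the cloud
# cheaper for small clusters, while self-hosting wins at scale.
for n in (100, 1000, 10000):
    cheaper = ("cloud" if annual_cloud_cost(n) < annual_self_hosted_cost(n)
               else "self-hosted")
    print(n, cheaper)
```

The interesting part is that the answer flips as the cluster grows, which is exactly why "tens of thousands of servers" is where Greg draws his line.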

Alternatively, is this kind of thing a business opportunity for a “boutique” cloud vendor? “Come to Bob’s MapReduce Heaven. We’ll keep your Hadoop systems running for $99.95, or my name isn’t Bob Smith!”

I’ll just leave it at that. I’m tired tonight, and coherence has left the building.

The Social Enterprise Opportunity

March 19, 2008 2 comments

I want to begin today with a quick shout-out to my fellow bloggers at Data Center Knowledge. In a recent post, they identified me as one of the bloggers they follow for cloud and utility computing, and I’m honored to be included among such a strong list of bloggers. (Rich Miller, who posted the list, is no slouch himself.) Update: I violated the cardinal rule of Internet social networking: assuming a given name applies to one person. Rich Miller from Data Center Knowledge is not the same Rich Miller that writes Telematique. My apologies to both.

One of those bloggers is Phil Wainwright, whose Software as Services blog is one of my regular reads. He is the most aggressive, forward thinker in the SaaS space, and he very often sees opportunity that most of us miss. (Phil’s blog is also a great way to stay on top of the companies and technologies that specifically support the SaaS market.)

Phil recently wrote an interesting post about SaaS and Web 2.0 concepts, titled “Enter the socialprise”, in which he points out that the very nature of an “enterprise” is changing thanks to the Internet and cloud computing concepts. He notes that loyalty between individuals is replacing corporate loyalty, and that social networking on the Internet is creating a new work economy for individual knowledge workers.

He then goes on to challenge enterprise computing models:

But enterprise computing is still designed for the old, stovepipe model in which every transaction took place within the same firm. There’s no connection with the social automation that’s happening between individuals. Many enterprises even resist talking about social networking. And even when an application vendor adds some kind of social networking features, there’s always the suspicion that they’re just painting social lipstick on a stovepipe pig.

This yawning chasm is an opportunity for a new class of applications to emerge that can harness the social networks between individuals and make them relevant to the enterprise. Or perhaps reinvent a new kind of enterprise, better suited to the low-friction reality of the connected Web. Enter the socialprise.

The example he gives of a company leveraging this is InsideView, which is creating a very cool sales intelligence application that integrates with major SaaS CRM vendor products to aggregate information from a variety of online sources into a single prospect activity dashboard. This is an incredibly cool example of how rich data about individuals within and across firms can be used at an enterprise level.

Another similar product that struck me was JobScience, which is one of the companies whose blog is in the Data Center Knowledge list referenced above. JobScience is using Salesforce.com to create a rich social intelligence engine for customers. Their product, aptly called Genius, is an excellent example of what they are able to do. Read the post for all the features, but my favorite is:

The Genius Tracker. Not only does the tracker pop up to tell me an email recipient has just opened my email, or is visiting my web site, but the more important intelligence this gives me is that this prospect is online and engaged with our solution. If a sales rep can call 40 people in a day, and a blast to 5000 prospects shows me that 40 of those prospects are online and engaged, it doesn’t take a genius to figure out who to call. That rep’s going to have a much more productive day calling people who they know are in the office. Less voicemails, less brushoffs, less calls to people who don’t work there anymore.

Bordering on privacy issues, I know, but an amazing level of detail, and invaluable if used wisely. More importantly, it goes to show what is possible in a stable, shared application environment.
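The kind of tracker described in that quote is typically built on an embedded tracking image: each outgoing email carries a unique 1x1 image URL, and when the recipient’s mail client fetches it, the server records the open. Here is a minimal sketch of that general pattern, not JobScience’s actual implementation; all names and the in-memory store are my own invention:

```python
# Minimal sketch of email "open" tracking via a 1x1 tracking image.
# Each sent message embeds a unique image URL; fetching it records an open.

import time

OPENS = {}  # message_id -> list of open timestamps (in-memory for the sketch)

def pixel_url(base, message_id):
    """URL embedded in the outgoing email as an <img src=...> tag."""
    return f"{base}/pixel/{message_id}.gif"

def handle_pixel_request(message_id):
    """Called when the tracking image is fetched; records the open."""
    OPENS.setdefault(message_id, []).append(time.time())
    return b"GIF89a..."  # would be a real 1x1 transparent GIF in practice

def engaged_prospects():
    """Message ids with at least one recorded open: who to call first."""
    return [mid for mid, opens in OPENS.items() if opens]
```

The “who to call” list in the quote falls out of a query like `engaged_prospects()`: the rep dials only the ids that have fired the pixel recently.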

By the way, this direct integration with a given CRM platform by a “value added extender” is an interesting twist to the dependency issues that Bob Warfield writes about on the SmoothSpan blog. JobScience’s products are services that become a feature of the Salesforce.com destination both visually as well as functionally. Bob’s point about being a component provider to the actual product is well taken, and I wonder if the only exit strategy for these guys is acquisition by Salesforce. What else can they hope for as a company dependent on Salesforce.com? Talk about cloud lock-in.

Update on DataPortability.org activities from the source

Interesting interview of Chris Saad and Frank Arrigo (Chris is organizing DataPortability.org, and Frank is a Microsoft employee that is somehow related) by Robert Scoble.

Interesting in here is the update on what DataPortability.org is focusing on right now–standard “best practices” for open data, and a “logo” to indicate the standards are followed–plus the discussion of Silverlight, etc.

The Compute Grid is Like Nothing Before It

January 11, 2008 Leave a comment

In a continuation of the discussion regarding Nick Carr’s “The Big Switch: Rewiring the World from Edison to Google” and Yochai Benkler’s “The Wealth of Networks: How Social Production Transforms Markets and Freedom”, I want to focus today on the shortcomings of the electric utility analogy–or any other analogy I have heard of, for that matter–in describing the compute capacity utility story. It is important to note that, while the electricity-as-utility story has dominated the utility computing discussion to date, other interesting analogies have been put forth lately that enlighten some aspects of the compute story while clouding (no pun intended) others.

Let’s start with the electric utility analogy that Carr focuses on in his work. Nick does an excellent job of laying out both the history of electric production and distribution in the United States, as well as mapping those to similar aspects of compute utilities. As Nick puts it:

“The commercial and social ramifications of the democratization of electricity would be hard to overstate…Cheap and plentiful electricity shaped the world we live in today. It’s a world that didn’t exist a mere hundred years ago, and yet the transformation that has played out over just a few generations has been so great, so complete, that it has become almost impossible for us to imagine what life was like before electricity began to flow through the sockets in our walls.

Today we’re in the midst of another epochal transformation, and it’s following a similar course. What happened to the generation of power a century ago is now happening to the processing of information. Private computer systems, built and operated by individual companies, are being supplanted by services provided over a common grid–the Internet–by centralized data-processing plants. Computing is turning into a utility, and once again the economic equations that determine the way we work and live are being rewritten.”

OK, so it’s hard to argue with the basic premise that we are undergoing a change that is similar to the introduction of cheap, readily available electricity in the early twentieth century. Nick is a master at pointing out how the evolution of electric technology fed changes in societal norms, and vice versa. “It’s a messy process–when you combine technology, economics and human nature, you get a lot of variables”, he writes, “but it has an inexorable logic, even if we can trace it only in retrospect.”

Unfortunately, the same can be said about a variety of other technical advances that didn’t end up looking like the electric marketplace; take manufacturing, food production, and music and film production, for example. All of these have elements that can be seen as paralleling utility computing, social production or both. Yet none of them really map completely, and the flaws in the analogy have a “chaos”-like ability to magnify as history bears out.

Now, to Nick’s credit, he does start Part 2 of the book–his in depth comparison of the social implications of utility computing–with the following comments:

“Before we can understand the implications for users…we first need to understand how computing is not like electricity, for the differences between the two technologies are as revealing as their similarities.”

He goes on to highlight the following differences, using them to make key points about how the effects of compute utilities on society may not be nearly as beneficial as the effect of electric utilities:

  1. With electricity, the applications of the commodity lie outside of the utility–i.e. the appliances, electronics, lighting, etc. that consume the power. With computing, the applications themselves are deliverable over the network, and can be shared by anyone that wants to (and is allowed to) use them.
  2. Computing is much more modular than the electric grid, meaning that the components that make up the commodity service (storage, processing, networking) can be split up and offered by a variety of different parties.
  3. The compute utility is programmable; it can be made to perform a variety of custom tasks as required by its customers. Electricity from your basic power outlet is a fixed-state commodity–there are exacting standards for what it is and how it is delivered, as well as laws of physics that limit how it can be used.
  4. Choosing an electric utility was generally an all-or-nothing choice; you either got power from the grid, or you had your own power generation. The modularity of computing, however, allows for a slow transitional change from private to public consumption. (I think there is a serious flaw in this analogy, for what it’s worth. Look at the increasing installation of solar power systems in residential applications–all while remaining a part of the grid. This seems to indicate a gradual transition to a hybrid public/private power grid in the electricity space.)
  5. The compute utility allows others to participate directly in creating value for the utility, and do so cheaply and simply. Providing power to the electric grid has always been expensive and very technical (as, I have to admit, is true in my objection in point 4).

These are excellent examples, and are all important to note (even point 4). However, I think Nick fails to note the most important difference between electricity and data processing; namely

data != electricity

There are huge implications to what is being moved over the network versus what is being moved over the power grid, beyond just the programmable elements. These differences are critical when analyzing the compute-as-utility story, and it’s a shame he doesn’t address them.
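Point 3 in the list above, the programmability of the compute utility, is worth making concrete: a customer can hand the utility arbitrary logic, which no power outlet can accept. A tiny in-process sketch of the Map/Reduce idea (the example job and all names are mine, and no real framework is assumed):

```python
# Point 3 in action: the compute "commodity" accepts customer-defined code.
# A minimal in-process MapReduce sketch; a real utility would distribute
# these same map and reduce functions across thousands of machines.

from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Apply mapper to each record, group values by key, reduce each group."""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# One customer's arbitrary job: counting words.
lines = ["the cloud is programmable", "the grid is not"]
counts = map_reduce(
    lines,
    mapper=lambda line: [(word, 1) for word in line.split()],
    reducer=lambda word, ones: sum(ones),
)
```

The utility runs whatever mapper and reducer the customer supplies; contrast that with electricity, where the "program" always lives in the appliance on the customer's side of the socket.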

For example, checking his index for the terms “security”, “data security” or “software security” shows exactly zero entries. When talking about the transition of data versus electricity, it seems critical that one consider the sensitivities that people and organizations have about how it is transmitted. “Privacy” is the subject of a seven-page essay highlighting what we have been willing to give up so easily, but he basically uses the subject to highlight a specific trait of the network without investigating how related issues will cause compute capacity to differ from electricity. My own opinion is that these two subjects–security and privacy–are exactly what will slow down the “total conversion” to centralized computing utilities for customers like banks, classified federal bureaucracies and health care. I spoke of this in detail before.

As noted earlier, others have commented on some of these issues, and have used other analogies like manufacturing to counter the electricity analogy. One excellent example of this is an article by Michael Feldman of HPCwire in which he argues that a better analogy is food production. As he puts it:

“When food became a commodity, agribusiness conglomerates took over and replaced lots of family farms with much larger, more efficient “factory farms.” Today, crops like wheat and soybeans are typically grown on multi-hundred acre land parcels. But not all food products are easily commoditized. Specialty fruits, vegetables, and organic products don’t usually lend themselves very well to large-scale production. According to the U.S. Department of Agriculture about a quarter of farm revenue is still generated on family farms. Many of these farms are focusing on these specialty items and have formed cooperative arrangements in order to remain economically viable.”

This analogy works from the standpoint that it describes a system in which people care about the varying qualities of the service output by the “utility”. For example, we all know the amount of effort spent by the FDA and others to make sure our meats aren’t tainted with deadly bacteria. In fact, some specialty food producers have built their marketing message around food safety and health, and many of those are small, boutique producers. Other small players have provided specialty food items to very specific markets with great success. I have believed all along that the compute market will evolve into a few major players and hundreds (thousands?) of small boutique specialty players, especially in the SaaS space. (“Special SaaS with that?” Please forgive me…)

Unfortunately, the food analogy also breaks down in one critical way:

data != food

In this case, it’s the real, physical nature of food, and the accompanying issues with logistics, cost of production (including fixed real estate costs), and brick-and-mortar sales that don’t compare well to the zero-marginal-production-cost nature of data. Replicating food and shipping it to a new customer destination are expensive acts; doing the same with data costs nearly zero. Furthermore, geographic location means nearly nothing for computing. Food, on the other hand, is subject to cultural, climatological and logistical limitations on where it can be produced and sold.

For this reason, computing will tend to a much higher level of centralization than food production has seen. Intuitively, one must believe that this will lead to larger displacement of private data centers than would have happened if it was more expensive to share infrastructure.

I’m still trying to digest all of this, but I have a growing feeling that Carr’s dependency on the “Edison analogy” (to coin a phrase for no good reason) actually limits the likelihood of some of his arguments. He also seems to assume that the economics of the Web won’t evolve much from where they are today–largely advertising-based, with millions of people willing to do stuff for free and few existing cultural industries willing to produce for online audiences. I want to bring Benkler back into the conversation when I cover this in a later post.

(One side bar on the commercial production of online content: did anyone see the news from NBC today?)

7 Businesses to Start in 2008

January 8, 2008 4 comments

Rather than offer a list of predictions for 2008, I thought I’d have some fun suggesting some businesses that could make you money in 2008 or the few years following.

  1. SaaS/Enterprise data conversion practice: All those existing enterprise apps will need to have their data migrated to that trendy new SaaS tool; and should anyone actually decide they hate their first vendor, they’ll be spending that money again to convert to the next choice. Perhaps they’ll even get fed up and return to traditional enterprise software. Easy money.
  2. Enterprise Integration as a Service: No matter how much functionality one SaaS vendor will provide, it will never be enough. Integration will always be necessary, but where/how will it be delivered? Go for the gold with a browser based integration option. Just figure out how to do it better/cheaper/faster than Salesforce.com, Microsoft, Google, Amazon, etc…
  3. SaaS meter consolidation service: Given the problem stated in 2 above, who wants 5 or 6 bills where it’s impossible to trace the cost of a transaction across vendors? Provide a single billing service that consolidates the charges of the vendor stable and provides additional analytic capabilities to break down where costs and revenues come from. Then get ready to defend yourself against the data ownership walls put up by those same vendors (see 4 below).
  4. SaaS/HaaS Customer litigation practice: Given the example of Scoble’s experience with Facebook, there are clearly a lot of sticky legal issues to be worked out about “who owns what”. Ride that gravy train with litigation expertise in data ownership, vendor contractual obligations and the role of code as law.
  5. SaaS industry (or SaaS customer) data ownership rights lobbyist: Given 4 above, each industry player is going to want their voice in congress to protect/promote their interest. Drive the next set of legislation that screws up online equality and individual rights.
  6. Sys Admin retraining specialist: All those sys admins who will be out of work thanks to cloud computing are going to need to be retrained to monitor SLAs across external vendor properties, and to get good at waiting on hold for customer service representatives.
  7. Handset recycling services: The rate at which “specialized” hardware will evolve will raise the rate of obsolescence to a new high. Somebody is going to make a killing from all those barely used precious metals, silicon and LCD screens going to waste. Why not you?
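Idea 3 above, the meter consolidation service, is at heart an aggregation job over per-vendor usage records. A toy sketch of the core of it; the vendor names, record fields, and amounts are all invented for illustration:

```python
# Toy consolidation of metered charges from several SaaS vendors into one
# bill, broken down by vendor and by internal cost center. Data is invented.

from collections import defaultdict

charges = [
    {"vendor": "crm-vendor", "cost_center": "sales", "amount": 120.00},
    {"vendor": "storage-vendor", "cost_center": "sales", "amount": 40.50},
    {"vendor": "crm-vendor", "cost_center": "support", "amount": 75.25},
]

def consolidate(records):
    """Return totals keyed by vendor and by cost center, plus a grand total."""
    by_vendor = defaultdict(float)
    by_center = defaultdict(float)
    for rec in records:
        by_vendor[rec["vendor"]] += rec["amount"]
        by_center[rec["cost_center"]] += rec["amount"]
    return {"by_vendor": dict(by_vendor),
            "by_center": dict(by_center),
            "total": sum(r["amount"] for r in records)}

bill = consolidate(charges)
```

The hard part of the business is not this aggregation, of course; it is getting the metering feeds out from behind each vendor’s data ownership walls in the first place.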