Some recent podcasts

I wanted to call attention to four good podcasts that I listened to recently. The first is from IT Conversations and the Interviews with Innovators series hosted by Jon Udell. In this one, he speaks with Raymond Yee of UC Berkeley, discussing mashups. I especially liked the discussion about public events and getting feeds from the local YMCA. I always wind up entering all of my kids’ games into iCal from their various sports teams; it would be great if I could simply subscribe to a feed somewhere on the internet. Jon himself called out this part of the podcast on his own blog.

The next two are both from Dana Gardner’s Briefings Direct series. The first was a panel discussion from his aptly-renamed Analyst’s Insight series (it used to be SOA Insights when I was able to participate, but even then, the topics were starting to go beyond SOA) that discussed the recent posts regarding SOA and WOA. It was an interesting listen, but I have to admit, for the first half of the conversation, I was reminded of my last post. Throughout the discussion, they kept implying that SOA was equivalent to adopting SOAP and WS-*, and then using that angle to compare it to “WOA,” which they implied was the least common denominator of HTTP, along with either POX or REST. Many people have picked up on one comment, which I believe was from Phil Wainewright, who said, “WOA is SOA that works.” Once again, I don’t think this was a fair characterization. First off, if we look at a company that is leveraging a SaaS provider like Salesforce.com, Salesforce.com is, at best, a service provider within their SOA. If the company is simply using the web-based front end, then Salesforce.com isn’t even a service provider in their SOA; it’s an application provider. Now, you can certainly argue that Amazon and Google are service providers, and that there are some decent examples of small companies successfully leveraging their services, but we’re still a far cry from having an enterprise SOA that works, whichever technology you look at. So, I was a bit disappointed in this part of the discussion. The second half of the discussion got into the whole Microhoo arena, which wound up being much more interesting, in my opinion.

The second one from Dana was a sponsored podcast from HP, with Dana discussing their ISSM (Information Security Service Management) approach with Tari Schreider. The really interesting thing in this one was hearing about his concept of the 5 P’s, which was very familiar to me, because the first three were People, Policies, and Process (read this and this). The remaining two P’s were Products and Proof. I’ve stated that products are used to support the process, if needed, typically making it more efficient. Proof was a good addition, which basically says that you need a feedback loop to make sure everything is doing what you intended it to do. I’ll have to keep this in mind in my future discussions.

The last one is again from IT Conversations, this time from the O’Reilly Open Source Conference Series. It is a “conversation” between Eben Moglen and Tim O’Reilly. If nothing else, it was entertaining, but I have to admit, I was left thinking, “What a jerk.” Now, clearly Eben is a very smart individual, but just as he said that Richard Stallman would have come across as too ideological, he did the exact same thing. When asked to give specific recommendations on what to do, Eben didn’t provide any decent answer; instead he said, “Here’s your answer: you’ve got another 10 years to figure it out.”

It’s not about the technology!

I was listening to David Linthicum’s latest podcast in my car, and it just struck me the wrong way. I was literally yelling, “no, no, no” and wanted to beat my head against the steering wheel. This particular podcast was related to this post by Dave. In the podcast, he also made plenty of references to this post from Dion Hinchcliffe. The podcast, these blog posts, and others by Dana Gardner and Joe McKendrick are all about this notion of WOA, or Web-Oriented Architecture. The reason it hit me the wrong way is that it just screamed out to me as another case of an over-hyped, technology-driven approach that tries to give the impression that it will make everything better. I just don’t feel that’s the case. Now, since Dave reads this blog, I should point out that he does have a track record of railing against enterprises and vendors just throwing technology at problems, so don’t hold any of this against him. This is simply how I perceived the podcast, and just as people have perceived my posts in ways that I hadn’t intended, I’m sure my reaction to this podcast may be a bit of a surprise to Dave.

For the record, I don’t have any issues with SaaS, REST, WOA, Web 2.0, or anything else. What I do have an issue with, however, is thinking that the use of any of these technologies (and I may as well include SOAP, WS-*, and all of those in the mix) will heal all that is wrong with large enterprise IT. If an organization chooses to use Salesforce.com’s on-demand CRM instead of SAP or Oracle CRM deployed within its firewall, does anything really change? Yes, there’s no doubt that there are potential benefits as far as getting CRM up and running goes, but then what? In the enterprises I’ve worked with, the bulk of the projects were not about implementing some new vendor package. Some of that was always occurring, but there was plenty more that was about integration, enhancements, and other development activities. As we’ve already seen in the SOA space, defining SOA as strictly a technical thing limits it to just another distributed computing technology that provides incremental improvements, at best.

If SOA is instead looked upon as a new way of organizing the assets and the organization, now we’re talking about dramatic change. We’re talking about breaking down silos and managing dependencies on boundaries that make business sense rather than boundaries that were arbitrarily defined by the projects at hand. When IT can’t get something done in the time the business wants, it’s usually due to poor processes and politics, not due to their use of WS-*, REST, or SaaS. If you choose to go down the SaaS route, or any other offsite approach, it may get that portion done faster, but it also takes a whole bunch of things and makes them someone else’s problem, rather than solving those problems. At that point, you’re now in line with all the rest of their customers, so that thing had better be something that you don’t see as a competitive advantage. This doesn’t mean SaaS or WOA is bad, but it does mean that you need to have IT working properly to make good decisions on how to use SaaS or WOA appropriately. I stated this in a post back in December of 2006. When an organization views itself as a collection of services, it is now in a position to pick those services and determine where a service provider, whether in the traditional sense or in the SaaS/WOA sense, can be leveraged optimally. There are plenty of things that should be someone else’s problem. There are plenty of things that shouldn’t. Making IT work better is a problem that the organization must address, and simply throwing new technologies at it has a proven track record of not fixing it. At the same time, these problems are not going to be fixed in the timeframes that the media would like, so we’re going to have to deal with naysayers when we don’t have dramatic success stories after a year, two years, or, now in the case of SOA, five or more years. It is a difficult proposition, and honestly, one that may be a low priority for many organizations. If things are going well, why not treat it as a slow, incremental improvement that we only notice when we look back to where we were 10 years ago? If things are going poorly, however, something needs to change. Make sure you take the time to change the right thing, rather than just putting more lipstick on the technology pig.

Software in the Cloud versus Software as a Service

I recently listened to a Churchill Club podcast from ZDNet which was a recording of a debate between Marc Benioff, CEO of Salesforce.com, and SAP Chairman Hasso Plattner. Besides being very entertaining and a great example of the cultural differences of the two executives involved, the podcast was also a good discussion around Service Management.

One of the things that came across to me throughout the discussion is that there’s a fundamental difference between what I’ll term Software in the Cloud and Software as a Service. Marc took a few jabs at Hasso in discussing traditional enterprise software, talking about how the installation CDs show up in the mail and the customer is left to install, customize, maintain, etc. until the next version is released and the vendor comes back and asks for more money. A bit extreme, but it served to illustrate his point. Take the money portion out of it, because there’s no such thing as a free upgrade: if you’re paying subscription fees, you’re paying for upgrades. Anyway, I began thinking about this whole space and asking the question, “Where’s the service?” Clearly, by having someone else host the application, there are efforts associated with maintaining the system that go away. But is this a service?

To me, the difference is service management. As I’ve emphasized in past blogs, putting up an interface with an associated endpoint behind it may make you a service provider in the technical sense, but I don’t think it makes you a good service provider. Being a good service provider requires bi-directional communication between the provider and the consumer on how the service is being used, new capabilities desired, new capabilities coming, etc. Therefore, in the case of Software in the Cloud, there’s just as much risk of the provider being a faceless entity as there is in traditional enterprise software: it happily takes your subscription fees, but that’s it, and you are now at its mercy for the future of that system. To provide Software as a Service, the relationship needs to go beyond just providing logins and a URL. I’m sure there are many things placed in contracts to try to better define “the service,” but in reality, service is a differentiator. Look at any industry and you’ll see that there’s a relationship between cost and service. In general, for a given capability, you can provide it at low cost, but usually with relatively poor service, or you can provide it at a higher cost, with typically better service. The challenge is that the larger you become, the greater the challenge of providing consistent service. You can differentiate by cost levels, as many software companies (web-based or not) do, but then you’re at risk of damaging your ability to associate great service with your brand. I don’t know if Salesforce.com does this or not, but if they do, are they at risk of taking criticism on the whole “as a Service” label because their lower-paying customers see them as a faceless entity? In general, I’ve found that companies are perceived as providing either great support or lousy support, and it has very little correlation to how much money you’ve spent. Paying for 9×5 support instead of 24×7 doesn’t typically change how the person answering the phone or the sales staff treats me.

So, the gist of this post is a simple message. If you’re going to call yourself a service provider, simply sticking something in the cloud, in the service repository, etc. is not enough. You need to define what your service is, and make sure everything you do is valuable not from your point of view, but from the point of view of your consumers, the ones to whom you are providing the service.

Micromessaging

I had the opportunity to attend a great presentation from Stephen Young, Founder and Senior Partner of Insight Education Systems. The talk was around the concept of micromessaging, which can be further broken down into MicroInequities and MicroAdvantages. To understand these, Stephen started with the difference between denotation and connotation. Denotation represents the words we use, while the connotation behind those words reveals the true meaning of what is being said. This can involve body language, inflection, tone of voice, and much more. As he pointed out, humans (or humanoids) have been communicating for hundreds of thousands if not millions of years. The written word has only been around for a fraction of that time. Therefore, much of how and what we communicate is conveyed by more than just the words. He did an exercise with us where we tried to describe what a dog does when it is happy. While most dog owners know within a fraction of a second the mood of their dog when they walk in the door, trying to convey that recognition process to someone else solely through words is incredibly difficult. In other words, our brains are very well tuned to picking up on things outside of the words in a conversation.

Getting back to the concept of micromessaging, MicroInequities are all of the very small things that we do in a conversation that have negative connotations, such as folding your arms, losing eye contact, or giving a “ho-hum” response to the work of some individuals while lavishing excessive praise on others when the outcome of each was similar. In contrast, MicroAdvantages are positive micromessages. The FAQ at the Insight Education Systems page does an even better job of explaining this. A very simple example of MicroInequities was the use of apologies in the workplace (and elsewhere). How often have you heard someone say, “If I offended you, I’m sorry”? Right off the bat, the use of the word “if” makes it a qualified apology, and not a sincere one. Stephen said, “If you step on someone’s foot, do you say, ‘If that hurt, I’m sorry’? No, we simply say, ‘I’m sorry.’” He used the example of Michael Nifong, the former district attorney in the Duke lacrosse case. His apology contained this:

To the extent that I made judgments that ultimately proved to be incorrect, I apologize to the three students that were wrongly accused. I also understand that, whenever someone has been wrongly accused, the harm caused by the accusations might not be immediately undone merely by dismissing them.

Analyzing this, we see that the apology immediately starts out with a qualifier. It then uses the phrase “that ultimately proved to be incorrect,” which connotes “I still think I was right.” His apology is directed only at three students, whom he does not name, which excludes the impact to Duke University, the coach who lost his job, the team that lost its season, the families of the players, etc. On top of that, he didn’t even show enough respect to mention the players by name. The gist of this was that apologies should be about the impact, not the intent.

The talk went on to demonstrate the impact of our body language when listening and how it causes speakers to behave differently even when conveying the same or similar information, as well as the differences (and similarities) in different areas of the world. Already today, I have stepped back and adjusted the wording in an email I was composing as a result of this talk. I plan on putting his book on my Amazon wish list and encourage all of you to do the same, and, if you have the chance, hear him speak or bring him to your organization for a workshop.

Earthquake in St. Louis?

At about 4:35 AM Central Daylight Time this morning, I was woken up by my house getting a pretty good shake for about 10-15 seconds. While this may not be news for a good portion of my readers who probably reside in the San Francisco Bay Area, it’s certainly news when you live here in the Midwest. Admittedly, we do have the New Madrid Fault, which did have a major quake over a century ago, but tremors are a very rare occurrence, in contrast to the West Coast. I lived out there for four summers and then three full years and felt about three quakes in that time. I’ve been here in the St. Louis area now for over ten years and this is the first one. Anyway, it was different from a Bay Area quake. Of the three I felt out there, one was more of a rolling motion, but that was probably attributable to me being on the seventh story of a building. It gave me a weird sensation like dizziness, but without the internal feeling you’d normally have in your head. The other two, where I was closer to the epicenter, felt like someone slamming a door really hard: a quick jolt and it was done. This one was a good shake, similar to when I lived near Chicago and we had train tracks on the other side of the street, but much stronger. Of course, probably the most unusual thing of all was that I was the only person in my house who woke up. I’m normally the one who can sleep through anything. Oh well.

Update: It’s been confirmed. It was a 5.4 magnitude quake centered near West Salem, Illinois, about 130 miles from St. Louis. That was stronger than any quake I experienced in the Bay Area.

My First Computer(s)

Charles Cooper reminisced today about his first computer, an IBM PCjr, and encouraged others to share their stories. Not counting the Basic Programming cartridge for the Atari 2600 and its awkward controller, my first computer was the Texas Instruments TI-99/4A.
I received this as a present from my parents while in junior high school (grades 6-8) and remember the days of loading/saving programs from the cassette player interface, subscribing to a print magazine dedicated to it, spending many an afternoon typing in all of the BASIC code in that magazine, as well as building some of my own games.

The one thing that still sticks out in my mind is the sprite-handling primitives that TI provided in its BASIC language, including things like collision detection. It allowed me to create some pretty cool games very easily. I remember thinking about this later in life as I became an Apple owner with the Apple //c my freshman year of high school (I debated quite a bit between the //c and the PCjr) and then moved up to the Apple ][gs my senior year, using that machine throughout my 7 years in college (undergraduate and graduate school; by the end of my days at the University of Illinois, it was essentially a dumb terminal since it didn’t even have a hard drive).

I wrote programs in BASIC on both the PCs in my high school lab as well as my Apple machines, and neither had the robust graphics libraries of my TI-99/4A. It makes for an interesting debate on flexibility versus productivity. On the one hand, the TI-99/4A was very well suited for game development, winning on the productivity side. On the other hand, computers like the Apple //c, ][gs, and the IBM PCjr were more focused on flexibility and supporting a broad range of tasks. Clearly, flexibility won out over the next 15 years, but we’re now entering a new phase with mobile devices and iPods. Will the pendulum swing back toward productivity, with focused development and tools for a narrow range of capability for which the device is best suited, or will flexibility win out?

The whole computing space is a very interesting ecosystem because of the notion of the platform and how that platform is exposed to the consumer. While other industries leverage similar concepts, such as the auto industry building different lines of cars based upon the same basic frame or drive-train, they don’t expose those platforms to the consumer. The computing space does. When I choose to buy a laptop, a mobile phone, or increasingly, a media player, I’m choosing a platform and that decision has implications on how I can leverage that device in the future. While the hardware side of things has evolved to create an ecosystem of commodity providers that compete solely on cost and quality, the software side of things still competes on consumer mindshare which can be a disincentive to creating open, standard libraries that enhance productivity. While many of us in the industry may clamor for open platforms, I doubt too many consumers outside of us are walking into Best Buy asking for a phone that supports the Open Mobile Alliance technical specifications.

Piloting within IT

Something I’ve seen at multiple organizations is problems with the initial implementation of new technology. In a perfect world, every new technology would be implemented using a carefully controlled pilot that exercised the technology appropriately, allowed repeatable processes to be identified and implemented, and added business value. Unfortunately, it’s that last item that always seems to do us in. Any project that has business value tends to operate under the same approach as any other project for the business, which usually means schedule first, everything else second. As a result, sacrifices are made, and the project doesn’t have the appropriate buffers to account for the lack of experience the organization has. Even if professional services are leveraged, there’s still a knowledge gap in relating the product capabilities to the business need.

One suggestion I’ve made is to look inside of IT for potential pilots. This can be a chicken-and-egg situation, because sometimes funding cannot be obtained unless the purchase is tied to a business initiative. IT is part of the business, however, and some funding should be reserved for operating efficiency improvements within IT, just as it should be for other non-revenue-producing areas, such as HR.

BPM technology is probably the best example to discuss this. In order to fully leverage BPM technology, you have to have a deep understanding of the business process. If you don’t understand the processes, there’s no tool that you can buy that will give you that knowledge. There are packaged and SaaS solutions available that will give you their process, but odds are that your own processes are different. Who is the keeper of knowledge about business processes? While IT may have some knowledge, odds are this knowledge resides within the business itself, creating the challenge of working across departments when trying to apply the new technology. These communication gaps can pose large risks to a BPM adoption effort.

Wouldn’t it make more sense to apply BPM technology to processes that IT is familiar with? I’m sure nearly every large organization purchases servers and installs them in its data center. I’m also quite positive that many organizations complain about how long this process takes. Why not do some process modeling, orchestration, and execution using BPM technologies in our own backyard? The communication barriers are far less, the risk is less, and value can still be demonstrated through the improved operational efficiencies.
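To make this concrete, here is a minimal sketch, in Python, of what treating that server-provisioning work as an explicit, executable process might look like. The Process class, step names, and team names are entirely hypothetical; a real pilot would use a BPM suite’s modeling and execution tooling, but even a toy model like this forces the process knowledge out of people’s heads and into something you can measure and improve.

```python
# Hypothetical sketch: modeling an internal IT process (server procurement
# and installation) as an explicit, executable sequence of steps. A real BPM
# engine adds branching, human tasks, timers, and metrics; this only
# illustrates making the process explicit.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Step:
    name: str
    owner: str                       # team responsible for this step
    action: Callable[[Dict], None]   # what the step does to the request

@dataclass
class Process:
    name: str
    steps: List[Step] = field(default_factory=list)

    def run(self, request: Dict) -> Dict:
        # Execute each step in order, logging who owns it.
        for step in self.steps:
            print(f"[{self.name}] {step.name} (owner: {step.owner})")
            step.action(request)
        return request

# Hypothetical server-provisioning process for an internal IT pilot.
provisioning = Process("server-provisioning", steps=[
    Step("Approve purchase request", "IT finance", lambda r: r.update(approved=True)),
    Step("Order hardware", "Procurement", lambda r: r.update(po_number="PO-123")),
    Step("Rack and cable", "Data center ops", lambda r: r.update(racked=True)),
    Step("Install OS and baseline config", "Server engineering", lambda r: r.update(os="installed")),
    Step("Hand off to requesting team", "Service desk", lambda r: r.update(status="complete")),
])

print(provisioning.run({"requested_by": "app-team-A"}))
```

Even at this toy level, the exercise surfaces who owns each step and where requests sit waiting, which is exactly the knowledge a later business-facing BPM effort will need.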

My advice if you are piloting new technology? Look for an opportunity within IT first, if at all possible. Make your mistakes on that effort, fine tune your processes, and then take it to the business with confidence that the effort will go smoothly.

Infrastructure in the Cloud

James Urquhart sent me an email about one of his posts and invited me to join the conversation. After reading his post and Simon Wardley’s post, it was interesting enough that I thought I’d throw in my two cents.

The topic of discussion was Google’s new App Engine. Per Google’s site:

Google App Engine lets you run your web applications on Google’s infrastructure. App Engine applications are easy to build, easy to maintain, and easy to scale as your traffic and data storage needs grow. With App Engine, there are no servers to maintain: You just upload your application, and it’s ready to serve your users.

The theme of James’ post, and why I think he invited me into the conversation, is whether this matters to the large enterprise. I tend to agree with James. While I think this is cool technology, in its present form it’s probably of little value to the typical large enterprise. At the same time, I would definitely qualify this, and Amazon’s cloud of services, as disruptive technologies. I can’t help but find myself making mental comparisons to Clayton Christensen’s discussion of steel mills. Smaller steel mills came along and catered to the low-end, low-margin area of the market that the larger, integrated steel mills were happy to give up. Over time, however, those smaller mills expanded their offerings until the business model of the larger mills was completely disrupted. Will something similar occur in the infrastructure space?

There are certainly parallels in the potential markets. Big enterprises are not the target of Google or Amazon, just as the smaller steel mills focused on rebar rather than on the more expensive and potentially lucrative market for structural beams or sheet metal. One key difference, however, is that it’s hard to figure out who the “big steel mill” is in this case. Clearly, both Google and any major enterprise currently buy servers. So, Google is not disrupting HP, IBM, Dell, etc. What we’re really talking about disrupting is the internal IT data center. In most cases, save for outsourcing the data center to EDS or IGS, there is no business to be disrupted. The better comparison to the steel mills would be companies that leveraged the products of the smaller mini-mills disrupting companies that leveraged the products of the larger, integrated mills. While efficient cost controls are certainly part of the equation, there’s much more that goes into disruption.

In the end, it’s very clear to me that tools like Google App Engine are good for the industry as a whole. They cater nicely to the low end of the market, and Google’s size can sustain low margins or even a loss on making these services available. Over time, some of the companies that leverage them will become bigger companies, making additional requests of Google, which will in turn evolve the product, with each evolution making it more attractive to a broader set of customers, eventually including the big enterprise.

Aligning REST with Services

I’ve been meaning to call out a blog from Anne Thomas Manes posted back in March, along with a message in the Yahoo SOA group, as they were finally something that, in my opinion, added some useful information to the ever-present REST versus SOAP/WSDL debate. Normally, I stay out of this religious war when it kicks up in the blogs or the Yahoo SOA group, but that’s not to say that I don’t care about it. The fact is, I’m a practicing enterprise architect in a Fortune 500 company, so I need to be providing guidance when a team comes to me asking about REST versus SOAP/WSDL.

In most of the conversations about REST and SOAP/WSDL, it’s usually a comparison of a single SOAP endpoint (a single URI) to a single REST endpoint (again, a single URI). Invariably, the conversation winds up being about the uniform interface (GET/PUT/POST/DELETE when using HTTP) versus the non-uniform interface (whatever operations are defined in WSDL) tunneled through the transport (POST when using HTTP) in SOAP. I’ve always felt that this was a bit of an apples-and-oranges debate, because the REST endpoint is exposing a resource, and the SOAP endpoint is exposing a service. When thinking of services in a conceptual, technology-independent manner, the mapping to a resource just didn’t seem as straightforward.

The comment that Anne made that helped put things in the right perspective was this:

Service consumers interact with the service through the set of resources it exposes. In other words, the resource model is the interface to the service. Each resource exposes a uniform interface (e.g., GET, PUT, POST, and DELETE), but an individual resource is not the complete service.

This made it very clear. If you’re trying to go from a conceptual service model to a design based on REST, an individual service does not equate to a single REST endpoint. Rather, it equates to a collection of REST endpoints that together comprise the resource model associated with that service. In my opinion, the lack of an understanding around this concept is probably also why most of the “REST” services out there really aren’t REST, but rather are XML over HTTP without the WSDL and SOAP envelope. The people involved are still trying to do a single endpoint comparison and not thinking about the resource model as a whole.
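To make that concrete, here is a minimal sketch, assuming Flask and a hypothetical order management service. No single route below is “the service”; the service is the whole resource model (the order collection, individual orders, and their line items), with each resource exposing the uniform interface.

```python
# Hypothetical sketch: one conceptual "Order Management" service expressed
# as a collection of REST resources. Each resource supports the uniform
# interface; together they comprise the service.

from flask import Flask, jsonify, request

app = Flask(__name__)

ORDERS = {}    # in-memory stand-in for the service's state
NEXT_ID = 1

@app.route("/orders", methods=["GET", "POST"])
def orders_collection():
    global NEXT_ID
    if request.method == "POST":              # create a new order resource
        ORDERS[NEXT_ID] = request.get_json()
        NEXT_ID += 1
        return jsonify(id=NEXT_ID - 1), 201
    return jsonify(sorted(ORDERS.keys()))     # list the existing order resources

@app.route("/orders/<int:order_id>", methods=["GET", "PUT", "DELETE"])
def order_item(order_id):
    if request.method == "PUT":               # replace this order's representation
        ORDERS[order_id] = request.get_json()
    if request.method == "DELETE":            # remove this order
        ORDERS.pop(order_id, None)
        return "", 204
    return jsonify(ORDERS.get(order_id))      # fetch one order

@app.route("/orders/<int:order_id>/lineitems", methods=["GET"])
def order_line_items(order_id):
    # A related resource; still part of the same conceptual service.
    return jsonify(ORDERS.get(order_id, {}).get("lineitems", []))
```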

Now, I’ll admit that this insight still doesn’t solve the uniform versus non-uniform debate, but I do think that it brings us to the point where a valid comparison of approaches for a particular problem could be taken on within an organization. Thanks Anne!

Dealing with committees

If you work in a typical large IT enterprise, it’s very likely that there are one or more committees that frequently receive presentations from various people in the organization. These can be some of the most painful meetings for an organization, or they can be some of the most productive. Here are some of my thoughts on how to keep them productive.

First, if you are a member of one of these committees, you need to understand your purpose. If you are part of the approval pipeline, then it should be clear. Your job is to approve or deny, period. If you can’t make that decision, then your job is to tell the presenter what information they need to come back with so you can either approve or deny. Unfortunately, many committee members forget this as they get caught up in the power that they wield. Rather than focusing on their job, they instead focus on pointing out all the things that the presenter did or did not do, regardless of whether those things have any impact on the decision.

Second, the committee should make things as clear as possible for the incoming presenters. I’ve had to endure my fair share of architecture and design reviews where the only guidance I received was, “You need to have an architecture/design review.” At that point, the presenter is left playing a guessing game on what needs to be presented, and it’s likely to be wrong. Nobody likes being stuck in meetings all day long, so give people the information they need to ensure that your time in that weekly approval meeting is well spent.

From the perspective of the presenter, you need to know what you want from the committee, even if it should be obvious. As I’ve stated before, the committee members may have easily lost sight of what their job is, so as the presenter, it’s your job to remind them. Tell them up front that you’re looking for approval, looking for resources, looking for whatever. Then, at the end of your presentation, explicitly ask them for it. Make sure you leave enough time to do so. It’s your job to watch the clock and make sure you are able to get your question answered, even if it means cutting off questions. Obviously, you should recognize when there is still some debate, and ask appropriately. In the typical approval scenario, you should walk out of the meeting with one of three possibilities:

  1. You receive approval and can proceed (make sure you have all the information you need to take the next step, such as the names of people that will be involved)
  2. You are denied.
  3. The decision is deferred. In this scenario, you must walk out of the meeting knowing exactly what information you need to bring back to the committee to get a decision at your next appearance. Otherwise, you’re at risk of creating an endless circle of meeting appearances with no progress.

I hope you find these tidbits useful. They may seem obvious, but personally, I find them useful to revisit when I’m in either situation (reviewer or presenter).

Is your vendor the center of the universe?

A recent post from James McGovern reminded me about some thoughts I had after a few different meetings with vendors.

Vendors have a challenge, and it all stems from a view that they can be the center of the universe. A customer buys their product and builds around it; the product thereby becomes the “center of the universe” for that customer, exerting a gravitational field that attempts to mandate that all other products abide by its laws of physics. In other words, every other product must integrate with it, but that’s the responsibility of those products. For reasons I went into in my last post, that doesn’t work well. It’s a very inward-facing view rather than a consumer-oriented view.

The challenge is that even if a vendor didn’t want to come across as the center of the universe, for some customers, it is required. For example, if a customer doesn’t have a handle on enterprise identity management, a vendor can shoot themselves in the foot if their own product doesn’t provide some primitive identity management capabilities to account for customers that don’t have an enterprise solution. In the systems management space, you may frequently hear the term “single pane of glass,” intended for the Tier 1 operations person. Once again, however, every monitoring system that deals with a specialized portion of the infrastructure will have its own console. It’s a difficult challenge to open up that console to other monitoring sources, and it’s just as difficult to open up the data and events to an outside system. So what’s an enterprise to do?

To me, it all comes back to architecture. When evaluating these products, you have to evaluate them for architectural fit. Obviously, in order to do that, you need to have an architecture. The typical functional requirements don’t normally constitute an architecture. You can make this as complicated or as simple as you’d like. A passion of mine tends to be systems management capabilities, so I normally address this in an RFI/RFP with just one question:

Are all of the capabilities that are available in your user-facing management console also available as services callable by another system, orchestration engine, or script?

Now, there are obviously follow-ons to this question, but this does serve to open up the communication. Simply put, the best advice for corporate practitioners is to ensure that you are in charge of your architecture and setting the laws of physics for your universe, not the vendors you choose.
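As an illustration of why that question matters, here is a small sketch of what a “yes” answer enables. The product, endpoint, and payload below are entirely hypothetical; the point is simply that a capability exposed as a service can be driven by a script or orchestration engine, not just by a person sitting at the console.

```python
# Hypothetical sketch: driving a management-console capability from a script.

import requests

# Hypothetical endpoint for a monitoring product's management API.
MGMT_API = "https://monitoring.example.com/api"

def acknowledge_alert(alert_id: str, note: str) -> None:
    # The same capability a Tier 1 operator would exercise in the console,
    # invoked here as one step of an automated runbook.
    response = requests.post(
        f"{MGMT_API}/alerts/{alert_id}/acknowledge",
        json={"note": note},
        timeout=10,
    )
    response.raise_for_status()

if __name__ == "__main__":
    acknowledge_alert("alert-42", "Auto-acknowledged by restart runbook")
```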

More on SOA Organizational Challenges

I just listened to Anne Thomas Manes’ podcast, “What Does it Take to Succeed With SOA?” that was released on Burton Group’s Inflection Point channel. One of the things that Anne pointed out is that many organizations do not have the right culture, especially on the business side, to promote the sharing of services. A culture of sharing, collaboration, and trust is required to be successful. She also pointed out that the IT organization frequently mirrors the organization of the business, and if those business organizations don’t share, it makes things very difficult.

I started thinking about the organizational aspects of this. Many people in IT only have awareness of the highest levels of the business organization, and it may not be apparent that there’s a problem. But here are two common patterns that clearly point out the potential problems. First, there’s what I’ll call regionalization. Whether we’re dealing with global entities, or national companies with regional presence, it’s very likely that they have business units aligned along regions rather than along business capabilities. There may be very valid reasons for doing this, but it must also be recognized that they may all be performing the same business functions, only with the expected regional nuances. While it’s oversimplifying the problem, if a company sells widgets in 52 countries, there should be enough commonality to warrant some common services that all of them can leverage. Second, there’s product specialization. I have first-hand experience with an organization that had separate business units (and associated IT staff) for different products that the company offered. There were opportunities for shared services that were recognized within IT, but never made it to reality because of the cultural challenges within the organization. In this case, the cultural challenges were within IT, but it’s just as likely that the same challenges existed on the business side as well.

As Anne rightly points out in the podcast, somehow we need to present the business value associated with the sharing. In some ways, this shouldn’t be any different than the value that makes companies choose to centralize sales and marketing. One of the big challenges faced within IT, however, is that the associated costs of redundant technology can be a bit harder to quantify. Yes, there are software licensing fees and maintenance agreements, but some of these one-time costs are glossed over in deference to the project schedule. While I’ve never personally been involved with a centralization of sales and marketing, I’m guessing that a big part of the equation was a reduction in cost attributable to staff reduction, or, in a more positive light, getting the cost-to-revenue ratio looking better through resource re-allocation. If we’re talking information technology, while it’s certainly a possibility, staff reduction doesn’t necessarily come into play, so that cost reduction must come from somewhere else, and at least in my experience, IT isn’t very good at quantifying it. So, across the board, our work is cut out for us, but that’s not to say it can’t be done. The one takeaway is to invest heavily in making your pitch. If you don’t have baseline metrics to show the value improvement, whether in increased revenue or decreased cost, take the time to get them together before trying to make your pitch. It needs to be in terms the business can understand and is used to dealing with.

SEP Fields and Service Consumption

A comment from Steve Jones on the Yahoo SOA group a while back really struck a chord with me as a great observation, and a conversation yesterday reminded me that I wanted to blog about it (Steve did so as well, in this blog entry). In discussing the difference between a service and an application, Steve pointed out that the integration-centric view within application development tends to create a “series of ‘not my responsibility’ handoffs.” I think this is very true. Application development is frequently very producer-centric. That is, the focus is on the application itself, not on the users of the application, whether it be a person or another system. Where these dependencies occur, the parties involved can frequently be more concerned about their own pieces, and not about the experiences of those using them. When something goes wrong, as Steve points out, it can frequently be a series of handoffs between groups that are all saying “it’s not my problem.”

I was reminded of this in a conversation yesterday when the person I was speaking with made a reference to the SEP field from “The Hitchhiker’s Guide to the Galaxy.” An SEP field is a mechanism for making something invisible. From everything2.com, I found this explanation:

A SEP field is used to hide something in plain sight. This phenomenon works on the principle that the human brain will filter out impossible objects and situations so as to preserve the sanity of the owner of the brain. When an impossible object or situation is encountered, the brain decides that it is somebody else’s problem (SEP) and promptly deletes it from its perception of reality.

This is an area I’m pretty passionate about, and why I think I’ve enjoyed both usability and SOA. Both of these practices are inherently about consumption, not production. While clearly we need to produce applications and services, the focus needs to be on producing something that is consumable. If someone has a problem with it, don’t make it invisible by saying, “well, that’s your problem,” because it’s not. Without consumers, you have no service. As a service provider, you must be concerned with how your service is being consumed and be doing everything in your power to ensure successful consumption and a positive experience.

Organizing for SOA

On the Yahoo SOA group, there’s been a long conversation going on regarding (among other things) whether or not processes operate at a higher level of abstraction than services, which inevitably leads to a BPM-first or SOA-first debate. For the record, I don’t think that a process-centric approach necessarily leads to success. I made the point that I’ve seen first-hand where a process-based viewpoint did nothing more than turn the silos 90 degrees. That is, where we previously had some capability locked away inside an application and only of value to that application, we now have the same capability locked away in a process, and only of value to that process. Both situations can be problematic. A service-centric approach, however, where we focus on a business capability and build outward focusing on consumption, seems to represent that “middle-out” approach.

Anyway, to get to the point of the post, a comment that both Anne Thomas Manes of the Burton Group and I made is that we’ve seen very few (in my case, none) organizations that are structured around this notion of capabilities. Rob Eamon, a frequent commenter on this blog, replied to one of Anne’s messages (where she suggested that IT organize around capabilities) with this:

What does this really look like? Does the business organization line up in any way with the capabilities? What is the interaction between those responsible for business and those responsible for IT? Does business group A accept that “capability group X” has responsibility for business groups A, B, C, M, and N? So before capability X can be extended or changed, coordination is needed with all of them? Or is there a proliferation of different interfaces for X? …To paraphrase an old, old Byte magazine humor piece (http://home.tiac.net/~cri/1998/elehunt.html), we have little information about hunting the elephants but lots and lots about packing the jeep.

I thought Rob’s message was great, and one that we should all think about. SOA isn’t a panacea for all things wrong in IT, and even more importantly, if it isn’t broken, don’t fix it. That being said, there’s also a lot of “well, we’ve been doing it this way for 20 years and nothing has fallen apart yet” mentality out there, and the right answer certainly lies somewhere in the middle.

Getting back to Rob’s message, he presents a scenario where 5 business groups all need a common capability. If this is the case, the question is: how is that being handled today, and is it a problem? If all 5 groups have their own implementation of that capability, is that an issue or not? If the current organization handles that dependency fine, and the current path is in alignment with the long-term strategy, why change anything? If it’s not in alignment with the long-term strategy, then you have justification to change. The real disaster is when we don’t even know that those common capabilities exist. Then, you don’t even know whether you have a problem or not, and by the time you figure it out, you now lack the time to get it fixed. This is the situation where I think most organizations are. They’ve read the press and think SOA has potential. They know that there is room for improvement in the IT/Business relationship. Beyond that point, it gets really fuzzy. An integration-centric approach to SOA has some legs, but that really lives in the IT space and produces incremental gains, at best. Any other approach really requires some analysis of the business and the information technology that supports it to determine whether there’s value to be obtained or not. Unless someone has that view, it’s hard to say whether enterprise SOA will provide significant value or not. I’d go so far as to say that it’s difficult to say whether anything of a strategic nature will provide significant value or not.

So, the question remains: how do you organize for SOA? Clearly, there’s no one answer. As many, many people have said, it has to start with the business and the things it’s trying to do. As Rob suggested in his message, simply changing the structure of IT may not be enough. If the business side hasn’t recognized that there are benefits to leveraging shared services, then anything IT does along those business capabilities may not help. Sure, there are some things that can be done strictly within IT, but those have more to do with the business of IT than the true business. Any change in organization has to make sense for the business objectives, however. Take sales and marketing as an example. There are plenty of organizations that have shared sales and marketing, and plenty of organizations that have separate sales and marketing organizations along some other dimension. I think the same thing will likely hold true for organizing around SOA. Where the business needs dictate shared business capabilities, you adjust the organization, both business and IT. Where the business needs don’t, efforts to push SOA may run into resistance. If the business lacks the necessary domain knowledge to know where SOA can fit and where it may not, then it really doesn’t matter what structure you pick; it’s going to be a struggle.

Setting SOA Expectations

It’s been a while since I’ve blogged (a really nasty sinus infection contributed to that), so this comic seemed appropriate.

At a Loss for Words

Anyway, I, like many other people, did a double-take after I read this blog post from Anne Thomas Manes of the Burton Group. In it, Anne states:

It has become clear to me that SOA is not working in most organizations. … I’ve talked to many companies that have implemented stunningly beautiful SOA infrastructures … deploying the best technology the industry has to offer … And yet these SOA initiatives invariably stall out. … They have yet to demonstrate how all this infrastructure yields any business value. More to the point, the techies have not been able to explain to the business units why they should adopt a better attitude about sharing and collaboration. … Thus far I have interviewed only one company that I would classify as a SOA success story.

I’ve always believed that the changes associated with SOA adoption were a long-term effort, at least 5 years or so, but it was very surprising to see Anne find only one success story, although further comments indicated she had only talked to 7 companies. 1 out of 7 certainly feels right to me. Things became even more interesting, though, with some comments on the post which indicated that Anne’s definition of success was, “Has the initiative delivered any of the benefits specified as the goals of the initiative?” She indicated that every company said that their goals were cost reduction and increased agility. My question is: how did they measure this?

Another recent post that weighs into this was Mike Kavis’ post on EA, SOA, and change. Mike quotes Kotter’s 8 steps for transformation, which include creating a vision and communicating that vision. Taking this back to Anne’s comments, if a company’s SOA goal is increased agility, my first question is: how do you measure your agility today? Then, where do you want it to be at some point in the future (creating the vision)? If you can’t quantify what agility is, how can you ever claim success? Questions about success then become completely subjective, rendering surveys that go across organizations somewhat meaningless. While cost reduction may appear to be more easily quantified, I’d argue that many organizations aren’t currently collecting metrics that can give an accurate picture of development costs other than at a very coarse-grained level. It would be very difficult to attribute any cost reduction (or lack thereof) to SOA or anything else that changed in the development process.

This brings me to the heart of this post: setting expectations for your SOA efforts and measuring the success of your SOA journey is not an easy thing to do. Broad, qualitative goals like “increased agility” and “reduced costs” may be very difficult to attribute directly to an SOA effort and may also fail to address the real challenge with SOA, which is culture change. If culture change is your goal, then you need to work to describe the before and after states, as well as some tactical steps to get there. In a comment I made on one of my own posts from quite some time ago, I spoke of three questions that all projects should be asked:

  1. What services does your solution use / expose?
  2. What events does your solution consume / publish?
  3. What business process(es) does your solution support?

The point of those three questions was that I felt many projects today probably couldn’t answer them. That behavior needs to change. While these questions are very tactical, they are also easily digestible by an organization. The first step of change was simply to get people thinking about these things. The future state certainly had solutions leveraging reusable services, but I didn’t expect projects to do so out of the gate. I did expect projects to be able to tell me what services and events they were exposing and publishing, though.
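One way to make those answers a routine artifact rather than an afterthought is to have each project fill in a simple manifest and keep it alongside the code. This is just a sketch; the field names and example entries are hypothetical, and the same information could live in a registry or wiki instead.

```python
# Hypothetical sketch: a per-project manifest capturing the three questions.

PROJECT_MANIFEST = {
    "project": "claims-intake-redesign",
    "services": {
        "exposes":  ["ClaimSubmission", "ClaimStatusLookup"],
        "consumes": ["CustomerProfile", "DocumentStorage"],
    },
    "events": {
        "publishes": ["ClaimReceived", "ClaimRejected"],
        "consumes":  ["PolicyUpdated"],
    },
    "business_processes": ["Claims Intake", "Claims Adjudication"],
}

def unanswered_questions(manifest: dict) -> list:
    """Flag any of the three questions a project has left unanswered."""
    problems = []
    if not (manifest["services"]["exposes"] or manifest["services"]["consumes"]):
        problems.append("What services does your solution use / expose?")
    if not (manifest["events"]["publishes"] or manifest["events"]["consumes"]):
        problems.append("What events does your solution consume / publish?")
    if not manifest["business_processes"]:
        problems.append("What business process(es) does your solution support?")
    return problems

print(unanswered_questions(PROJECT_MANIFEST) or "All three questions answered.")
```

Collected across projects, these manifests are the raw material for the service portfolio discussed next.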

This type of process can then lead to one of the success factors that Anne called out in her comments which was the creation of a portfolio of services. Rather than starting out with a goal of “cost reduction” or “increased agility”, start out with a goal of “create a service portfolio.” This sets the organization up for an appropriate milestone on the journey rather than exclusively focusing on the end state. Without interim, achievable milestones, that end state will simply remain as an ever-elusive pot of gold at the end of the rainbow.

