Tagged
I was wondering how long it would take before I got tagged. If you haven’t seen this going around, you don’t read enough blogs. So, here are five things that you might not know about me:
- I was in the New York Times on June 10, 1996 as part of an article on using the web for wedding planning. While my wife and I did not use the web to plan our wedding, we did put up a whole bunch of pictures and other information about our wedding on my web site. Believe it or not, it’s still up, complete with a circa-1996 background, a cheesy web award, and a last-updated date of Dec. 17, 1996! 10 years, wow!
- I was quoted in an article on Pekingese in Dog Fancy magazine a few years ago thanks to my web site on pekes. Besides putting wedding pictures up, this was my first foray into putting something informative on the web. I’m not a breeder; I just had one as a pet for nearly 14 years. Unfortunately, not much has happened on that site in the past 3+ years, either…
- I’ve sung in front of an audience of over 1000 people. That was as part of a 100-person chorus, though. I’ve also sung God Bless America with a chorus at Busch Stadium before a St. Louis Cardinals game. As for solo performances, I’m a regular psalmist every Sunday at 11 AM. I would love to sing the National Anthem solo at a Cardinals game one day.
- I was a VIP at the NASA Cassini/Huygens launch. Unfortunately, it didn’t go up the first night, and since my wife and I were on vacation at Disneyworld and had to drive the hour or so to the Kennedy Space Center at midnight for the 4AM launch, we didn’t go back two days later when it did go up. My wife went to Spacecamp in Huntsville when she was growing up and had dreamed of being an astronaut, so she wasn’t too happy about missing it. My parents now live in Florida, so perhaps we’ll be able to get down for some other launch in the future.
- My brother is an excellent artist. He was an effects artist with Disney until they closed the Orlando animation studio, and prior to that, he did a lot of work for Fasa, creator of Shadowrun and Battletech, in addition to some freelance Magic cards. I don’t play any of those games, but his artwork is awesome.
And now, I tag the following people: Tom Rose, Steve Anthony, Phil Ayres, Pankaj Arora, and Miko Matsumura.
Service Taxonomy
Something that you may run into in your quest to adopt SOA is the service taxonomy. Taxonomy, according to the Oxford English Dictionary on my Mac, is the branch of science concerned with classification. Aside: Did you know that you can hit control-open apple-d with the mouse over any word and MacOS X will pop up a definition? I never knew MacOS X had this feature, but now I find it incredibly useful. A service taxonomy, therefore, is a classification of services. Back when I was first getting into the SOA space, the chief architect at the company I was with told me that we needed a service taxonomy, so I set about trying to create one.
What I found in that endeavor was that taxonomies are difficult. It isn’t difficult to come up with a taxonomy; it is difficult to come up with one that adds appropriate value. There are an infinite number of ways of classifying things. Take a deck of cards, for example. I could sort them by suit, by color, by face value, or by number (and probably many other ways, as well). Which is the right way to do so? It depends on what your objectives are. To jump into a classification process without any objectives for the effort is futile. It may keep someone busy in Visio or PowerPoint for a while, but what value does it add to the SOA effort?
Two areas where I’ve found that taxonomies can add value to the SOA effort are service ownership decisions and service technology decisions. I won’t say that these are the only areas (and I encourage my readers to leave comments with their ideas), but I think they are a good starting point for organizations. Ownership decisions come into play during solution architecture, which many of you may refer to as application architecture. (I try to avoid using the term application anymore, which I’ll go into in another post some time.) In solution architecture, the architect defines the major subcomponents of the solution that will undergo detailed design. A key activity in this is service identification. Services represent boundaries, and one place a boundary typically exists is an ownership domain. Therefore, to assist the solution architect in identifying the right services, a taxonomy that can assist in ownership decisions makes a lot of sense. Be careful with this one, however, because ownership is often defined in terms of the existing organization. Anyone who has worked in a large enterprise knows that organizations are fluid, not static. Your taxonomy should be classifying the service, not the organization. The most common classification that clarifies ownership is that of business services versus infrastructure services. As a sample, you could define the two categories this way:
A business service is one whose description contains terms that are specific to the business. That is, if you were to take the service to another vertical, it may or may not make sense.
An infrastructure service is one whose description is not specific to the business at hand. It is likely to be equally applicable across many or even all verticals.
Now, this classification certainly isn’t rock solid, since a business service like “Hire Employee” fits the latter definition, but in general, it attempts to differentiate things like security services from ordering services. Your organization may have an infrastructure area that handles infrastructure services, while the business services are likely handled by the development group. The classification itself doesn’t mention any specific organization, but it can be easily aligned to the organizational model.
The second area I mentioned was service technology decisions. While it’s certainly possible to write all of your services in C#, Java, or any other programming language, odds are you have multiple technology platforms available for hosting services. It is the job of the enterprise architect to ensure that these platforms are used appropriately. Therefore, some taxonomy is needed that allows the solution architect to define services that have architectural significance. If your taxonomy doesn’t differentiate when a BPEL engine should be used from when Java should be used, then it probably is not capturing the architecturally significant characteristics. At a minimum, the taxonomy should make it clear where process orchestration engines should be used (think visual environments that are largely schema-driven), where general-purpose programming languages like Java or C# should be used, and where database technologies (stored procedures, views, etc.) should be used. These things are architecturally significant. A good taxonomy will remove the guesswork for the solution architect and result in solutions that adhere to the technology architecture.
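To make this concrete, here’s a minimal sketch (in Java, with purely illustrative names of my own invention) of how those two classification axes might be captured. It isn’t a prescribed model, just one way to record the decisions a taxonomy should drive:

```java
// Illustrative only: a minimal model of the two classification axes
// discussed above. All names here are hypothetical.
public class ServiceTaxonomyExample {

    // Axis 1: ownership -- does the description use business-specific terms?
    enum OwnershipClass { BUSINESS, INFRASTRUCTURE }

    // Axis 2: the architecturally significant technology platform.
    enum TechnologyClass { ORCHESTRATION_ENGINE, GENERAL_PURPOSE_LANGUAGE, DATABASE }

    record ServiceClassification(String serviceName,
                                 OwnershipClass ownership,
                                 TechnologyClass technology) {}

    public static void main(String[] args) {
        // "Hire Employee" reads as business-owned, but its largely
        // schema-driven process logic maps to an orchestration engine.
        ServiceClassification hireEmployee = new ServiceClassification(
                "Hire Employee", OwnershipClass.BUSINESS,
                TechnologyClass.ORCHESTRATION_ENGINE);

        // An authentication service is infrastructure, implemented in Java.
        ServiceClassification authenticate = new ServiceClassification(
                "Authenticate User", OwnershipClass.INFRASTRUCTURE,
                TechnologyClass.GENERAL_PURPOSE_LANGUAGE);

        System.out.println(hireEmployee);
        System.out.println(authenticate);
    }
}
```

The point is that each axis answers a concrete question (who owns it, what platform hosts it), which is exactly the guesswork a good taxonomy removes.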
Policy-Driven Infrastructure
One term that you may have run across in some of the marketing efforts of vendors touting SOA governance solutions is policy. They may even go so far as to claim that governance is about policy enforcement. Technically, policing is probably more about policy enforcement, if you think of governance in the municipal sense. Your local government collects taxes, makes laws (policy), and the police enforce those laws. But that’s not the point of this entry, because policy is actually a very important concept to understand.
Practitioners in the security space will frequently use the terms Policy Enforcement Point or PEP, Policy Management Point or PMP (also referred to as Policy Administration Point), Policy Information Point or PIP, and sometimes Policy Decision Point or PDP. Simply put, a policy enforcement point is the place where the determination is made that a policy needs to be enforced. A policy management point is where policies are defined or administered. A policy information point is the source of information necessary to enforce the policy, potentially including the policy itself, and a policy decision point is where the actual decision regarding the policy is made. Take your basic web access control product. A request shows up at your web server, the policy enforcement point. That request is intercepted, credentials extracted, and a call is made to the policy decision point asking, “Can the user named todd access the URL http://www.biske.com/blog?” The policy decision point accesses a policy information point (e.g. LDAP) to determine the allowed roles for that URL and the roles assigned to user todd, and returns a yes or a no. If an administrator has used the policy management point to assign allowed roles to the URL and assigned roles to user todd, everything works.
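As a rough illustration of that flow, here’s a minimal sketch in Java. Every interface and data value here is a hypothetical stand-in, not any vendor’s actual API:

```java
// A minimal sketch of the web access-control flow described above.
// All interfaces here are hypothetical, not any product's real API.
import java.util.Map;
import java.util.Set;

public class AccessControlSketch {

    // Policy Information Point: where role assignments live (e.g. LDAP).
    interface PolicyInformationPoint {
        Set<String> rolesForUser(String user);
        Set<String> allowedRolesForUrl(String url);
    }

    // Policy Decision Point: makes the yes/no decision.
    static class PolicyDecisionPoint {
        private final PolicyInformationPoint pip;
        PolicyDecisionPoint(PolicyInformationPoint pip) { this.pip = pip; }

        boolean isAllowed(String user, String url) {
            Set<String> userRoles = pip.rolesForUser(user);
            return pip.allowedRolesForUrl(url).stream().anyMatch(userRoles::contains);
        }
    }

    public static void main(String[] args) {
        // In-memory stand-in for a directory that an administrator
        // populated through the Policy Management Point.
        PolicyInformationPoint pip = new PolicyInformationPoint() {
            public Set<String> rolesForUser(String user) {
                return Map.of("todd", Set.of("blogger")).getOrDefault(user, Set.of());
            }
            public Set<String> allowedRolesForUrl(String url) {
                return Map.of("http://www.biske.com/blog", Set.of("blogger"))
                          .getOrDefault(url, Set.of());
            }
        };

        // The web server acts as the Policy Enforcement Point: it
        // intercepts the request and delegates the decision to the PDP.
        PolicyDecisionPoint pdp = new PolicyDecisionPoint(pip);
        System.out.println(pdp.isAllowed("todd", "http://www.biske.com/blog")); // true
    }
}
```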
So what does this have to do with anything outside of the security domain? It comes back to policy. A policy is something that is declared. Container-managed code was about externalizing concerns and declaring them in a configuration file. In other words, establishing policy. If you think about infrastructure products today, all of them have a configuration file or some management interface that allows them to be configured. Take your typical application server. It probably has some XML file that contains entries specifying what .ear or .war files will be executed. Hmm… this sounds like a policy information file. Clearly, the application server itself is an enforcement point. The policy is “load and execute this .ear file,” and it does it. How do these policies get established? Well, while most enterprises probably manipulate the configuration file directly, that application server certainly has a management console (policy management point) that will manipulate the configuration file for you. Is this starting to become clear?
Now let’s look at the domain of the service intermediary, whether it’s an ESB, an XML appliance, a WSM gateway or agent, or anything else. What does the intermediary do? It enforces security policy. It enforces routing policy. It enforces transformation policy. It enforces logging policy. It enforces availability policy. Your intermediary of choice certainly has a management console, and it likely creates a config file somewhere where all these policies are stored. So, what’s the problem? Well, the problem is that security policy is governed by information security. Availability and routing policy are governed by operations. Transformation policy may be the domain of the development team. Logging policy may be tied to compliance. Yet we have one management console. To match the way policies are set, policy management must be externalized from policy enforcement. Furthermore, policies must be independent of policy enforcement points, because those may be heterogeneous. One security policy may be enforced by a Java EE server, another by an XML gateway, and a third by a .NET server. So, the most flexible scenario is one where policy management, policy enforcement, and policy information are all separated. Well, what do we have going on in the SOA space? AmberPoint recently announced agentless management, claiming that they will manage your enforcement points, whatever they may be. HP/Mercury/Systinet and webMethods/Infravio have both touted the role of the registry/repository as a policy information point. We’re getting there, folks. I’m not implying that a multi-vendor solution is necessary, either. Certainly, SOA Software’s Closed Loop Infrastructure is showing this same separation of concerns. The world I envision has policy management points that are tailored toward the type of policy being established and the groups that do so, a distributed network of enforcement points capable of handling multiple policy domains at once (i.e., avoiding one enforcement point for security, another for routing, another for transformations, etc.), and information points capable of storing many types of policies.
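Here’s one way that separation might be modeled, as a hedged sketch with hypothetical interfaces of my own. The only point it makes is that policies are declared data, independent of any particular enforcement point:

```java
// Hypothetical sketch: policy management, enforcement, and information
// as separate concerns. Nothing here reflects a specific product.
import java.util.ArrayList;
import java.util.List;

public class SeparatedPolicyConcerns {

    // A policy is declared data, independent of any enforcement point.
    record Policy(String domain, String rule) {}

    // Policy Information Point: stores policies across domains.
    static class PolicyStore {
        private final List<Policy> policies = new ArrayList<>();
        void add(Policy p) { policies.add(p); }          // written by a PMP
        List<Policy> all() { return List.copyOf(policies); }
    }

    // Policy Enforcement Point: a Java EE server, an XML gateway, or a
    // .NET server would each pull the same declared policies.
    interface EnforcementPoint {
        void apply(Policy policy);
    }

    public static void main(String[] args) {
        PolicyStore store = new PolicyStore();
        // Different management consoles, tailored to different groups,
        // all write to the shared information point:
        store.add(new Policy("security", "require mutual TLS"));      // infosec
        store.add(new Policy("routing", "send v2 traffic to nodeB")); // operations
        store.add(new Policy("logging", "retain messages 7 years"));  // compliance

        EnforcementPoint gateway = p ->
                System.out.println("enforcing " + p.domain() + ": " + p.rule());
        store.all().forEach(gateway::apply);
    }
}
```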
My recommendation to all of you, when you’re off talking to these vendors, especially the ones offering enforcement points, is to ask them about policy. Do they even understand that they are a policy enforcement point? Can policies span multiple services? Can they be managed by an external entity? Can they be stored externally? If they don’t understand this, then be prepared for a conceptual mismatch in trying to use them.
Disclaimer: I personally have no relationship with any of the vendors listed, although I’ve certainly had conversations with them at conferences, follow their blogs, etc. I used them here only to illustrate the concepts, not as any kind of endorsement.
The Art of Strategic Planning
Back in October, I attended the Shared Insights EA Conference. I had never been to an Enterprise Architecture conference before, so I didn’t quite know what to expect. I’d been to various SOA conferences, which obviously have a lot to do with Enterprise Architecture, but there is a difference. David Linthicum has blogged in the past about the differences between the EA camp and the SOA camp; if you’re interested in that, you can read some of his entries, as it’s not the point of this discussion. The real point of this entry is strategic planning, which is important to both EA and SOA. The question is: why is it so hard?
When I started looking into the EA discipline, I initially found Architecture and Governance magazine. I was surprised to see advertisements for Troux until I found out that the editor-in-chief works for them. The real reason for my surprise was that I had only known Troux as a vendor of a configuration management database. My first thought was: what does a CMDB have to do with EA? Then, at the Shared Insights EA Conference, nearly all of the vendors there were also CMDB vendors. Granted, they had tailored products toward EA, but I’d venture a bet that they all started out in the CMDB space. So what gives?
I’m of the opinion that the EA discipline grew out of a desire to get some arms around the spaghetti bowl of technology that constituted the typical large enterprise. Note that the word “strategic” didn’t appear anywhere in that sentence. Hence the connection to CMDB. Another name for what was going on is current state analysis, with the CMDB being a place where the results could be captured. That is a key first step in a strategic planning effort, however, and strategic planning is where EA should be headed.
As an analogy, let’s look at MapQuest. If my memory of the early days of the web serves me correctly, it started out as a pure mapping solution and only later added driving directions. I could get a map of my current location, which would be the equivalent of populating my CMDB. I could also get a map of some other location that I’d like to visit. This would represent some future state. Whether or not that could be represented in the CMDB is debatable, but also beside the point. If all I had were these maps, however, I might still be no better off. If the map of my town and the map of the place I want to go don’t overlap, I have, at best, only a general idea of how to get from point A to point B. What is needed is point A, point B, and the directions in between. Any practicing enterprise architect should understand that current state documentation, future state definition, and a roadmap to get between the two are what they are responsible for. Architects that are stuck in current state definition may help quantify the current problem, but not much more. Architects that are focused only on the future state wind up in an ivory tower, and architects who start throwing together an action plan without a future state in mind, well, who knows where that will take you, but it’s probably safe to say it is short-sighted.
So what are the challenges associated with this? Well, here are two. First off, people who excel at big-picture thinking are often poor with details. Likewise, people who are very detail-oriented are often poor at seeing the big picture. The tasks I described include both big-picture thinking for the future state definition and execution details for the action plan. The architect who can define the big picture may struggle with the finer details of the executable plan. That’s one. The other problem is that a strategic plan has many dimensions to it, as well as dependencies. Here’s an overly simplistic example, but it gets the point across. Let’s suppose that the engineering or infrastructure guys decide they’re going to stop buying infrastructure as projects demand it, and instead develop a strategic plan for the technology infrastructure. Great, right? Well, it’s not that easy. While the organization may be able to define the current state of the technology, what happens when they try to define the future state? The infrastructure guys can probably get pretty far based on industry trends, but as soon as an attempt is made to turn that into an executable plan and assign some priorities, the applications that will utilize the infrastructure come into play. At this point, pressure is placed on the application organization to think strategically. Again, there’s a certain amount of work that can be done based on industry trends and imitating others, but to be really strategic, it has to be driven by the business strategy. So now the problem leaves the realm of IT and enters the world of the business. What’s the current state of the business, the future state of the business, and the plan to get there? That’s certainly not a question that a technology architect alone can address.
At this point, the companies most mature in EA are including a business architecture practice, whether it has that name or not. The strategic planning process is not an IT thing or a business thing; it’s an enterprise thing. While organizations may be able to achieve some limited success by simply following the paths of those before them as reported by the analysts, sooner or later they’ll find that the barriers must be broken down to be successful.
SOA in a box, going quick!
Thanks to Brenda Michelson for pointing this out and giving me a good laugh this morning.
EDA Again
Joe McKendrick brought up the subject of event-driven architecture again on his SOA in Action blog. I’ve previously commented on this subject, most recently here, here, and here. This is a subject that I like to comment on, because I feel that appropriate use of events is a key to the agility that we strive to achieve. It’s very simple. Services execute actions. Processes orchestrate actions. Events trigger the transitions within the process. You need all of them. A solid messaging infrastructure is critical to event processing, so it’s very surprising to me that the MOM/EAI/ESB vendors aren’t all over this. Tibco has their complex event processing product, but they really haven’t pushed the event message very hard. What about the registry/repository vendors? Lots of talk about services, but not very much about events. The fact is, just as an enterprise can’t leverage SOA without services, it can’t leverage EDA without events. The two are complementary, and I encourage the EAs out there to start doing the work to identify the events that drive the business.
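As a minimal sketch of that “services execute, processes orchestrate, events trigger” sentence in code (illustrative names only, with a toy in-memory bus standing in for the real messaging infrastructure):

```java
// Minimal sketch: a service executes an action, publishes an event, and
// the event triggers the next process transition. Names are illustrative.
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

public class EventDrivenSketch {

    record Event(String name, Map<String, String> payload) {}

    // A tiny in-memory event bus standing in for the messaging
    // infrastructure (MOM/ESB) discussed above.
    static class EventBus {
        private final Map<String, List<Consumer<Event>>> subscribers = new ConcurrentHashMap<>();
        void subscribe(String eventName, Consumer<Event> handler) {
            subscribers.computeIfAbsent(eventName, k -> new CopyOnWriteArrayList<>()).add(handler);
        }
        void publish(Event e) {
            subscribers.getOrDefault(e.name(), List.of()).forEach(h -> h.accept(e));
        }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();

        // The process transition is triggered by the event, not by a
        // direct call from the order service.
        bus.subscribe("OrderPlaced", e ->
                System.out.println("process: start fulfillment for order "
                        + e.payload().get("orderId")));

        // The service executes its action and announces what happened.
        bus.publish(new Event("OrderPlaced", Map.of("orderId", "42")));
    }
}
```

Note that the publisher never references the subscriber; that decoupling is where the agility comes from.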
SOA and Virtualization
As I’ve been doing some traveling lately, I’ve been trying to catch up with some podcasts. Dana Gardner of ZDNet and Interarbor Solutions has a new podcast entitled “BriefingsDirect SOA Insights Edition.” In this episode, Dana, Steve Garone, and Jon Collins discussed virtualization and SOA. It’s funny how every buzzword going on in IT these days is somehow being attached to SOA. I was a bit skeptical about the discussion when I saw the title, and in reality, the discussion was primarily on virtualization and not as much on SOA.
Something that I didn’t feel came across clearly was that SOA creates a need for more efficient resource allocation. Interestingly, a lot of the drive toward virtualization is based upon a need to get a handle on resource allocation. So perhaps there is a connection between the two, after all. Why is resource allocation important? Well, let’s compare Web Services to Web Applications. The typical web application is deployed on an application server, perhaps in a cluster or on a redundant server. It may or may not share that server with other applications; if it does, the applications may compete for threads, or each application may have its own pool. The application server has some memory allocated to it, etc. The app gets deployed and then no one touches it. Unless this application is facing the internet, it’s unlikely that the loads for which it was initially configured will change dramatically. The line of business may add a few employees here or there, but that’s certainly not going to create enough additional load to crash the system.
Now let’s talk about Web Services. They, too, are deployed on an application server, potentially with other services, potentially with their own threads, some amount of memory, etc. Unlike the Web Application, it’s entirely possible to have dramatic changes in load from when the Web Service is first deployed. As a new consumer comes on board, the load on the service can very easily increase by tens of thousands of requests per day or more. Furthermore, the usage patterns may vary widely. One consumer may use the service every day; another consumer may use it once a month, but hammer it that day. All this poses a challenge for the operational staff to ensure the right amount of resources is available at the right time. Virtualization can allow this to happen. BEA just announced their WebLogic Server Virtual Edition, and their VP of WebLogic products, Guy Churchward, was quoted on ZDNet stating, “the setup will allow companies to create new instances of Java applications to meet spikes in demand in a few seconds, compared with 45 minutes, as is the case now.”
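As a small, purely hypothetical illustration of why per-consumer visibility matters for that kind of resource allocation, here’s a sketch of a per-consumer request meter; the names and structure are mine, not any product’s:

```java
// Illustrative only: a per-consumer request counter of the kind an
// operations team might use to see which consumers drive load spikes.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class ConsumerLoadMeter {
    private final Map<String, LongAdder> requestsByConsumer = new ConcurrentHashMap<>();

    // Called by the service (or an intermediary) on every request.
    void recordRequest(String consumerId) {
        requestsByConsumer.computeIfAbsent(consumerId, k -> new LongAdder()).increment();
    }

    // A capacity decision could key off this: the once-a-month consumer
    // that "hammers it that day" shows up as a sudden spike for one key.
    long requestsFor(String consumerId) {
        LongAdder a = requestsByConsumer.get(consumerId);
        return a == null ? 0 : a.sum();
    }

    public static void main(String[] args) {
        ConsumerLoadMeter meter = new ConsumerLoadMeter();
        meter.recordRequest("daily-consumer");
        for (int i = 0; i < 10_000; i++) meter.recordRequest("month-end-batch");
        System.out.println(meter.requestsFor("month-end-batch")); // 10000
    }
}
```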
Some final thoughts on this: a good friend and former colleague of mine once described what we were doing as the virtual mainframe. In a recent conversation with a client, I also brought up the venerable mainframe. Does your enterprise currently have a highly complicated batch processing window? Have you ever researched what goes into the management of that process when something goes awry? A wily mainframe operator can do quite a bit to make sure that everything still manages to get done within the processing window. Now move to the world of SOA, with increased interdependencies between systems and real time processing. If we don’t give the operational staff the tools they need to efficiently manage those resources, we’ll be in an even bigger mess. Virtualization is one tool in that chest.
Tying together SOA, BPM / Workflow, and Web 2.0
I was on my way home from the airport listening to Dana Gardner’s SOA Insights podcast from 11/15, which brought up the topic of Web 2.0 and SOA. Before I get started, I wanted to thank Dana for putting this together. It’s great to have another podcast out there on SOA. I’ve always enjoyed the panel format (it was a blast having the opportunity to be on one this past year), and Dana’s got a great group of panelists including Steve Garone, Joe McKendrick, and Neil Macehiter, plus the occasional guest.
Anyway, as they were discussing the Oracle OpenWorld Show, the conversation turned to Web 2.0 and SOA. Joe compared Web 2.0 and SOA to the recent “I’m a Mac” commercials from Apple, saying that “they’re trying to understand each other.” Steve went on to point out that he thinks that one is going to support the other. When I think about Web 2.0, I think about the user facing component of our IT systems. With that assumption, Web 2.0 and SOA are complementary, not competitive.
I’d like to take the conversation away from Web 2.0, however. I’d like to take a step back and look at a larger picture that tries to tie a number of these concepts together. I can’t say it ties everything together; I’m not going to discuss BAM or MDM, as this post would get way too large, but they fit in as well. Let’s start with SOA. SOA is all about services. For the purpose of this discussion, let’s view services as the functional activities performed by our IT systems. (I realize SOA can be applied more broadly than this.) Many enterprises are relying on Web Service technologies for these services, and Web Services are intended for system-to-system interactions. I can use a BPEL tool to externalize some automated processes, but it’s still all system to system. So how do humans come into the mix? Clearly, humans are involved. So now we bring in a BPM / workflow tool. If you’ve worked with workflow tools, you’ll know that an integral part of them is task management. A user gets assigned a task, it pops up via some notification mechanism, and they go and do what needs to be done. Today, it probably involves firing up some application, which is built on top of those services we mentioned earlier. So we get a picture like this:
Simple, right? The real issue with this picture is the application/service component. Odds are that application does a lot more than just this particular task. In fact, there’s probably a whole bunch of things the user must go through to get to the right part of the application. How do I get contextual information out of the task and into the application, if that contextual information is even there? Would you rather get a task that says, “Check your email” or “Read the email from your boss, John Q. Hardnose, he’s in a bad mood today.”
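As a minimal sketch (names and structure purely illustrative, including the URL), a task that carries that kind of context might look like this:

```java
// A sketch of a task that carries its own context, per the argument
// above: "Read the email from your boss" instead of "Check your email".
// The structure and URL are hypothetical.
import java.util.Map;

public class ContextualTaskSketch {

    record Task(String assignee,
                String description,
                Map<String, String> context,  // e.g. sender, messageId
                String uiLink) {}             // lightweight UI to render in place

    public static void main(String[] args) {
        // Instead of "check your email", the task carries enough context
        // for a task-management portal to render the right view directly.
        Task task = new Task(
                "todd",
                "Read the email from your boss, John Q. Hardnose",
                Map.of("sender", "John Q. Hardnose", "messageId", "1234"),
                "https://portal.example.com/mail/1234");  // hypothetical URL
        System.out.println(task.description() + " -> " + task.uiLink());
    }
}
```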
Where systems need to go in the future is toward actionable, task-based interfaces. A great example of task-based interfaces is the Dashboard in Apple’s MacOS X. It consists of widgets that, for the most part, have a very limited purpose. They are intended to be lightweight. For example, there’s an address lookup. It’s far more convenient for me to go there to look up an address than to have to launch AddressBook. This task-based approach hit home for me when I was reading Keith Harrison-Broninski’s book, “Human Interactions: The Heart And Soul Of Business Process Management.” Business processes would be more efficient if tasks contained appropriate context, so that a lightweight user interface could quickly be provided that allows the user to perform that action. If the tasks contained a link to an HTML page, that HTML could immediately be rendered within the task management system, allowing the user to do what they need to do. How could that be done? Well, if the task management system is built on portal technology, we now have a vehicle for shared context, incorporation of Web 2.0 collaboration technologies, AJAX capabilities, and the HTML rendering engine we need. We wind up with this:
When I think about this, I think of huge potential gains in efficiency. I also hope that I’ll see it in my lifetime! The hard part will be determining how this can be built out incrementally. Even SaaS has a role in this, since the traditional application providers are part of the problem. They simply don’t have the means or the architecture to take in appropriate context and quickly launch the application to the right user interface to allow tasks to happen efficiently. SaaS has the potential to do this, but there’s still a long, long way to go. I, for one, am excited to be along for the ride.
SOA Anthropologist
My colleague Ed Vazquez (Ed, you need to start blogging) and I came up with a new role associated with SOA adoption: the SOA anthropologist. There is no shortage of pundits out there who feel that the most difficult aspect of SOA is the cultural change. I tend to agree. If you look at the average enterprise, they probably have 20 or more years of history of building stovepipe applications of various sorts. This doesn’t bode well for SOA. From top to bottom, there is baggage that must be changed, ranging from project definition and funding models, to the project manager who is dependent on services and teams outside of their area of control, to the analyst and developer who must define and produce solutions that have the potential to be combined in a flexible and efficient manner in the future. It’s like taking a rugby team, putting some helmets and shoulder pads on them and saying, “go play football!” Some things look vaguely familiar, but in reality, they’ve got their work cut out for them.
A field that is slowly gaining some prominence, at least in very large enterprises, is corporate anthropology. Corporate anthropology is about understanding behavior and seeing whether technology (or anything else) will be applicable. If ever there was a role that could be useful in SOA adoption, it’s this. Someone can come in, dig around the enterprise, and attempt to classify the culture of the organization. Once the culture is properly understood, the SOA adoption effort can be properly targeted to manage the inevitable culture change that must occur. Unfortunately, I have no formal training whatsoever in anthropology, but in my role as a consultant, I absolutely understand how critical it is to identify the way an organization works in order to be successful in an engagement. Back in my college days, I took a number of psychology courses (it wasn’t my major), and I’m better off for it.
So, anyone out there actually utilizing anthropology in their SOA efforts? If so, I’d love to hear about it.
Christmas gift for the cubicle developer
Thanks to Phil Windley for pointing this one out. I’d like one of these USB missile launchers for Christmas as well.
Update: There are actually two different guns available according to Froogle; here’s an image of the second one.
You can find the first one at ThinkGeek for $39.99 and the second one at Vavolo.com (other stores were out of stock).
SOA and Outsourcing
Joe McKendrick recently posted some comments on the impact of SOA to outsourcing. He admits that he’s been going back and forth on the question, and provides some insight from Ken Vollmer of Forrester and Sanjay Kalra on both sides.
My own opinion is that a well-constructed SOA allows an organization to make more appropriate outsourcing decisions. First, there are two definitions of outsourcing to deal with here. Joe’s article primarily focuses on outsourcing development efforts. Odds are, however, that the code still winds up being deployed inside the firewall of the enterprise. The other definition of outsourcing would include both the development and the execution and management of the system, more along the lines of SaaS.
SOA is all about boundaries. Services are placed at the boundaries in our systems where it makes sense to be independent from other components. A problem with IT environments today is that those boundaries are poorly defined. There is redundancy in the processing systems; there is tight coupling between user interfaces, business logic, and databases; etc. An environment like this makes it difficult to be successful with outsourcing. If the definition of the work to be performed is vague, it becomes difficult to ensure that the work is properly done. With SOA, the boundaries get defined properly. At this point, the organization can choose:
- To implement the logic behind that boundary on its own, knowing that it may be needed for a competitive advantage or may require intimate knowledge of the enterprise;
- To have a third party implement the logic;
- To purchase a product that provides the logic.
With that boundary appropriately defined, opportunities for outsourcing are more easily identified and have a higher chance for success. It may not mean any more or any less outsourcing, but it should mean a higher rate of success.
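A tiny sketch of the idea, under the assumption that the boundary is expressed as a service contract (the names are hypothetical): once the consumer depends only on the interface, the implementation behind it can be built in house, outsourced, or bought without the consumer changing.

```java
// Illustrative only: a well-defined boundary as a service contract.
public class BoundaryExample {

    // The boundary: all a consumer ever depends on.
    interface CreditCheckService {
        boolean approve(String customerId, long amountCents);
    }

    // One possible implementation; an outsourced or purchased one would
    // implement the same interface and slot in behind the boundary.
    static class InHouseCreditCheck implements CreditCheckService {
        public boolean approve(String customerId, long amountCents) {
            return amountCents < 500_000; // placeholder business rule
        }
    }

    public static void main(String[] args) {
        CreditCheckService service = new InHouseCreditCheck();
        System.out.println(service.approve("cust-1", 100_000)); // true
    }
}
```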
Portfolio Management and SOA
The title of this blog is “outside the box.” The reason I chose that title is because I think it captures the change in thinking that must occur to be successful with SOA. It’s my opinion that most enterprises are primarily doing “user-facing” projects. That is, the entire project is rooted in the delivery of some component that interacts with an end user. Behind that user interface, there’s a large amount of coding going on, but ultimately, it’s all about what that end user sees. This poses constraints on the project in terms of what can be done. Presuming a standard N-tier architecture, the first thing behind that UI from a technical standpoint is the business tier, or better stated from an SOA perspective, the business service tier. In order to have these services provide enterprise value, rather than project value, the project team must think outside the box. Unfortunately, that means looking for requirements outside of the constraints that have been imposed, a project manager’s worst nightmare.
So how should this problem be addressed? The problem lies at the very beginning: the project definition process. Projects need to have scope. That scope establishes constraints on the project team. Attempts to go outside those constraints can put the project delivery at risk. In order to do things right, the initial constraints need to be set properly. The first step would be to never tie a project delivering a user interface to a project delivering services. Interestingly, this shows how far the loose coupling must go! Not only should the service consumer (the UI) be loosely coupled from the service provider in a technical sense; it should also be loosely coupled in a project management sense! How does this happen? Well, this bubbles up to the project definition process, which is typically called IT portfolio management. Portfolio management is part of the standard definition of IT Governance, and it’s all about picking the projects to do and funding them appropriately. Therefore, the IT governance committee that handles portfolio management needs to be educated in SOA. The Enterprise Architects must deliver a Service Roadmap to that group so they know what services will be needed in the future and can make appropriate decisions to ensure that projects to create those services happen at the right time. Those projects should not be inappropriately bundled with service consumers, which leads to a conflict of interest and, ultimately, a service that may not meet the needs of the enterprise.
Thrown under the enterprise service bus
I’ve recently been involved in a number of discussions around the role of an ESB. My “Converging in the middle” post provided a number of my thoughts on the subject, but it focused exclusively on the things that belong in the middle. The things in the middle aren’t used to build services; they are used to connect the service consumer and the service provider. There will always be a need for something in the middle, whether it’s simply the physical network or more intelligent intermediaries that handle load balancing, version management, content-based routing, and security. These things are externalized from the business logic of the consumer and the provider, and enforced elsewhere. It could be an intelligent network, a la a network appliance, or it could be a container-managed endpoint, whether it’s .NET, Java EE, a WSM agent, an ESB agent, or anything else. There are people who rail on the notion of the intelligent network, and there are people who are huge proponents of it. I think the common ground is the intelligent intermediary, whether done in the network or at the endpoints. Either way, the core concerns are externalized away from the business logic of the consumer and provider.
This brings me to the subject line of this entry. I came across this quote from David Clarke of CapeClear on Joe McKendrick’s SOA In Action blog:
“We consider ESB the principal container for business logic. This is next generation application server.”
What happened to separation of concerns? In the very same entry, Joe quoted Dave Chappell of Progress Software/Sonic saying:
ESB as a platform [is] “used to connect, mediate and control the interactions between a diverse set of applications that are exposed through the bus using service level interfaces.”
To this, I say you can’t have your cake and eat it too, yet this is the confusion currently being created. The IT teams are being thrown under the bus trying to figure out appropriate use of the technology. The problem is that the ESB products on the market can be used to both connect consumers and providers and build new services. Orchestration creates new services. It doesn’t connect a provider and a consumer. It does externalize the process, however, which is where the confusion begins. On the one hand, we’re externalizing policy enforcement and management of routing, security, etc. On the other hand, we’re externalizing process enforcement and management. Just because we’re externalizing things doesn’t mean it all belongs in one tool.
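To illustrate the distinction, here’s a minimal sketch (illustrative names only, not any ESB product’s API): mediation wraps an existing provider with externalized concerns, while orchestration composes existing services into a new one.

```java
// Sketch of the distinction above: mediation sits between a consumer
// and one provider without adding business logic; orchestration builds
// a NEW service out of existing ones. All names are illustrative.
import java.util.function.UnaryOperator;

public class MediationVsOrchestration {

    interface Service { String invoke(String request); }

    // Mediation: wraps an existing provider with externalized concerns
    // (logging here; security, routing, transformation in practice).
    static Service mediate(Service provider, UnaryOperator<String> transform) {
        return request -> {
            System.out.println("policy: logging request " + request);
            return provider.invoke(transform.apply(request));
        };
    }

    // Orchestration: service construction, not connecting a consumer
    // to a provider.
    static Service orchestrate(Service first, Service second) {
        return request -> second.invoke(first.invoke(request));
    }

    public static void main(String[] args) {
        Service pricing = req -> req + "|priced";
        Service tax = req -> req + "|taxed";

        Service mediatedPricing = mediate(pricing, String::trim);
        Service quote = orchestrate(pricing, tax); // a new service

        System.out.println(mediatedPricing.invoke(" order-1 "));
        System.out.println(quote.invoke("order-2"));
    }
}
```

Both externalize something, but only one of them creates something new, and that is exactly why bundling them in one tool creates confusion.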
Last point: service hosting and execution is clearly an important component of the overall SOA infrastructure. I think traditional application servers for Java EE or .NET are too flexible for the common problems many enterprises face. There definitely is a market for a higher level of abstraction that allows services to be created through a visual environment. This has had many names: EAI, BPM, and now ESB. These tools tend to be schema-driven, taking a data-in, data-out approach, allowing the manipulation of an XML document to be done very efficiently. Taking this viewpoint, the comment from David Clarke actually makes sense, if you only consider the orchestration/service-building capabilities of their product. Unfortunately, this value is getting lost in all the debate because it’s being packaged together with the mediation capabilities, which have struggled to gain mindshare since they are more operationally focused than developer focused. A product sold on those capabilities isn’t as attractive to a developer. Likewise, a product that emphasizes the service construction capabilities may get implemented without regard for how part of it can be used as a mediation framework, because the developers aren’t concerned about the externalization of those capabilities.
The only way this will change is if enterprise practitioners make clear the capabilities they need before they go talk to the vendors, rather than letting the vendors tell them what they need, leaving them to struggle to determine how to map it back to their organization. I hope this post helps all of you in that effort; feel free to contact me with questions.
SOA Funding
In his eBizQ SOA In Action blog, Joe McKendrick presents some comments from industry experts on the funding of SOA. He starts the conversation with the question:
Who pays for SOA, at least initially? Should IT pay? Should the business unit that originally commissioned the service bear the initial development costs? What motivation is there for a business unit that spent $500,000 to build a set of services to share it freely across the rest of the enterprise?
First, this isn’t an SOA problem. SOA may bring more attention to the problem because more things (theoretically) should be falling into the “shared” category instead of the domain-specific category. Reusable services are not the first technology to be shared, however. Certainly, your enterprise leverages shared networking and communication infrastructure. You likely share databases and application servers, although the scope of the sharing may be limited to a line of business or a particular data center. Your identity management infrastructure is likely shared across the enterprise, at least the underlying directories supporting it.
To Joe’s question about the business unit that originally commissioned the service: clearly, that service is going to be written one way or another. The risk is not that the service goes unfunded, but that the service is designed only for the requirements of the project that identified the initial need. Is this a problem with the funding model, or is it a problem with the project definition model? What if the project had been cleanly divided into the business-unit-facing components (user interface) under one project and each enterprise service under its own project? In this scenario, the service project can have requirements outside of that single consuming application and a schedule independent of, but synchronized with, that consuming application.
If this is done, the IT Governance process can look at the service project independently from the consuming application project. If the service project has immediate benefits to the enterprise, it should be funded out of enterprise dollars to which each business unit contributes (presuming the IT Governance process is conducive to this approach). If the service project doesn’t have immediate benefits to the enterprise, then it may need to be funded from the budgets of the units that are receiving benefits. It will likely require changes to support the needs of the rest of the enterprise in the future, but those changes would be funded by the business units requiring them. Yes, this may impact the existing consumers; anyone who thinks services can be deployed once and never touched will be sadly mistaken. You should plan for your services to change, and have a clearly communicated change management strategy with release schedules and prior version support.
While I certainly understand that funding can be a major concern in many organizations, I also think that IT needs to take it upon itself to do the right thing for the enterprise while working within the constraints that the project structure has created. If a project has identified a need for a service, that service should always be designed as an entity independent from the consuming application, even if that application is the only known consumer. Funding is not an issue here, because the service logic needs to be written regardless. An understanding of the business processes that created the need for the service is required to write the application in the first place, so the knowledge needed to make the service more viable for the enterprise should not require a tremendous amount of extra effort. While studies have shown that building reusable components can be more expensive, I would argue that a mantra of reuse mandates better coding practices in general, and is how the systems should have been designed all along. Many of our problems are not that we don’t have the service logic implemented in a decent manner somewhere, but rather that the service logic was never exposed to the rest of the enterprise.
Use or reuse?
Joe McKendrick referred to a recent BEA study that said that “a majority of the largest global organizations (61%) expect no more than 30 percent of their SOA services to be eventually reused or shared across business units.” He goes on to state that 84% of these same respondents consider service reuse one of their most critical metrics for SOA success. So what gives?
Neither of these statements surprises me. A case study I read on Credit Suisse and their SOA efforts (using CORBA) reported that the average number of consumers per service was 1.5. This means that there’s a whole bunch of services with only one consumer, and probably a select few with many, many consumers. So 30% reuse sounds normal. As for the critical success metric, I think this is the case because reuse is one of the few quantifiable metrics. Just because it’s easy to capture doesn’t make it right, however.
First off, I don’t think anyone would dispute that there is redundancy in the technology systems of a large enterprise. Clearly, then, the possibility exists to eliminate that redundancy and reuse some services. Is that the right thing to do, however? The projects that created those systems are over and done with, costs sunk. There may be some maintenance costs associated with them, but if they were developed in house, those costs are likely tied to the underlying hardware and software licenses, not to the internally developed code. Take legacy systems, for example. You may be sending annual payments to your mainframe vendor of choice, but you may not be paying any COBOL developer to make modifications to the application. In this situation, it’s going to be all but impossible to justify the type of analysis required to identify shared service opportunities across the board. The opportunities have to be driven from the business, as a result of something the business wants to do, rather than IT generating work for itself. If the business needs to cut costs and has chosen to get rid of the mainframe costs in favor of smaller commodity servers, great. There’s your business-driven opportunity to do some re-architecting for shared services.
The second thing that’s missing is the whole notion of use. The problem with systems today is not that they aren’t performing the right functionality; it’s that the functionality is not exposed as a service so it can be used in a different manner. This is where BPM and what it brings to the table come in. A business driver is improved process efficiency. Analysis of that business process may result in the application being broken apart into pieces, orchestrated by your favorite BPM engine. If that application wasn’t built with SOA in mind, there’s going to be a larger cost involved in achieving that process improvement. If the application leveraged services, even if it was the only consumer of those services, guess what? The presentation layer is properly decoupled, and the orchestration engine can be inserted appropriately at a lesser cost. That lesser cost to achieve the desired business goal is the agility we’re all hoping for. In this scenario, we still only have one consumer of the service. Originally it was the application’s user interface. That may change to the orchestration engine, which receives events from the user interface. No reuse, just use.
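A minimal sketch of that last point (illustrative names only): the service contract stays put while its single consumer changes from the UI to an orchestration engine.

```java
// Illustrative sketch of "use, not reuse": the service still has one
// consumer, but because the presentation layer is decoupled, that
// consumer can change from the UI to an orchestration engine cheaply.
public class UseNotReuse {

    interface OrderService { String submit(String order); }

    public static void main(String[] args) {
        OrderService service = order -> "accepted:" + order;

        // Yesterday's single consumer: the application's user interface.
        String fromUi = service.submit("ui-order-1");

        // Tomorrow's single consumer: a BPM/orchestration engine reacting
        // to events from the user interface. Same service, no rework.
        String fromEngine = service.submit("process-step-order-2");

        System.out.println(fromUi + " / " + fromEngine);
    }
}
```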