Archive for the ‘SOA’ Category

The power of the feedback loop

I watched the latest video blog from Amanda Congdon, formerly of Rocketboom, now with ABC News. In this video, Amanda is at a dairy farm in Vermont. What makes this dairy farm unique is that it is entirely cow-powered. They harvest the by-products of the cows and extract the methane gas, which powers a turbine that generates enough electricity for the farm with plenty left over. The remnants of the by-products are separated into liquid by-products and solid by-products. The liquid by-products go to a lagoon; the solid by-products wind up becoming bedding for the cows. I found it very cool.

So here’s the analogy to SOA. You may find it a bit of a stretch, but everyone’s entitled to their opinion. Personally, I like to look for parallels in the real world to how our IT systems should behave. In the past world of dairy farming, I’m sure there was a time where the farmer was concerned with one thing: producing milk. Are there costs associated with it? Sure. Are there by-products? Absolutely. But in the end, the farmers really just cared about producing and selling milk. Now, the practice has progressed to where, rather than a linear system with cow feed at the beginning and milk at the end, it is a closed-loop environment where even the by-products are turned around and leveraged in the process. Where are we at with IT systems today? I’d argue that most enterprises are still in the linear mode of thinking. You could even argue that it goes beyond IT, and into the business thinking, but being an IT guy, I’ll limit my assumptions there. IT produces solutions, and then forgets about them unless a user complains or some alarm goes off. If an organization takes on SOA, but still operates with this mentality, the only thing that has changed is that they are producing services instead of applications.

If an IT organization (and even the business) wants to mature and continue to wisely invest its IT dollars, the thinking has to stop being linear and start focusing on continued improvement. When a service goes into production, monitoring needs to go beyond just whether or not the service is available. Metrics (by-products) must be extracted from the process, and incorporated back into the planning process to continually improve the performance and behavior of IT systems. While it may begin with more operational metrics such as response time, there’s no reason that it can’t begin to involve business metrics and business events. These business events and metrics are processed by our analytics engines (business intelligence), and the results are incorporated back into the IT systems themselves. Sometimes it may be a manual process where future improvements are justified through the analysis of usage metrics; other times it may be more automated, where resources are automatically provisioned according to external factors that have been shown to increase demand. In any case, the IT operating model needs to be a loop, rather than a line. This isn’t anything new, as the concept of continual business improvement has been around for a long time. The thing that’s new is that it needs to be in the mindset of all of IT, all the way down to the developer writing the next service. While I’m sure those cows in Vermont don’t know that their manure is being used to keep their living space nice and cozy, the IT worker does need to know that exposing metrics, whether IT-centric or business-centric, is a key to creating a feedback loop for continual improvement to the IT systems.
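To make the by-product idea concrete, here’s a minimal sketch of a service operation emitting both operational and business metrics as it does its work. All the names here (the collector, the quote operation) are hypothetical, not from any particular monitoring product:

```python
import time
from dataclasses import dataclass, field

@dataclass
class MetricsCollector:
    """A hypothetical metrics sink; a real shop would use a monitoring product."""
    samples: list = field(default_factory=list)

    def record(self, name, value, dimension="operational"):
        self.samples.append({"name": name, "value": value, "dimension": dimension})

def get_quote(symbol, metrics):
    """A service operation instrumented to emit its by-products (metrics)."""
    start = time.time()
    price = 42.0  # stand-in for the real business logic
    # Operational metric: how the service behaves.
    metrics.record("response_time_ms", (time.time() - start) * 1000)
    # Business metric: what the service is being used for.
    metrics.record("quote_requested", symbol, dimension="business")
    return price

metrics = MetricsCollector()
get_quote("ACME", metrics)
print([s["name"] for s in metrics.samples])  # ['response_time_ms', 'quote_requested']
```

The point isn’t the code itself, but that both dimensions flow into the same collection infrastructure, where the analytics engines can close the loop.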

Services for Managing the Network

Jeff Browning, Director of Product Management for F5, wrote an article entitled “Take a SON Approach for Agile SOA” available for your reading enjoyment at FTPOnline. The article would probably have been better titled “Network Management via Services for Agile SOA” but it still makes some great points, ones that I’ve commented on in the past.

He describes the old world (or perhaps it is today’s world) as set-and-forget.

Load balancers—the network technology most relevant to application architects and developers—were basically installed, set up for round-robin load balancing, and never touched again.

This is certainly an accurate portrayal. He goes on to discuss how there is a tremendous amount of context embedded within SOAP requests that a traditional HTTP load balancer can’t leverage. He points out the importance of this information when servers get overloaded, using an example of a change in interest rate and how it may impact services traffic at a financial services firm.

He then switches to a discussion about the ability to configure network devices. He correctly calls out that most devices provide some user-facing console or command-line interface. While a CLI can be scripted, he states:

These scripts usually work for static environments and events, but the approach conflicts with the agility and flexibility that SOA enables.

He ends with an example where he states:

The network device can monitor requests, look for errors, and invoke configuration changes to alter device setup, adding more standby servers to the pool or even redirecting new requests to another data center running additional instances of the Web service. Additionally, prioritized requests based on client request ID or other factors could be sent to an entirely separate pool of servers hosting the service in a manner optimized for high-demand scenarios.

Based on business priority, the number of server resources, the priority, and dynamic changes to the configuration, automated change can be done seamlessly, with the service and network working together through more network intelligence and control.

Interestingly, this is the exact scenario I used when describing SOA to infrastructure engineers and why they should be concerned about it. I did an informal survey of about 50 of these engineers at one company, each of whom managed a particular device or system in the infrastructure, and of those 50, only one product was known to have a web services interface for managing its configuration, and that was F5. I think I actually met Jeff when he stopped by for a briefing at this company. F5 is certainly not the only vendor to do this. I know that IBM DataPower devices expose all of the management capabilities available through their console as Web Services as well. It’s nice to finally see someone touting the importance of this capability, however. Let’s hope other vendors jump on the bandwagon and provide Web Services for managing their products. This means that IBM, HP, Microsoft, and Intel need to start making some progress on the converged management specification promised back in March of 2006.
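What would a web services management interface buy you in practice? Here’s a rough sketch of the kind of request an automation script could send to a device. The operation name and namespace are invented for illustration; a real device (F5, DataPower, etc.) publishes its own WSDL defining these:

```python
def build_add_member_request(pool: str, host: str, port: int) -> str:
    """Build a SOAP envelope asking a device to add a server to a pool.

    The AddPoolMember operation and urn:example namespace are hypothetical;
    they stand in for whatever the device's actual WSDL defines.
    """
    return (
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        "<soap:Body>"
        '<AddPoolMember xmlns="urn:example:device-mgmt">'
        f"<pool>{pool}</pool>"
        f"<member>{host}:{port}</member>"
        "</AddPoolMember>"
        "</soap:Body>"
        "</soap:Envelope>"
    )

envelope = build_add_member_request("web-pool", "10.0.0.12", 8080)
# An automation script would POST this to the device's management endpoint,
# which is exactly the programmatic, dynamic control the article argues for,
# as opposed to a human clicking through a console or a brittle CLI script.
```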

P.S. Jeff, if you read this, drop me a line. I think I met you during a presentation at a previous employer. I’d love to hear more on what F5 is up to these days.

Why not more vodcasts?

There’s no shortage of webcasts these days. What’s frustrating for me, however, is that accessing them is still too inconvenient. It never fails that the one webcast I’m interested in will be scheduled at the same time as a meeting I have. I could have one meeting in the entire week, and it will coincide with the webcast.

If you’ve followed my blog, you also know that I’m an Apple fan, which means I have my share of iPods, including a Video iPod. Why aren’t companies leveraging video podcasts or vodcasts for delivery to consumers? While many webcasts are available on demand after the original presentation, they still tend to be streaming video, meaning I need to be connected to watch them. It would seem that someone could find a simple way to extract the appropriate marketing data when subscribing to an RSS feed so that it could be captured when a particular vodcast was downloaded. Yes, iPods can only handle 640×480 now, but even that should be adequate. You can certainly send larger video if you want to view it in iTunes. Personally, I think trying to tailor slides so that they are visible on an iPod could be a good thing for presentations. It would avoid those text-heavy slides that the presenter simply reads verbatim. So how about it, vendors? Talk to your preferred webcast provider and start making things available in a downloadable H.264 format.

Briefings Direct Comments

Yet another good discussion from Dana Gardner’s group of independent analysts at Briefings Direct: SOA Edition. In this edition (podcast / transcript), the panelists (Dana Gardner, Steve Garone, Joe McKendrick, Jon Collins, and Tony Baer) discuss the year in review and their predictions for 2007.

The first comments that I liked were from Jon Collins of Macehiter Ward-Dutton. In commenting on the consolidation in the vendor space that occurred in 2006, Jon points out “the one thing that’s been lacking so far … is integration, which is to me the ultimate irony, because it’s exactly what SOA is about as a concept.” Well said. He goes on to state that they need to become service-oriented in terms of how they put together their products. What I find great about that is that this trend toward the superplatform will make vendors think about their management interfaces. After all, this is where the integration must begin. If the integration points don’t exist, or are only available via a user-facing console, the integration cost will be higher and take longer. Prior to the superplatform, this expense was incurred by the end consumer. Now, let the vendors take that charge. Let’s hope it’s done properly, however. All too often, they perform this integration, but then fail to expose these services to the end consumer. Back when I was working for a large enterprise, I had a vendor come in and tell me how their management console was built using their portal product. I said, “That’s great! Are the individual components exposed as JSR-168 compliant portlets so I can build my own custom console with portlets from other products that operations needs?” Their answer, “Well, no…” Let’s get it right this time!

I was surprised at Tony Baer’s comments on the convergence of the Registry/Repository space. The only explanation was his comment that “at runtime, you don’t want a repository with a lot of overhead.” I’ll agree with this, but the reason for the convergence was not runtime, it was design time. Perhaps this was more intuitive to me because of the research I had done into component reuse in the 2000-2001 timeframe, and I understood the importance of metadata to support it. I think it’s only now that enterprises are starting to figure out an appropriate role for the registry/repository in the run-time environment.

In the predictions portion of the podcast, Joe McKendrick made the boldest statements, stating that “SOA as a term has crested.” He went on to discuss event-driven architecture, consistent with one of his recent blog posts, correctly calling out the ability for it to integrate with business intelligence solutions. Personally, I think an enterprise that is able to leverage SOA, EDA, and their BI systems is in the upper echelons of SOA maturity. Joe was also the most conservative regarding the rate of SOA adoption, expecting only a 20% increase from 2006.

Tying the vendor consolidation comments with Joe’s comments on the cresting of SOA, I think it is true that the superplatform vendors need to position their solutions not as enterprise SOA infrastructure, but enterprise infrastructure. As Jon suggested, a service-oriented view helps break down the infrastructure needs into capabilities. These capabilities aren’t specific to SOA, however; they are enterprise needs. For SOA to be successful, it does need to get to the point where it is simply what we do, rather than being some special initiative within IT while the rest of IT continues to operate the same way it has in the past.

Predictions for 2007

Call it a wish list, or call it predictions, here are my thoughts on SOA in 2007. Largely, I think we’ll see lots of movement in the operational management space, as you’ll see in my comments.

  • Vendors: Surprise, surprise, the consolidation will continue. There aren’t too many niche SOA vendors left, and by the end of 2007 there will be fewer.
  • Operational Management: At least one of the major Enterprise Systems Management providers will actually come out with a decent marketing effort around SOA management along with a product to back it up. Systems Management is still the ugly stepchild of IT, far too often an afterthought, in my opinion. Systems Management technologies, however, are exactly what the doctor ordered to create a mature practice of Service Product Management (not Service Management in the ITIL sense, but a Product Manager for a Service). Without metrics, a product manager can’t hope to be successful. Without an appropriate metric collection and reporting infrastructure, the metrics won’t be available. Without this, there is no service lifecycle other than the service development lifecycle. The service goes into production and the development team forgets about it until a red light turns on. That’s not the way to practice Service Product Management.
  • Registry/Repository: At least one player in the CMDB space will enter into the Registry/Repository space, most likely through acquisition. There’s simply too much overlap for this not to occur.
  • CMDB: At least one of the super-platform vendors will see the overlap between CMDB and Registry/Repository and begin to incorporate it into their offerings, either through partnership or through an acquisition. Interestingly, I see this happening naturally with efforts around the adaptive enterprise and closed-loop governance. While virtualization and other technologies are allowing data centers to be consolidated, it’s still too static a process with too much manual involvement. Get the metadata into a repository, collect metrics from run-time, run some analysis, and adapt as needed. Incorporate it earlier in the development lifecycle, utilizing testing results against baseline comparisons from production systems, and we can shift into a predictive mode. This stuff is happening in research labs, but still hasn’t gained traction from a marketing perspective.
  • Events: The importance of events has received some recent press, but unfortunately got mixed up in the awful marketing message of SOA 2.0 earlier in the year. I think we’ll see renewed interest in event description, as I see it as a critical tool for enabling agility. Services provide the functional aspect, but there needs to be a library of triggers for those functions. Those triggers are events. Along with this, the players in the Registry/Repository space will begin to market their ability to provide an event library just as they can provide a service library.
  • Service Development and Hosting: Momentum’s CEO, Jeff Schneider, had predicted that the business process platform would become the accepted standard as the foundation for enterprise software in 2006. Well, that didn’t happen, but it’s still trending that way. Personally, I think we’ve lost sight of the importance of service containers, partially because of the confusion created by the ESB. While it’s too early to proclaim the application server dead in 2007, I think the pendulum will begin to swing away from flexibility (i.e. general purpose servers that host things written in Java or C#) and toward specialization. A specialized service container for orchestration can provide a development environment targeted for orchestration with an execution engine targeted for that purpose. The magic question is how many domains of engines will we need? Is it simply two: a general purpose application server for Java/C# code, and a model-driven process server for orchestration/integration, or are there more? If it is simply those two, I think we’ll definitely see less usage of the general purpose application server, and more usage of the model-driven process server.

So what are your predictions for the SOA space in 2007? From a wish list perspective, my wish for you is simple: success in your SOA endeavours! I know a great company that can help make sure that happens, as well!

Kudos to the Apple Store

Just some public kudos for my local Apple Store at West County Mall in Des Peres, MO. Unfortunately, my MacBook Pro was one of the many with a fan problem. The fan on the left hand side of the machine made a nice grinding noise when the CPU got hot that was steadily getting worse (mine wasn’t this bad but the sounds were very familiar). Since I’m now a traveling consultant, I really didn’t want to be without my machine for too long, and with the holidays here, I had a week where it could go into the shop. I dropped it off on Saturday the 23rd, at about 1:30pm, knowing they’d be in their Christmas rush. The guy at the genius bar ran some tests, checked the machine in for repair, and told me they’d probably be able to turn it around by Friday the 29th or Saturday the 30th, due to the rush. That was okay with me, as I was simply hoping to get it back in a week.

Fast forward about 1 hour as I’m on my way home from the rest of my errands and my wife calls me and says, “your laptop is ready.” One hour turnaround instead of one week? At a store whose focus is retail, not repair? Two days before Christmas? I’ll take it. Thank you Apple for a very pleasant customer experience. Now let’s just hope the problem doesn’t return.

Tagged

I was wondering how long it would take before I got tagged. If you haven’t seen this going around, you don’t read enough blogs. So, here are five things that you might not know about me:

  1. I was in the New York Times on June 10, 1996 as part of an article on using the web for wedding planning. While my wife and I did not use the web to plan our wedding, we did put up a whole bunch of pictures and other information about our wedding on my web site. Believe it or not, it’s still up, complete with circa-1996 background, cheesy web award and a last updated date of Dec. 17, 1996! 10 years, wow!
  2. I was quoted for an article on Pekingese in Dog Fancy magazine a few years ago thanks to my web site on pekes. Besides putting wedding pictures up, this was my first foray into putting something informative on the web. I’m not a breeder, I just had one as a pet for nearly 14 years. Unfortunately, not much has happened on that site in the past 3+ years, either…
  3. I’ve sung in front of an audience of over 1000 people. That was part of a 100 person chorus, though. I’ve also sung God Bless America with a chorus at Busch Stadium before a St. Louis Cardinals game. As for solo performances, I’m a regular psalmist every Sunday at 11 AM. I would love to sing the National Anthem solo at a Cardinals game one day.
  4. I was a VIP at the NASA Cassini/Huygens launch. Unfortunately, it didn’t go up the first night, and since my wife and I were on vacation at Disneyworld and had to drive the hour or so to the Kennedy Space Center at midnight for the 4AM launch, we didn’t go back two days later when it did go up. My wife went to Spacecamp in Huntsville when she was growing up and had dreamed of being an astronaut, so she wasn’t too happy about missing it. My parents now live in Florida, so perhaps we’ll be able to get down for some other launch in the future.
  5. My brother is an excellent artist. He was an effects artist with Disney until they closed the Orlando animation studio, and prior to that, he did a lot of work for Fasa, creator of Shadowrun and Battletech, in addition to some freelance Magic cards. I don’t play any of those games, but his artwork is awesome.

And now, I tag the following people: Tom Rose, Steve Anthony, Phil Ayres, Pankaj Arora, and Miko Matsumura.

Service Taxonomy

Something that you may run into in your conquest of adopting SOA is the Service Taxonomy. Taxonomy, according to the Oxford English dictionary on my Mac, is the branch of science concerned with classification. Aside: Did you know that you can hit control-open apple-d with the mouse over any word and MacOS X will pop up a definition? I never knew MacOS X had this feature but now find it incredibly useful. A service taxonomy, therefore, is a classification of services. Back when I was first getting into the SOA space, the chief architect at the company I was with told me that we needed a service taxonomy, so I set about trying to create one.

What I found in that endeavor was that taxonomies are difficult. It isn’t difficult to come up with a taxonomy, however; it is difficult to come up with a taxonomy that adds appropriate value. There are an infinite number of ways of classifying things. Take a deck of cards, for example. I could sort them by suit, by color, by face value, or by number (and probably many other ways, as well). Which is the right way to do so? It depends on what your objectives are. To jump into a classification process without any objectives for the classification effort is futile. It may keep someone busy in Visio or PowerPoint for a while, but what value does it add to the SOA effort?

Two areas that I’ve found where taxonomies can add value to the SOA effort are service ownership decisions and service technology decisions. I won’t say that these are the only areas (and I encourage my readers to leave comments with their ideas), but I think they are a good starting point for organizations. Let’s start with ownership, which comes into play during solution architecture. Solution architecture is what many of you may refer to as application architecture. I try to avoid using the term application anymore, which I’ll go into in another post some time. In solution architecture, the architect defines the major subcomponents of the solution that will undergo detailed design. A key activity of this is service identification. Services represent boundaries, and one place a boundary typically exists is an ownership domain. Therefore, to assist the solution architect in identifying the right services, a taxonomy that can assist in ownership decisions makes a lot of sense. Be careful on this one, however, because ownership is often defined in terms of the existing organization. Anyone who has worked in a large enterprise knows that organizations are fluid, not static. Your taxonomy should be classifying the service, not the organization. The most common classification that clarifies ownership is that of business services versus infrastructure services. As a sample, you could define the two categories this way:

A business service is one whose description contains terms that are specific to the business. That is, if you were to take the service to another vertical, it may or not make sense.

An infrastructure service is one whose description is not specific to the business at hand. It is likely to be equally applicable across many or even all verticals.

Now, this classification certainly isn’t rock solid, since a business service like “Hire Employee” fits the latter definition, but in general, it attempts to differentiate things like security services from ordering services. Your organization may have an infrastructure area that handles infrastructure services, while the business services are likely handled by the development group. The classification itself doesn’t mention any specific organization, but can be easily aligned to the organizational model.

The second area I mentioned was service technology decisions. While it’s certainly possible to write all of your services in C#, Java, or any other programming language, odds are you have multiple technology platforms available for hosting services. It is the job of the enterprise architect to ensure that these platforms are used appropriately. Therefore, some taxonomy is needed that can allow the solution architect to define services that have architectural significance. If your taxonomy doesn’t differentiate between when a BPEL engine should be used from when Java should be used, then it probably is not capturing the architecturally significant characteristics. At a minimum, the taxonomy should make it clear where process orchestration engines should be used (think visual environments that are largely schema-driven), general purpose programming languages like Java or C# should be used, and where database technologies (stored procedures, views, etc.) should be used. These things are architecturally significant. A good taxonomy will remove the guesswork from the solution architect and result in solutions that adhere to the technology architecture.
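One way to picture how such a taxonomy removes guesswork is as a simple decision rule mapping a service’s classification to a hosting platform. The category values and platform names below are illustrative only; your own taxonomy and technology architecture would define the real ones:

```python
def recommend_platform(service: dict) -> str:
    """Map architecturally significant classifications to a hosting platform.

    The 'style' values and platform names are hypothetical examples,
    not a standard; they stand in for whatever your taxonomy defines.
    """
    style = service.get("style")
    if style == "orchestration":
        return "model-driven process engine (e.g. BPEL)"
    if style == "data-access":
        return "database technologies (views, stored procedures)"
    return "general-purpose application server (Java/C#)"

print(recommend_platform({"name": "Process Order", "style": "orchestration"}))
```

If the solution architect can classify a service, the taxonomy answers the platform question; that is the test of whether it captures architecturally significant characteristics.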

Policy-Driven Infrastructure

One term that you may have run across in some of the marketing efforts of vendors touting SOA governance solutions is policy. They may even go so far as to claim that governance is about policy enforcement. Technically, policing is probably more about policy enforcement, if you think of governance in the municipal sense. Your local government collects taxes, makes laws (policy), and the police enforce those laws. But, that’s not the point of this entry, because policy is actually a very important concept to understand.

Practitioners in the security space will frequently use the terms Policy Enforcement Point or PEP, Policy Management Point or PMP (also referred to as Policy Administration Point), Policy Information Point or PIP, and sometimes Policy Decision Point or PDP. Simply put, a policy enforcement point is the place where the determination is made that a policy needs to be enforced. A policy management point is where policies are defined or administered. A policy information point is the source of information necessary to enforce the policy, potentially including the policy itself, and a policy decision point is where the actual decision regarding the policy is made. Take your basic web access control product. A request shows up at your web server, the policy enforcement point. That request is intercepted, credentials extracted, and a call is made to the policy decision point asking, “Can the user named todd access the URL http://www.biske.com/blog?” The policy decision point accesses a policy information point (e.g. LDAP) to determine the allowed roles for that URL and the roles assigned to user todd, and returns a yes or a no. If an administrator has used the policy management point to assign allowed roles to the URL and assigned roles to user todd, everything works.
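The web access control walkthrough above can be sketched in a few lines. This is a deliberately toy model of the four roles, not any real product’s API:

```python
class PolicyInformationPoint:
    """Holds the policy data (think LDAP): allowed roles per URL, roles per user."""
    def __init__(self):
        self.url_roles = {}
        self.user_roles = {}

class PolicyDecisionPoint:
    """Answers the question: may this user access this URL?"""
    def __init__(self, pip):
        self.pip = pip
    def decide(self, user, url):
        allowed = self.pip.url_roles.get(url, set())
        held = self.pip.user_roles.get(user, set())
        return bool(allowed & held)

class PolicyEnforcementPoint:
    """The web server: intercepts the request and defers to the PDP."""
    def __init__(self, pdp):
        self.pdp = pdp
    def handle(self, user, url):
        return "200 OK" if self.pdp.decide(user, url) else "403 Forbidden"

# An administrator acting through the policy management point would
# effectively perform these role assignments.
pip = PolicyInformationPoint()
pip.url_roles["http://www.biske.com/blog"] = {"reader"}
pip.user_roles["todd"] = {"reader"}

pep = PolicyEnforcementPoint(PolicyDecisionPoint(pip))
print(pep.handle("todd", "http://www.biske.com/blog"))       # 200 OK
print(pep.handle("anonymous", "http://www.biske.com/blog"))  # 403 Forbidden
```

Note how cleanly the concerns separate: the PEP knows nothing about roles, the PDP knows nothing about HTTP, and the PIP is just data. That separation is the whole argument of this post.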

So what does this have to do with anything outside of the security domain? It comes back to policy. A policy is something that is declared. Container-managed code was about externalizing concerns and declaring them in a configuration file. In other words, establishing policy. If you think about infrastructure today, virtually every component has a configuration file, or some management interface that allows it to be configured. Take your typical application server. It probably has some XML file that contains entries that specify what .ear or .war files will be executed. Hmm… this sounds like a policy information file. Clearly, the application server itself is an enforcement point. The policy is “load and execute this .ear file” and it does it. How do these policies get established? Well, while most enterprises probably manipulate the configuration file directly, that application server certainly has a management console (policy management point) that will manipulate the configuration file for you. Is this starting to become clear?
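The config-file-as-policy-information idea can be sketched directly. The XML schema below is invented for illustration; real application servers define their own deployment descriptors:

```python
import xml.etree.ElementTree as ET

# A declared policy: which modules the container should load.
# This schema is hypothetical, standing in for an app server's real config.
config = """
<server>
  <deploy module="orders.ear" enabled="true"/>
  <deploy module="billing.ear" enabled="false"/>
</server>
"""

# The container is the enforcement point: it reads the policy information
# (the config file) and enforces it by loading only the enabled modules.
root = ET.fromstring(config)
to_load = [d.get("module") for d in root.findall("deploy")
           if d.get("enabled") == "true"]
print(to_load)  # ['orders.ear']
```

A management console editing this file is playing the policy management point; the file itself is the policy information point; the container enforces.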

Now let’s look at the domain of the service intermediary, whether it’s an ESB, an XML appliance, a WSM gateway or agent, or anything else. What does the intermediary do? It enforces security policy. It enforces routing policy. It enforces transformation policy. It enforces logging policy. It enforces availability policy. Your intermediary of choice certainly has a management console, and it likely creates a config file somewhere where all these policies are stored. So, what’s the problem? Well, the problem is that security policy is governed by information security. Availability and routing policy by operations. Transformation policy may be the domain of the development team. Logging policy may be tied to compliance. Yet, we have one management console. To match the way policies are set, policy management must be externalized from policy enforcement. Furthermore, policies must be independent of policy enforcement points, because they may be heterogeneous. One security policy may be enforced by a Java EE server, another by an XML gateway, and a third by a .NET server. So, the most flexible scenario is one where policy management, policy enforcement, and policy information are all separated. Well, what do we have going on in the SOA space? AmberPoint recently announced agentless management, claiming that they will manage your enforcement points, whatever they may be. HP/Mercury/Systinet and webMethods/Infravio have both touted the role of the registry/repository as a policy information point. We’re getting there, folks. I’m not implying that a multi-vendor solution is necessary, either. Certainly, SOA Software’s Closed Loop Infrastructure is showing this same separation of concerns. The world I envision has policy management points that are tailored toward the type of policy being established and the groups that do so, a distributed network of enforcement points capable of handling multiple policy domains at once (i.e. avoid having one enforcement point for security, another for routing, another for transformations, etc.), and information points capable of storing many types of policies.

My recommendation to all of you, when you’re off talking to these vendors, especially the ones offering enforcement points, is to ask them about policy. Do they even understand that they are a policy enforcement point? Can policies span multiple services? Can they be managed by an external entity? Can they be stored externally? If they don’t understand this, then be prepared for a conceptual mismatch in trying to use them.

Disclaimer: I personally have no relationship with any of the vendors listed, although I’ve certainly had conversations with them at conferences, follow their blogs, etc. I used them here only to illustrate the concepts, not as any kind of endorsement.

The Art of Strategic Planning

Back in October, I attended the Shared Insights EA Conference. I had never been to an Enterprise Architecture conference before, so I didn’t quite know what to expect. I’d been to various SOA conferences, which obviously have a lot to do with Enterprise Architecture, but there is a difference. David Linthicum has blogged in the past about the differences between the EA camp and the SOA camp, so if you’re interested in that, you can read some of his entries as that’s not the point of this discussion. The real point of this entry is strategic planning, which is important to both EA and SOA. The question is why is it so hard?

When I started looking into the EA discipline, I initially found Architecture and Governance magazine. I was surprised to see advertisements for Troux until I found out that the editor-in-chief works for them. The real reason for my surprise was that I had only known Troux as a vendor of a configuration management database. My first thought was: what does CMDB have to do with EA? Then, at the Shared Insights EA Conference, nearly all of the vendors there were also CMDB vendors. Granted, they had tailored products toward EA, but I’d venture a bet that they all started out in the CMDB space, so what gives?

I’m of the opinion that the EA discipline grew out of a desire to get some arms around the spaghetti bowl of technology that constituted the typical large enterprise. Note that the word “strategic” didn’t appear anywhere in that sentence. Hence, there is a connection to CMDB. Another word for what was going on was current state analysis, with the CMDB being a place where the results could be captured. This is actually a key first step for entering into a strategic planning effort, however, which is where EA should be headed.

As an analogy, let’s look at MapQuest. If my memory of the early days of the web serves me correctly, it started out as a pure mapping solution, and only later added driving directions. I could get a map of my current location, which would be the equivalent of populating my CMDB. I could also get a map of some other location that I’d like to visit. This would represent some future state. Whether or not that could be represented in the CMDB is debatable, but also beside the point. If all I had was this map, however, I still may be no better off. If the map of my town and the place I want to go don’t overlap, I have, at best, only a general idea of how to get from point A to point B. What is needed is point A, point B, and the directions in between. Any practicing enterprise architect should understand that current state documentation, future state definition, and a roadmap to get between the two are what they are responsible for. Architects who are stuck in current state definition may help quantify the current problem, but not much more. Architects who focus only on the future state wind up in an ivory tower, and architects who start throwing together an action plan without a future state in mind, well, who knows where that will take you, but it’s probably safe to say it is short-sighted.

So what are the challenges associated with this? Well, here are two. First off, people who excel at big picture thinking are normally poor at details. Likewise, people who are very detail-oriented are very poor at seeing the big picture. The tasks I described include both big picture thinking for the future state definition, as well as execution details for the action plan. The architect that can define the big picture may struggle in the finer details of the executable plan. That’s one. The other problem is that a strategic plan has many dimensions to it, as well as dependencies. Here’s an overly simplistic example, but it gets the point across. Let’s suppose that the engineering or infrastructure guys decide they’re going to stop buying infrastructure as projects demand, and instead develop a strategic plan for the technology infrastructure. Great, right? Well, it’s not that easy. While the organization may be able to define the current state of the technology, what happens when they try to define the future state? The infrastructure guys can probably get pretty far based on industry trends, but as soon as an attempt is made to turn that into an executable plan and assign some priorities, the applications that will utilize the infrastructure come into play. At this point, pressure is now placed on the application organization to think strategically. Again, there’s a certain amount of work that can be done based on industry trends and imitating others, but to be really strategic, it has to be driven by the business strategy. So, now the problem leaves the realm of IT and enters the world of the business. What’s the current state of the business, the future state of the business, and the plan to get there? That’s certainly not a question that a technology architect may be able to address.

At this point, the most mature companies for EA are now including a business architecture practice, whether it has that name or not. The strategic planning process is not an IT thing or a business thing, it’s an enterprise thing. While organizations may be able to achieve some limited success by simply following the paths of those before them as reported by the analysts, sooner or later, they’ll find that the barriers must be broken down to be successful.

SOA in a box, going quick!

Thanks to Brenda Michelson for pointing this out and giving me a good laugh this morning.

EDA Again

Joe McKendrick brought up the subject of event-driven architecture again on his SOA in Action blog. I’ve previously commented on this subject, most recently here, here, and here. This is a subject that I like to comment on, because I feel that appropriate use of events is a key to the agility that we strive to achieve. It’s very simple. Services execute actions. Processes orchestrate actions. Events trigger the transitions within the process. You need all of them. A solid messaging infrastructure is critical to event processing, so it’s very surprising to me that the MOM/EAI/ESB vendors aren’t all over this. Tibco has their complex event processing product, but they really haven’t pushed the event message very hard. What about the registry/repository vendors? Lots of talk about services, but not very much about events. The fact is, just as an enterprise can’t leverage SOA without services, they can’t leverage EDA without events. The two are complementary, and I encourage the EAs out there to start doing the work to identify the events that drive the business.
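To make the services/processes/events breakdown above concrete, here is a minimal sketch in Python. All of the names (EventBus, OrderProcess, the event types) are hypothetical, and the in-memory publish/subscribe class is a toy stand-in for a real messaging infrastructure; the point is only to show services executing actions, a process orchestrating them, and events driving the process transitions.

```python
from collections import defaultdict

class EventBus:
    """Toy stand-in for a messaging infrastructure (illustrative only)."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload=None):
        for handler in self._subscribers[event_type]:
            handler(payload)

# Services: each executes one action.
def check_credit(order):
    print(f"credit checked for {order}")

def ship_order(order):
    print(f"shipped {order}")

class OrderProcess:
    """Process: orchestrates the services; events trigger its transitions."""
    def __init__(self, bus):
        self.state = "new"
        self.bus = bus
        bus.subscribe("order.received", self.on_received)
        bus.subscribe("credit.approved", self.on_approved)

    def on_received(self, order):
        self.state = "checking-credit"
        check_credit(order)
        self.bus.publish("credit.approved", order)  # simulating an async result

    def on_approved(self, order):
        self.state = "shipped"
        ship_order(order)

bus = EventBus()
process = OrderProcess(bus)
bus.publish("order.received", "order-42")
print(process.state)  # → shipped
```

Note that the process never calls the services of its own accord; every transition is a reaction to a published event, which is the property that makes the architecture event-driven rather than merely service-oriented.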

SOA and Virtualization

As I’ve been doing some traveling lately, I’ve been trying to catch up with some podcasts. Dana Gardner of ZDNet and Interarbor Solutions has a new podcast entitled “BriefingsDirect SOA Insights Edition.” In this episode, Dana, Steve Garone, and Jon Collins discussed virtualization and SOA. It’s funny how every buzzword going around IT these days is somehow being attached to SOA. I was a bit skeptical when I saw the title, and in reality, the discussion was primarily on virtualization, and not as much on SOA.

Something that I didn’t feel came across clearly was that SOA creates a need for more efficient resource allocation. Interestingly, a lot of the drive toward virtualization is based upon a need to get a handle on resource allocation. So, perhaps there is a connection between the two, after all. So why is resource allocation important? Well, let’s compare Web Services to Web Applications. The typical web application is deployed on an application server, perhaps in a cluster or on a redundant server. It may or may not share that server with other applications; if it does, the applications may compete for threads, or each application may have its own pool. The application server has some memory allocated to it, etc. The app gets deployed and then no one touches it. Unless this application is facing the internet, it’s unlikely that the loads for which it was initially configured will change dramatically. The line of business may add a few employees here or there, but that’s certainly not going to create enough additional load to crash the system.

Now let’s talk about Web Services. They, too, are deployed on an application server, potentially with other services, potentially with their own threads, some amount of memory, etc. Unlike the Web Application, it’s entirely possible to have dramatic changes in load from when the Web Service is first deployed. As a new consumer comes on board, the load on the service can very easily increase by tens of thousands of requests per day or more. Furthermore, the usage patterns may vary widely. One consumer may use the service every day, another consumer may use it once a month, but hammer it that day. All this poses a challenge for the operational staff to ensure the right amount of resources are available at the right time. The ease of virtualization can allow this to happen. BEA just announced their WebLogic Server Virtual Edition, and their VP of WebLogic products, Guy Churchward, was quoted on ZDNet stating, “the setup will allow companies to create new instances of Java applications to meet spikes in demand in a few seconds, compared with 45 minutes, as is the case now.”
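The kind of provisioning decision described above can be sketched with a naive threshold rule. This is purely illustrative and is not BEA's actual mechanism; the function name, capacity figure, and instance bounds are all hypothetical, chosen only to show how observed request load might translate into a number of virtual instances.

```python
import math

def desired_instances(requests_per_sec,
                      capacity_per_instance=100,   # hypothetical per-instance capacity
                      min_instances=2,             # keep a redundant floor
                      max_instances=20):           # cap to protect shared hardware
    """Naive sizing rule: enough instances to cover the load, within bounds."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))

print(desired_instances(50))    # quiet consumer: stays at the floor of 2
print(desired_instances(1500))  # month-end hammering: scales to 15
print(desired_instances(9999))  # runaway spike: capped at 20
```

A real environment would of course react to measured metrics over a window, with hysteresis to avoid thrashing, but even this toy rule shows why the decision has to be automated: no operator can re-run this calculation by hand every few seconds across dozens of services.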

Some final thoughts on this: a good friend and former colleague of mine once described what we were doing as the virtual mainframe. In a recent conversation with a client, I also brought up the venerable mainframe. Does your enterprise currently have a highly complicated batch processing window? Have you ever researched what goes into the management of that process when something goes awry? A wily mainframe operator can do quite a bit to make sure that everything still manages to get done within the processing window. Now move to the world of SOA, with increased interdependencies between systems and real time processing. If we don’t give the operational staff the tools they need to efficiently manage those resources, we’ll be in an even bigger mess. Virtualization is one tool in that chest.

Tying together SOA, BPM / Workflow, and Web 2.0

I was on my way home from the airport listening to Dana Gardner’s SOA Insights podcast from 11/15, which brought up the topic of Web 2.0 and SOA. Before I get started, I wanted to thank Dana for putting this together. It’s great to have another podcast out there on SOA. I’ve always enjoyed the panel format (it was a blast having the opportunity to be on one this past year), and Dana’s got a great group of panelists including Steve Garone, Joe McKendrick, and Neil Macehiter, plus the occasional guest.

Anyway, as they were discussing the Oracle OpenWorld show, the conversation turned to Web 2.0 and SOA. Joe compared Web 2.0 and SOA to the recent “I’m a Mac” commercials from Apple, saying that “they’re trying to understand each other.” Steve went on to point out that he thinks that one is going to support the other. When I think about Web 2.0, I think about the user-facing component of our IT systems. With that assumption, Web 2.0 and SOA are complementary, not competitive.

I’d like to take the conversation away from Web 2.0, however. I’d like to take a step back and look at a larger picture that tries to tie a number of these concepts together. I can’t say it ties everything together, as I’m not going to discuss BAM or MDM (this post would get way too large), but they fit in as well. Let’s start with SOA. SOA is all about services. For the purpose of this discussion, let’s view services as the functional activities performed by our IT systems. (I realize SOA can be applied more broadly than this.) Many enterprises are relying on Web Service technologies for these services, and Web Services are intended for system-to-system interactions. I can use a BPEL tool to externalize some automated processes, but it’s still all system to system. So, how do humans come into the mix? Clearly, humans are involved. So now we bring in a BPM / Workflow tool. If you’ve worked with the workflow tools, you’ll know that an integral part of them is task management. A user gets assigned a task, it pops up via some notification mechanism, and they go and do what needs to be done. Today, it probably involves firing up some application, which is built on top of those services we mentioned earlier. So we get a picture like this:

Simple, right? The real issue with this picture is the application/service component. Odds are that application does a lot more than just this particular task. In fact, there’s probably a whole bunch of things the user must go through to get to the right part of the application. How do I get contextual information out of the task and into the application, if that contextual information is even there? Would you rather get a task that says, “Check your email” or “Read the email from your boss, John Q. Hardnose, he’s in a bad mood today.”

Where the systems need to go in the future is in the form of actionable, task-based interfaces. A great example of task-based interfaces is the Dashboard in Apple’s Mac OS X. It consists of widgets that, for the most part, have a very limited purpose. They are intended to be lightweight. For example, there’s an address lookup. It’s far more convenient for me to go there to look up an address than to have to launch AddressBook. This task-based approach hit home for me when I was reading Keith Harrison-Broninski’s book, “Human Interactions: The Heart And Soul Of Business Process Management.” Business processes would run more efficiently if each task carried enough context that a lightweight user interface could be quickly provided for the user to perform that action. If the tasks contained a link to an HTML page, that HTML could immediately be rendered within the task management system, allowing the user to do what they need to do. How could that be done? Well, if the task management system is built on Portal technology, now we have a vehicle for shared context, incorporation of Web 2.0 collaboration technologies, AJAX capabilities, and the HTML rendering engine we need. We wind up with this:

When I think about this, I think of huge potential gains in efficiency. I also hope that I’ll see it in my lifetime! The hard part will be determining how this can be built out incrementally. Even SaaS has a role in this, since the traditional application providers are part of the problem. They simply don’t have a means or the architecture to take in appropriate context and quickly launch the application to the right user interface to allow tasks to happen efficiently. SaaS has the potential to do this, but there’s still a long, long way to go. I, for one, am excited to be along for the ride.

SOA Anthropologist

My colleague Ed Vazquez (Ed, you need to start blogging) and I came up with a new role associated with SOA adoption. It’s the SOA anthropologist. There is no shortage of pundits out there who feel that the most difficult aspect of SOA is the cultural change. I tend to agree. If you look at the average enterprise, they probably have 20 or more years of history of building stovepipe applications of various sorts. This doesn’t bode well for SOA. From top to bottom, there is baggage that must be changed, ranging from project definition and funding models, to the project manager who is dependent on services and teams outside of their area of control, to the analyst and developer who must define and produce solutions that have the potential to be combined in a flexible and efficient manner in the future. It’s like taking a rugby team, putting some helmets and shoulder pads on them and saying, “go play football!” Some things look vaguely familiar, but in reality, they’ve got their work cut out for them.

A field that is slowly gaining some prominence, at least in very large enterprises, is corporate anthropology. Corporate anthropology is about understanding behavior and seeing whether technology (or anything else) will be applicable. If ever there was a role that could be useful in SOA adoption, it’s this. Someone can come in, dig around the enterprise, and attempt to classify the culture of the organization. Once the culture is properly understood, now the SOA adoption effort can be properly targeted to manage the inevitable culture change that must occur. Unfortunately, I have no formal training whatsoever in anthropology, but in my role as a consultant, I absolutely understand the criticality of identifying the way an organization works in order to be successful in the engagement. Back in my college days, I took a number of psychology courses (it wasn’t my major), and I’m better off for it.

So, anyone out there actually utilizing anthropology in their SOA efforts? If so, I’d love to hear about it.

Disclaimer
This blog represents my own personal views, and not those of my employer or any third party. Any use of the material in articles, whitepapers, blogs, etc. must be attributed to me alone without any reference to my employer. Use of my employer's name is NOT authorized.