Importance of Identity

I’m currently working on a security document, and it brought to mind a topic that I’ve wondered about in the past. Why is all of the work around Web Services security user-centric? Services are supposed to represent system-to-system interactions. As a result, won’t most policies be based on system identifiers rather than user identifiers?

When outlining an SOA security solution, I think an important first step is to determine what “identity” is in the context of service security. Identity will certainly include user identity, but it should also include system identity. There may be other contextual information that needs to be carried along, such as the originating TCP/IP address or branch office location. Not all of this information may be used for access control, but it may be necessary for auditing or support purposes. Identity identification (doesn’t that sound great?) can be quite a challenging task, and it may only grow more difficult when trying to map it to a credential format, such as X.509, Kerberos, SAML, or something else.

The problem gets even more complicated when dealing with composite services. If policies are based on system identity, what system identity do you use on service requests? I think there will likely be scenarios where you want the original identity of the call chain passed through, as well as scenarios where policies are based upon the most recent consumer in the call chain.

If this wasn’t enough, you also have to consider how to represent identity on processes that are kicked off by system events. I’ve previously blogged a bit about events, but in a nutshell, I believe there is a fundamental difference between events and service requests. Events are purely information. Service requests represent an explicit request to have action taken; events do not. Events can trigger action, and often do, but in and of themselves, they’re just information. This poses a problem for identity. If a user performs some action that results in a business event, and some subscriber on that event performs some action as a result, what identity should be carried on that action? While the implementation may result in events and implied action, to the business side, the actions the end user took that kicked off the event may represent an explicit request for that action. In other scenarios, it might not. It is safe to say that identity should be carried on both service requests and events, allowing the flexibility to choose appropriately in particular scenarios.
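To make this concrete, here is a minimal, hypothetical sketch (the class and field names are my own assumptions, not any standard) of an identity context that could be carried on both service requests and events, supporting both “original identity” and “most recent consumer” policies in a call chain:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class IdentityContext:
    """Hypothetical identity context carried on both service requests and events."""
    user_id: Optional[str]          # end user who initiated the action, if any
    system_id: str                  # system identity of the immediate caller
    originating_address: str        # e.g. originating TCP/IP address
    location: Optional[str] = None  # e.g. branch office location
    call_chain: List[str] = field(default_factory=list)  # prior system identities

    def forward(self, next_system: str) -> "IdentityContext":
        """Propagate to the next hop: the new system becomes the immediate
        caller, while the chain preserves the original identity for policies
        that need it."""
        return IdentityContext(
            user_id=self.user_id,
            system_id=next_system,
            originating_address=self.originating_address,
            location=self.location,
            call_chain=self.call_chain + [self.system_id],
        )

ctx = IdentityContext("jdoe", "order-entry", "10.1.2.3", "Branch 42")
hop = ctx.forward("billing-service")
# A policy can use hop.system_id (most recent consumer in the chain)
# or hop.call_chain[0] (the original system), as the scenario requires.
```

Mapping a structure like this onto an actual credential format (X.509, Kerberos, SAML, etc.) is exactly the hard part described above.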

All of this should make it painfully clear why Security Architecture (which includes Identity Management in my book) is extremely important.

The importance of communication

My last two posts have actually generated a few comments. Any blogger appreciates comments because, above all else, they mean someone is reading! I should improve my own commenting habits on the blogs I follow, even if only to say that I agree. To that end, as a courtesy to those who have commented, I recommend that all of you visit the following:

The reason I do this is the importance of the communication that these individuals publish as their time allows. If you follow James McGovern, a common theme for him is to ask practicing architects to share their experiences. In reality, how much of what we practice could really be considered a competitive advantage? Personally, I think it would be the majority of our efforts, not the minority. We all get better through shared experiences. It’s when those experiences are not shared for the common good that mistakes are repeated again and again (of course, we’ll always have ignorance on top of that accounting for some amount of repeated mistakes; they’re not ALL due to lack of communication).

One blog I didn’t mention above, but would like to give special attention to here, is Joe McKendrick’s SOA in Action blog. In addition to his ZDNet blog, Joe’s SOA in Action blog on eBizQ is intended to focus on the practitioners of SOA, not the vendors marketing it. So much of IT communication in the public domain is dominated by the vendors, not by the practitioners. This isn’t a knock at the vendors; they have some very smart people and put out some useful information, but I tend to think that product selection isn’t what is holding companies back from successful SOA adoption. It’s like a big chess game. Each enterprise represents a pattern on the chess board. We must recognize the patterns in front of us and make appropriate decisions to move toward success. The factors involved in IT are far more complicated than any chess board. Wouldn’t it be great if we knew certain players in the enterprise could only move one space at a time, diagonally, or in an L? After all, governance is about generating desired behavior. If the roles and capabilities are not well communicated, desired behavior will be difficult to achieve.

As I mentioned in a previous post, I’m reading IT Governance. It would be great if Harvard Business School Press would do a book like this on Service Oriented Architecture, with case studies sprinkled throughout, attempting to articulate the patterns of both the successful and the unsuccessful companies. Unfortunately, I’m not aware of any such book (leave me a comment if you are). So, in its absence, your best bet is to follow the blogs of my fellow practicing architects and see what you can learn!

Management and Governance

Thanks to James McGovern, Brenda Michelson, and Sam Lowe for their comments on my last post on EA and SOA. Sam’s response, in particular, got my mind thinking a bit more. He stated:

One of my contacts likes to describe EA’s new role in the Enterprise as managing SOA-enabled (business) change.

The part that caught my attention was the word managing. What’s great about this is that it’s an active word. You can’t manage change by sitting in an ivory tower. Managing is about execution. EAs not only need to define the future state using their framework of choice, but they need to put the actions in place to actually get there. Creating a PowerPoint deck and communicating it to the organization is not execution, it’s communication. While communication is extremely important, it’s not going to yield execution. Execution involves planning. Ironically, this is something that I personally have struggled with, and I’m sure many other architects do as well. I am a big-picture thinker. The detail-oriented nature of a good project manager is foreign to me. I’ve told many a manager who assigned me as a technical lead or a project architect to assign the most detail-oriented project manager to keep me in check. It may drive me nuts, but that’s what it takes to be successful.

Moving on, how does governance fit into the mix? First off, governance is not management, although the recent use of the term in SOA product marketing has certainly confused the situation. James McGovern has blogged frequently that governance is about encouraging desirable behavior. The book IT Governance by Peter Weill and Jeanne Ross states that

IT governance: Specifying the decision rights and accountability framework to encourage desirable behavior in the use of IT.

One of the five decisions associated with IT governance stated by the book is IT Architecture, the domain of the Enterprise Architect. I think the combination of decision making, strategy setting, and execution is what the EA needs to be concerned with. An EA group solely focused on decision making without having a strategy to guide them is taking a bottom up approach that may achieve consistency, but is unlikely to achieve business alignment. An EA group solely focused on strategy becomes an ivory tower that simply gets ignored. An EA group focused too much on execution may get mired in project architecture activities, losing both consistency and direction.

Added note: I just saw Tom Rose’s post discussing EA and SOA in response to my post and others. It’s a good read, and adds a lot to what I said in this post.

SOA and EA…

David Linthicum recently posted his thoughts about the Shared Insights Enterprise Architectures Conference held at the Hotel del Coronado near San Diego. David states:

The fact is I heard little about SOA during the entire conference. Even the EA magazines and vendors did not mention SOA, and I’m not sure the attendees understand the synergies between the two disciplines. In fact, I think they are one and the same; SOA at its essence is “good enterprise architecture.”

I attended this conference, and I have to agree 100% with Dave. There was a great case study from Ford, a facilitated discussion in which I participated (this was a very good idea, I thought), and Dave’s talk (I was in another session at the time), and not much more. SOA was mentioned here and there, but I was quite surprised at the lack of discussion around it. This was the first EA-specific conference I’d been to, so I didn’t know what to expect. I came away feeling that the EA community is a bit too disconnected from the real world. The field is still dominated by the notion of frameworks: Zachman, TOGAF, etc. It reminds me of the early days of OO and the multitude of methodologies that were available. While these frameworks and methodologies were extremely powerful, no organization could ever adopt them completely due to the huge learning curve involved. Many organizations have resentment toward their EAs, seeing them as sitting up in an ivory tower somewhere, pontificating and handing standards down on the enterprise. As with anything of this nature, there’s a little bit of truth in it. Many developers don’t understand the importance of EA.

There’s a need to bring these two worlds together. I don’t think SOA can be successful at the enterprise level without a strong EA team. At the same time, if EAs are not driving SOA, and are instead focusing on the models within their chosen framework, that won’t help either. I think that SOA has appeal at multiple levels, from the developer to the business strategist and everywhere in between. An interesting thing about the Zachman framework is that it’s built around the concept that each consumer of the information in the framework needs their own view. I believe that the core concept of SOA, services, can be shared across these views, linking them together, whether business or technical. That linkage is what’s missing today, and it’s a shame that no speaker at the conference hammered this point home. The presentations were either EA or SOA, not both. Joe McKendrick makes similar points in his blog about the recent BPM and SOA divide. It’s time to stop dividing and fighting separate battles and realize we’re all on the same team.

Converging in the middle

Scott Mark posted some comments on his blog about ESBs versus smart routers. This is one of my favorite subjects, and I posted some extensive comments on Scott’s blog. I wanted to go a bit further, so I decided to write my own entry on the subject.

First off, full disclosure: my personal preference is toward appliances. That being said, I don’t consider myself an ESB basher, either. If there’s one takeaway you get from this, it should be that the selection of a product for intermediary capabilities is going to be different for every organization, based upon its culture and who does what. I spoke with a colleague at a conference last June who had both an appliance and a software-based broker, each performing distinct functions, largely due to organizational responsibilities. While either product could have provided all of the capabilities on its own, there were two different operational teams responsible for subsets of the capabilities.

Let’s start with the capabilities. This is the most important step, because as soon as you throw ESBs into the mix, the list of capabilities can become volatile. The right set of capabilities, I believe, are those that belong “in the middle.” The core principle guiding what belongs “in the middle” is that these are things developers shouldn’t have to be concerned about. Developers should be concerned with business logic. Developers should not have to code in security (in the authentication/authorization sense), routing, monitoring, or caching. These are things that should be externalized. J2EE and .NET try to do this, but they still make the policies part of the developer’s domain, either by bundling them into an archive file or by using annotations in source code. In a Web Services world, these capabilities can be completely externalized from the execution container. My list of capabilities includes:

  • Routing
  • Load Balancing
  • Failover
  • Monitoring and Metrics
  • Caching
  • Alerting
  • Traffic Optimization
  • Transport Mapping
  • Standards Mediation (e.g. WS-Security 1.0 to WS-Security 1.1)
  • Credential Mapping / Mediation
  • Authorization
  • Authentication
  • Encryption
  • Digital Signing
  • Firewall
  • Auditing
  • Content-Based behavior, including version management
  • Transformation (not Translation)

Note that one area commonly associated with ESBs these days is missing from my list: orchestration. I don’t view orchestration as an “in the middle” capability. Orchestration does represent an externalization of process, just as we are externalizing these capabilities; however, an orchestration engine is an endpoint. It initiates new requests in response to events or incoming service requests. Adapter-based integration à la EAI is also not on my list. Many of those cases fall into the category of translation, not transformation: there is business logic associated with translating between an incoming request and the one or more requests that need to be directed at the system being “adapted.” The grey areas where it’s tougher to draw a line include queueing via MOM and basic transformation capabilities.
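As an illustration of these “in the middle” capabilities (a deliberately simplified sketch; the policy names and the dict-based request model are my own invention, not any product’s API), an intermediary can be modeled as a pipeline of externalized policies applied in front of the service, keeping the service implementation itself focused on business logic:

```python
from typing import Callable, Dict, List

# A policy step takes a request and returns a (possibly modified) request,
# raising an exception to reject it. These steps are illustrative only.
Policy = Callable[[Dict], Dict]

def authenticate(request: Dict) -> Dict:
    """Security enforced outside the service implementation."""
    if "credentials" not in request:
        raise PermissionError("missing credentials")
    return request

def route(request: Dict) -> Dict:
    """Content-based behavior, e.g. routing by message version."""
    request["endpoint"] = "v2-backend" if request.get("version") == 2 else "v1-backend"
    return request

def transform(request: Dict) -> Dict:
    """Transformation externalized from the execution container."""
    request["body"] = request["body"].upper()
    return request

def intermediary(pipeline: List[Policy], request: Dict) -> Dict:
    """Apply each externalized policy in order before the request
    ever reaches the service endpoint."""
    for policy in pipeline:
        request = policy(request)
    return request

result = intermediary([authenticate, route, transform],
                      {"credentials": "token", "version": 2, "body": "hello"})
```

The point of the sketch is that none of these steps live in the service’s code; swapping the enforcement point (appliance, ESB, agent) shouldn’t require touching business logic.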

Now, if we take this list, you’ll see that it can be covered by products from three major areas: Networking and Security (normally appliances), Software Infrastructure (ESBs, Application Servers, EAI/BPM), and, to a lesser extent, Systems Management (most management systems employ an agent-based architecture, and most Web Services Management providers include gateways and agents that perform these capabilities). My own tracking and discussions with various vendors in this space yield a diagram like this:

As you can see, these three areas overlap. There’s a convergence of capabilities in the middle, but not complete overlap. Also, note that this is more about the target space of the product than the form factor. There are some surprises on the diagram. SOA Software’s Network Director (formerly Blue Titan) and perhaps IBM’s WebSphere XD are much more like network devices than software infrastructure, even though they are software-based.

So how do you choose one of these providers? First off, I believe all of the products that control the capabilities I listed should be policy-driven, utilizing distributed enforcement points with centralized management. The notion of policy-driven is extremely important, because we need to externalize policy management from policy enforcement. Why? If you look at those capabilities, who sets the policies associated with them? It likely should be multiple groups. Information Security may handle authorization and authentication policies, Compliance may handle auditing, Operations will handle routing, and the development team may specify transformations in support of versioning. Right now, the management tools are all tightly coupled to the enforcement points. Therefore, if you choose an ESB for all capabilities, your Information Security team may need to use Eclipse to set policies, and they may have visibility into other policy domains. This could be a recipe for disaster. You need to match the tools to the groups that will be using them. Software infrastructure tools will likely be targeted at developers. Network and Security tools will likely be targeted toward network and security operations, and may be in an appliance form factor. Systems Management will also be operations focused, but may have a stronger focus on monitoring and a weaker focus on active enforcement. It may involve the management of agents. Think about who in your organization will be using these tools and whether it fits their way of working. When evaluating products, see how far they’ve gone in externalizing policy. Can a policy be reused across multiple services via reference? Can policies be defined independent of a service pipeline? Is there clear separation of the policy domains? Do you have to be a developer to configure policies?
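As a sketch of what externalized, policy-driven management might look like (the registry structure and all names here are hypothetical, not any product’s model), policies would be defined once, owned by a policy domain, and attached to services by reference rather than embedded in each service’s pipeline:

```python
# Hypothetical externalized policy registry: each policy is defined once
# and owned by a policy domain (the group that sets it).
policies = {
    "standard-auth":  {"domain": "security",   "type": "authentication"},
    "pci-audit":      {"domain": "compliance", "type": "auditing"},
    "failover-route": {"domain": "operations", "type": "routing"},
}

# Services reference shared policies instead of embedding their own copies,
# so one policy change propagates to every service that references it.
services = {
    "order-service":   ["standard-auth", "pci-audit", "failover-route"],
    "billing-service": ["standard-auth", "pci-audit"],
}

def policies_editable_by(domain: str):
    """Role-specific management: a team sees and edits only its own
    policy domain, even though all policies may be enforced at the
    same enforcement points."""
    return [name for name, p in policies.items() if p["domain"] == domain]

# The Information Security team manages "standard-auth" without any
# visibility into the compliance or operations policy domains.
```

This is exactly the separation the evaluation questions above are probing for: reuse by reference, policies defined independently of a service pipeline, and clear separation of policy domains.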

In short, I think if you nail down the capabilities, eliminating the grey area in integration and transformation, and have a clear idea of who in the organization will use the system, the product space should be much easier to navigate, for now. This convergence will continue, as smaller vendors get gobbled up and registry/repository becomes a more integrated part of the solution as a policy repository. It will begin to converge with the Configuration Management database space as I’ve previously discussed. Network devices continue to converge with MPLS and the addition of VoIP capabilities. If you have an intermediary today, continue to watch the space every 6 months to a year to see what’s happening and know when it might make sense to converge.

SOA and EA

Greetings from the Shared Insights EA Conference. I just finished attending a case study given by Eric Carsten of Ford Motor Company. It was an excellent presentation in two ways. First, the concerns he outlined that their EA organization was trying to address certainly hit home with my own experiences within a large enterprise: technology consolidation and everything that goes along with it. He had a great moment when he showed five business goals and how those same goals could be applied to the technology domain. This was great; however, don’t misinterpret it as business/IT alignment. For example (hypothetical, Eric didn’t say this), a goal of Ford could be consolidation. Applied to the business, this would be a reduction in the number of models being sold. Applied to IT, this would be a reduction in the number of technology vendors being used. While both will contribute to lowering costs, it would be difficult to directly show how a reduction in the technology platforms leads to a reduction in the models being produced.

This leads to the second thing that I found interesting. Ford’s EA organization, at least from what I inferred from this one presentation, is very technology focused. It is involved with governing project technology decisions. That is, it is focused on “how” to do things, rather than “what.” This poses a problem if SOA is expected to be driven from the EA organization. If the EA organization is too focused on the how, it will help out in selecting service technologies, but it won’t help on what services to build. I think this is a natural evolution for an EA group; however, it’s a big, big challenge. Technology selection is inherently part of EA, akin to cleaning your own house. Choosing what solutions to build on that technology is a business decision. This means that the EA organization must be involved closely with the business in order to pull it off. If they aren’t, what organization will drive the SOA effort? Perhaps that’s an indication that the organization isn’t ready for it yet. Once the EA organization has been invited to the portfolio planning/IT governance table, their chances of success will likely increase. I think I’ll ask this question now of the panel discussion that I’m sitting in…

Policy Management Domains

Robin Mulkers recently had a post, “Transformation in a SOA.” He provided three options for providing transformation services:

1. A central message or transaction broker ESB platform uses connectors to access the various back-end systems and mediates between the service consumers and the service providers. The service consumers don’t see the mediation; they see a semantically coherent service offered by the mediation platform. It’s an EAI-like architecture.

2. Externalize the transformation as a service; that’s the solution described by ZapThink in their paper. In this scenario, it is up to the service provider or the service consumer to detect an unsupported message format and know which transformation service to use to transform that unsupported format into something understandable.

3. It’s up to the service provider to implement the transformation as part of the service.

The options were a bit confusing, in that what the first one was really describing was a centralized broker that is the responsibility of some centralized team, not the service provider. If a centralized team handled this, that team “would have to be aware of all the legacy messages and schemas used throughout the enterprise,” as stated by Robin. He continues on to emphasize that the service provider is responsible for exposing various interfaces to the service functionality, using whatever infrastructure might be appropriate.

Let’s think about this for a second, however. This is where problems can arise. Oftentimes, a group will choose technology based upon familiarity. In this scenario, the team providing the service is likely composed of developers. What’s the tool they can directly use? Their code. If an organization has some form of intelligent intermediary (XML appliance, Web Services intermediary, ESB, etc.), the developers may not have access to the management console to properly leverage the transformation capabilities of the device. Note that this problem goes beyond the developer, however. What are some of the other core capabilities of an intermediary? Security. For a number of reasons, many organizations want to centralize the management of security policies, and I’m sure the security team doesn’t have access to either the code or the intermediary console. What about routing policies? Those are probably under the domain of an infrastructure operations group. What about audit logs? The compliance group may want to control those.

So, now the situation is that I have at least four different policy domains, each managed by a different area of the organization, all capable of being enforced by specialized infrastructure. Do I let these groups use the tools they have access to, all of which may take far longer to get policy changes out into the enterprise (e.g., a development cycle for transformations coded in the service implementation), limiting the agility of the organization? If I want to leverage the specialized infrastructure, I’m in a bind, as the console probably provides access to the entire policy pipeline for a service, putting my environment at risk.

What we need is an externalized policy manager, complete with role-specific access to particular policy domains. So who’s the vendor that’s going to come up with this? Unfortunately, policy managers are tightly coupled to policy enforcement points, because there are very few standard policy languages that could allow a third-party manager. Security is the most likely policy domain, but even in the broader web access management space, how many policy managers can control heterogeneous enforcement points? It’s more likely that the policy managers require proprietary policy enforcement agents on those enforcement points. So, how about it, vendors? Who’s going to step up to the plate and drive this solution home?
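To illustrate what such an externalized policy manager might look like (this is entirely hypothetical; no vendor API is implied), it would provide central, role-scoped policy management while pushing each domain’s policies out to whatever heterogeneous enforcement points implement that capability:

```python
class PolicyManager:
    """Hypothetical externalized policy manager: centralized, role-scoped
    management with distributed, heterogeneous enforcement."""
    def __init__(self):
        self.domains = {}    # domain -> {policy_name: definition}
        self.enforcers = {}  # domain -> list of enforcement points

    def register_enforcer(self, domain, enforcer):
        self.enforcers.setdefault(domain, []).append(enforcer)

    def set_policy(self, role, domain, name, definition):
        # Role-specific access: a team may edit only its own policy domain.
        if role != domain:
            raise PermissionError(f"{role} cannot edit {domain} policies")
        self.domains.setdefault(domain, {})[name] = definition
        # Push the change to every enforcement point for that domain,
        # regardless of whether it is an appliance, ESB, or agent.
        for enforcer in self.enforcers.get(domain, []):
            enforcer.update(name, definition)

class Appliance:
    """Stand-in for any enforcement point with an update interface."""
    def __init__(self):
        self.policies = {}
    def update(self, name, definition):
        self.policies[name] = definition

mgr = PolicyManager()
xml_appliance = Appliance()
mgr.register_enforcer("security", xml_appliance)
mgr.set_policy("security", "security", "require-ws-security", {"version": "1.1"})
# mgr.set_policy("operations", "security", ...) would raise PermissionError
```

The hard part in practice is the `update` interface: without a standard policy language, each enforcement point needs a proprietary agent, which is exactly the coupling complained about above.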

Writing on water…

While I can’t think of any practical applications right now, this is just cool. Anyone want to go watch The Abyss?

Infrastructure Services

A favorite topic of mine, SOA for IT, has come back up thanks to a blog post from Robin Mulkers at ITtoolbox. He attended the Burton Group’s Catalyst conference in Barcelona and heard Anne Thomas Manes talk about her Infrastructure Services Model.

Robin states:

Instead of Account management and order processing, think of services like authentication, auditing, integration, content management, etc. etc.

Security is a great example. If you’re a large enterprise, there’s a good chance that you’ve adopted some Identity and Web Access Management infrastructure from Oracle, CA-Netegrity, IBM or the like. Typical usage involves installing some agent in a reverse proxy or application server (policy enforcement point or PEP) which intercepts requests, obtains security tokens and contextual information about the request, and then communicates with a server (policy decision point or PDP) to get a yes/no answer on whether the request can be processed. Today, this communication between the PEP and the PDP is all proprietary. Vendors license toolkits to other providers, such as XML appliance and Web Service Management products, to allow them to talk to the PDP. In this scenario, the PDP is providing the security services. Why does this need to be proprietary? Is there really any competitive difference between the players in this space? Probably not. If there were standard interfaces for communicating with an authorization service, one PDP could easily be exchanged for a new PDP.
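As a sketch of why a standard interface would make PDPs interchangeable (the interface below is invented for illustration; in practice, a standard such as OASIS XACML’s request/response model would play this role), a PEP written against a standard authorization interface wouldn’t care which vendor’s PDP sits behind it:

```python
from abc import ABC, abstractmethod

class PolicyDecisionPoint(ABC):
    """A hypothetical standard authorization interface. Any vendor's PDP
    implementing it could be swapped in without changing the PEP."""
    @abstractmethod
    def authorize(self, subject: str, resource: str, action: str) -> bool: ...

class SimplePDP(PolicyDecisionPoint):
    """Toy PDP: allow only explicitly listed (subject, resource, action) tuples."""
    def __init__(self, rules):
        self.rules = rules

    def authorize(self, subject, resource, action):
        return (subject, resource, action) in self.rules

def pep_filter(pdp: PolicyDecisionPoint, request: dict) -> dict:
    """The enforcement point depends only on the standard interface,
    not on any vendor's proprietary toolkit."""
    if not pdp.authorize(request["subject"], request["resource"], request["action"]):
        raise PermissionError("denied")
    return request

pdp = SimplePDP({("jdoe", "/orders", "GET")})
ok = pep_filter(pdp, {"subject": "jdoe", "resource": "/orders", "action": "GET"})
```

With today’s proprietary PEP-to-PDP protocols, `pep_filter` would instead be written against one vendor’s toolkit, which is why swapping PDPs is so painful.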

One challenge in the infrastructure space, however, is understanding the difference between implied capabilities and services. Take routing, for example. In a typical HTTP request, routing is implied. The actual service request is to GET/POST/PUT/DELETE a resource, not to route a message, other than through the specification of a URI. It’s likely that the URI used can represent one of many web/application servers, with the exact one determined by a routing service provided by the networking infrastructure or clustering technology. In this case, routing is an implied capability, not an explicit service. The security example is an implied capability from the point of view of the service consumer; however, the communication between a PEP and a PDP is an explicit service request.

In applying SOA to IT, it’s important to identify when a capability should be implied and when it should be explicitly exposed. In some cases it may be one or the other; in other cases it will be both. The most important services in SOA for IT, in my opinion, are the management services. In both the security and the routing examples, the decisions made are based upon policies configured in the infrastructure through some management console or management API/scripting capability. Wouldn’t it be great if all of the capabilities in the management console were available as standards-based services? Automation would be far easier. This is a daunting challenge, however, as there are no vertical standards for this. We have horizontal standards like JMX and WS-DistributedManagement, but there are few standards for the actual things being managed. Having a Web Service for deploying applications is good, but odds are that the services for JBoss, WebLogic, and WebSphere will have significant semantic differences.
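To illustrate the semantic-difference problem (all class names and behaviors below are invented for the sketch), a standards-based insulation layer would have to hide each container’s own notion of deployment behind one logical operation, so that operations automation could be written once:

```python
class DeploymentService:
    """Hypothetical standards-based facade over container-specific
    deployment semantics."""
    def deploy(self, app: str) -> str:
        raise NotImplementedError

class JBossDeployment(DeploymentService):
    def deploy(self, app):
        # Illustrative only: e.g. hot deploy by copying into a directory
        return f"copied {app} to deploy directory"

class WebLogicDeployment(DeploymentService):
    def deploy(self, app):
        # Illustrative only: e.g. targeted deployment to a cluster
        return f"deployed {app} to target cluster"

def automate(containers, app):
    """IT Operations automation written once against the facade,
    regardless of the container behind each service."""
    return [c.deploy(app) for c in containers]

results = automate([JBossDeployment(), WebLogicDeployment()], "orders.ear")
```

The facade is easy; agreeing on what `deploy` means across vendors is the missing vertical standard.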

Infrastructure services aren’t going to happen overnight, but it’s time for IT Operations managers to begin pushing the vendors for these capabilities. The time is ripe for some vendors in the management space to switch from management of Web Services, à la Sonic/Actional and AmberPoint, to management using Web Services. In the absence of standards, it would be great if some of the big systems management players decided to create a standards-based insulation layer and present a set of services that could be used with a variety of infrastructure products, allowing IT Operations to leverage automation and workflow infrastructure to improve their efficiency.

Success with SOA

This topic has been wandering around my head for a while, and a recent post by Joe McKendrick of ZDNet brought it back to the forefront. Sidenote: why can’t people get Joe’s name right? David Chappell of Sonic Software called him Joel McKendrick, and every time David Linthicum quotes him in his SOA Podcast, it sounds like “Joe McItrick.” Anyway, in this entry, Joe has some quotes from John deVadoss of Microsoft. He states:

In the Q&A, deVadoss says the most important thing about SOAs is that they are a means to an end — not an end in and of themselves. “There is . . . a preponderance of what I call the top-down, big-bang mega approach, which is often guilty of trying to ‘do SOA’ as opposed to delivering business value,” deVadoss says. “The fundamental problem with the big-bang mega approaches to SOA is that they almost always end up being out-of-sync with the needs of the business.”

There’s a lot of truth in that statement, but how do organizations end up that way? One suspicion that I have is that the organization was already that way to begin with. I had the opportunity to have lunch two weeks ago with someone who is a VP of Operations (business operations, not IT) at a local enterprise. We discussed how he was leveraging BPM technologies. I left the lunch thinking how great it was that the use of IT was being driven by a tech-savvy business executive. There wasn’t any discussion about top-down or bottom-up. Having a plan for where the business needed to go was part of his job, not something that needed to be justified. He knew what could be taken on and where IT fit into the picture, and he was able to successfully leverage it. Take now, in contrast, an enterprise where IT is a bunch of order takers, without clear strategic planning. It’s certainly possible that the lack of strategic planning goes all the way back to the business, as well. The lack of IT alignment may just be a symptom of poor alignment throughout the organization. Organizations like that are going to struggle to accomplish anything of a strategic nature, with SOA and BPM just being recent examples. While there may be some room for incremental success, there are bigger problems preventing it.

In short, while SOA is touted by some as the tool that will increase IT and business alignment, so far it’s only the enterprises that already had great IT and business alignment that are truly leveraging it successfully. Think of it like the difference between the automobiles we drive to work and a Formula One race car. Technology in the race car could allow us to drive ridiculously fast, but if you put the average driver in that car, they’ll crash and burn. It’s only the drivers that were already on the racing circuit that can stay on the track. For the rest of us, there is a culture change that must occur in the organization before we can get on the track.

Momentum Shift

Yes, a Momentum shift is underway! While I could be talking about my hometown Cardinals playing better baseball and beating the Padres tonight, I’m not. Instead, I’m talking about MomentumSI, my new employer. I’ve joined their team of top consultants on SOA and am looking forward to helping their clients be successful with their SOA initiatives. MomentumSI is a blog-friendly company, as evidenced by the blog of their CEO, Jeff Schneider. I look forward to having a little bit more freedom in some of the topics that I cover. I previously was part of an Enterprise Architecture team for a financial services company, which put some limits on things that could be discussed. If you’ve got questions about me, questions about Momentum, or topics you’d like to see covered, please don’t hesitate to ask.

YAA! (Yet Another Acronym)

I was reading the October 2 edition of eWeek, and on page 16 there is an article by Darryl K. Taft titled “AJAX, SOA to Merge.” The rate of acronym mergers and acquisitions is beginning to rival the rate of SOA company mergers and acquisitions. The bulk of the article is devoted to a discussion of some announcements from JackBe and TIBCO. The one that set me off was JackBe’s Presto platform, which now includes an “ASB (AJAX Service Bus).” So now I need a specific service bus for each technology I use to expose services? I thought that technology mediation was a key feature of an ESB? Of course, that’s not entirely true, since I seem to be hearing more and more about ESB federation. That seems like an oxymoron to me. We need to forget about all of the marketing buzzwords and focus on the capabilities (not acronyms) that are needed by the enterprise, like mediation, transformation, security, etc.

Interdependent Architectures

I’m currently reading The Innovator’s Solution by Clayton Christensen and Michael Raynor. This is a business book from Harvard Business School Press, not an SOA book, at least so I thought.

In Chapter 5, titled “Getting the Scope of the Business Right,” the authors discuss product architecture and interfaces. Keep in mind that this is a business book, so the product they refer to could be consumer electronics, a box of Kleenex, I-beams, etc. The authors state:

A product’s architecture determines its constituent components and subsystems and defines how they must interact - fit and work together - in order to achieve the targeted functionality. The place where any two components fit together is called an interface. Interfaces exist within a product, as well as between stages in the value-added chain. For example, there is an interface between design and manufacturing, and another between manufacturing and distribution.

An architecture is interdependent at an interface if one part cannot be created independently of the other part - if the way one is designed and made depends on the way the other is being designed and made. When there is an interface across which there are unpredictable interdependencies, then the same organization must simultaneously develop both of the components if it hopes to develop either component.

The book goes on to discuss the pros and cons of interdependent architectures and how they create a performance versus flexibility tradeoff. Interdependent architectures yield greater performance, while modular architectures yield greater flexibility through tight specifications. This is certainly analogous to IT and SOA, as SOA is all about standards-based interfaces and higher flexibility. If you read on, however, you find that there is a perpetual see-saw in consumer preference between performance and flexibility. In the early days, a market may be built on performance and features using a completely proprietary model. The original Apple Macintosh is a great example of this. Apple controlled it end to end, and as a result, provided performance and features that no one else did at the time. Over time, however, a consumer base built up that was not so interested in performance and features, but was interested in flexibility. This eroded Apple’s market share and made Bill Gates rich. Today, however, the pendulum has swung back. Apple’s success with the iPod is largely due to the proprietary, integrated experience they can provide by owning the solution end-to-end.

So what does this mean for the SOA practitioners out there? It means that we must be judicious in where we apply the principles of SOA. If the business requires flexibility, SOA is where we need to be. If the business does not require flexibility, SOA may be a tough sell. Today, we’re at the apex of the pendulum swing on the side of flexibility. The business is demanding it, and SOA is there to save the day. As you build out your services, however, keep in mind that the pendulum will swing back. There will be a time when performance and features will rule over flexibility, and your IT systems must be prepared to support it. What’s the key? The key is that both interdependent and modular architectures understand the concept of interfaces. If you know where your interfaces (services) belong, you can choose to make those interfaces more or less proprietary as appropriate. Any large organization will likely find that some services need to be very specific to a single consumer to meet performance needs, while others can be designed to support the masses. Those decisions are based on the needs of today, however. Over time, that consumer-specific interface may need to be used more broadly as the pendulum shifts. If you never had a service there to begin with, it will be a far more difficult task.

What’s wrong with governance?

David Linthicum, in this blog, this blog, and this podcast, rails on the recent trend of SOA Governance products. I’ve been a frequent listener of his podcasts, and I usually agree with what he has to say, but not in this case.

As we know, last year the buzz was around ESBs (Dave railed on those too), this year the buzz has been around governance. He correctly points out that most of the vendor products “are directories, repositories, or registries, at their essence, and may or may not include policy management.” He then states that there is “no clear architectural use case for most of these tools” and that they are “application development support tools, akin to configuration management and metadata management of years gone by.” Hmm… I think someone else pointed out the relationship of these tools to configuration management databases.

So, where’s the problem? Dave states that the biggest thing missing is the focus on architecture. He correctly states that it is architecture, not a focus on reuse, that brings agility. Again, no arguments with that statement. What he never states is what the role of governance should be in architecture. He implies that the tools should be offering something more, but never states what that is. If the tools really are all about management, is that really a problem? After all, that’s largely what governance is. According to dictionary.com, one definition of governance is “a method or system of government or management.” Personally, I think that SOA has actually given a more concrete purpose to these tools, which had struggled to gain a significant foothold until now.

Fantasy Football and IT

I’m sitting here watching the New York Giants and the Indianapolis Colts and I thought of Fantasy Football. I’ve never had a fantasy football team, but I do understand how it works. It’s part simulation and part reality. The simulation part is your role. You’re the GM of the team and you draft your players, make trades, etc. The reality part of it is that how well your team does is based on how the players you drafted actually perform in their NFL games that week.

So I started thinking about this, and wondering if there are any parallels that could be applied to IT. Sure enough, there are. First off, in the BPM world, a common feature that many vendors are working toward is the ability to run process simulations. Just as with Fantasy Football, these simulations need to take in actual production statistics to give an accurate portrayal. Sounds great, right? What’s missing, however, is the statistics. Football, and virtually every other professional sport, with baseball being the king, has statistics on just about anything interesting to any individual, from the guy who TiVos every single game to watch them the rest of the week, to the person who doesn’t know the difference between a touchdown and a touchback.

What would our IT systems be like if we had the same level of statistics on them? Besides making a new career for the IT statistician, I think we’d have a far greater understanding of how IT makes the business successful (or unsuccessful).
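To illustrate what "simulation fed by production statistics" could look like, here is a minimal Monte Carlo sketch. The process steps and timing samples are entirely made up for illustration; the idea is simply that simulated step durations are drawn from observed production data rather than guessed.

```python
# Minimal sketch of a BPM-style process simulation driven by
# production statistics (all numbers below are illustrative).
import random

# Per-step durations (seconds) as they might be collected from production logs
observed = {
    "validate_order": [1.2, 1.5, 1.1, 2.0],
    "check_credit":   [3.0, 2.8, 3.5, 4.1],
    "ship":           [10.0, 12.5, 9.8, 11.0],
}

def simulate_process(runs: int = 1000, seed: int = 0) -> float:
    """Estimate average end-to-end process time by replaying
    randomly sampled observed step durations."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        total += sum(rng.choice(samples) for samples in observed.values())
    return total / runs

avg = simulate_process()
```

The richer the statistics we collect on each step, the closer such a simulation gets to the Fantasy Football model: simulated management decisions scored against real operational performance.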

Disclaimer
This blog represents my own personal views, and not those of my employer or any third party. Any use of the material in articles, whitepapers, blogs, etc. must be attributed to me alone without any reference to my employer. Use of my employer's name is NOT authorized.