SOA and GCM (Governance and Compliance)

I just listened to the latest Briefings Direct: SOA Insights podcast from Dana Gardner and friends. In this edition, the bulk of the time was spent discussing the relationship between SOA Governance and tools in the Governance and Compliance market (GCM).

I found this discussion very interesting, even if they didn’t make too many connections to the products classifying themselves as “SOA Governance” solutions. That’s not surprising though, because there’s no doubt that the marketers jumped all over the term governance in an effort to increase sales. Truth be told, there is a long, long way to go in connecting the two sets of technologies.

I’m not all that familiar with the GCM space, but the discussion did help to educate me. The GCM space is focused on corporate governance, clearly targeting Sarbanes-Oxley compliance. There is no doubt that many, many dollars are spent within organizations on staying compliant with local, state, and federal (or your area’s equivalent) regulations. Executives are required to sign off that appropriate controls are in place. I’ve had experience in the financial services industry, and there’s no shortage of regulations that deal with handling investors’ assets, and no shortage of lawsuits when someone feels that their investment intent has not been followed. Corporate governance doesn’t end there, however. In addition to the external regulations, there are also the internal principles of the organization that govern how the company utilizes its resources. Controls must be put in place to provide documented assurances that resources are being used in the way they were intended. This frequently takes the form of someone reviewing a report or approval request and signing their name on the dotted line. For these scenarios, there’s a natural relationship between analytics, business intelligence, and data warehouse products, and the GCM space appears to have ties to this area.

So where does SOA governance fit into this space? Clearly, the tools that claim to be players in the governance space don’t have strong ties to corporate governance. While automated checking of a WSDL file for WS-I adherence is a good thing, I don’t think it’s something that will need to show up in a SOX report anytime soon. Don’t get me wrong, I’m a fan of what these tools can offer, but be cautious about assuming that the governance they claim has strong ties to your corporate governance. Even if we look at the financial aspect of projects, the tools still have a long way to go. Where do most organizations get their financial information? Probably from their project management and time accounting systems. Is there integration between these tools, your source code management system, and your registry/repository? I know that BEA AquaLogic Enterprise Repository (Flashline) had the ability to track asset development costs and asset integration costs to provide an ROI for individual assets, but where do those cost numbers come from? Are they manually entered, or are they pulled directly from the systems of record?

Ultimately, the relationship between SOA Governance and Corporate Governance will come down to data. In a couple of recent posts, I discussed the challenges that organizations may face with the metadata associated with SOA, as well as the management continuum. This is where these two worlds come together. I mentioned earlier that a lot of corporate governance is associated with the right people reviewing and signing off on reports. A challenge with systems of the past is their monolithic nature. Are we able to collect the right data from these systems to properly maintain appropriate controls? Clearly, SOA should break down these monoliths and increase visibility into the technology component of the business processes. The management architecture must allow metrics and other metadata to be collected, analyzed, and reported so that the controllers can make better decisions.

One final comment that I didn’t want to get lost. Neil Macehiter brought up Identity Management a couple times in the discussion, and I want to do my part to ensure it isn’t forgotten. I’ve mentioned “signoff” a couple times in this entry. Obviously, signoff requires identity. Where compliance checks are supported by a service-enabled GCM product, having identity on those service calls is critical. One of the things the controller needs to see is who did what. If I’m relying on metadata from my IT infrastructure to provide this information, I need to ensure that the appropriate identity stays with those activities. While there’s no shortage of rants against WS-*, we clearly will need a transport-independent way of sharing identity as it flows through the various technology components of tomorrow’s solutions.
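
Since I brought up identity propagation, here’s a rough sketch of what that can look like in practice. This is illustrative only and not tied to any particular product: a JAX-WS client-side handler that attaches a WS-Security UsernameToken to every outbound call, so the identity travels with the message rather than with the transport. The handler wiring and the way the username is supplied are my own assumptions; in a real system the identity would come from your authentication infrastructure.

```java
import java.util.Collections;
import java.util.Set;
import javax.xml.namespace.QName;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPEnvelope;
import javax.xml.soap.SOAPHeader;
import javax.xml.ws.handler.MessageContext;
import javax.xml.ws.handler.soap.SOAPHandler;
import javax.xml.ws.handler.soap.SOAPMessageContext;

/**
 * Illustrative sketch only: attaches a WS-Security UsernameToken to outbound
 * SOAP calls so that the provider (or a service-enabled GCM product behind it)
 * can record who did what.
 */
public class IdentityPropagationHandler implements SOAPHandler<SOAPMessageContext> {

    private static final String WSSE_NS =
        "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";

    private final String username;

    public IdentityPropagationHandler(String username) {
        // In practice the identity would come from the authentication layer,
        // not be handed in as a plain string.
        this.username = username;
    }

    public boolean handleMessage(SOAPMessageContext context) {
        boolean outbound = (Boolean) context.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
        if (!outbound) {
            return true;
        }
        try {
            SOAPEnvelope envelope = context.getMessage().getSOAPPart().getEnvelope();
            SOAPHeader header = envelope.getHeader();
            if (header == null) {
                header = envelope.addHeader();
            }
            // Add <wsse:Security><wsse:UsernameToken><wsse:Username>...</wsse:Username>
            SOAPElement security = header.addHeaderElement(new QName(WSSE_NS, "Security", "wsse"));
            SOAPElement token = security.addChildElement("UsernameToken", "wsse");
            token.addChildElement("Username", "wsse").addTextNode(username);
            return true;
        } catch (Exception e) {
            throw new RuntimeException("Unable to attach identity header", e);
        }
    }

    public boolean handleFault(SOAPMessageContext context) {
        return true;
    }

    public void close(MessageContext context) {
    }

    public Set<QName> getHeaders() {
        return Collections.emptySet();
    }
}
```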

The vendor carousel continues to spin…

It’s not an acquisition this time, but a rebranding/reselling agreement between BEA and AmberPoint. I was head down in my work and hadn’t seen this announcement until Google Alerts kindly informed me of a new link from James Urquhart, a blogger on Service Level Automation whose writings I follow. He asked what I think of this announcement, so I thought I’d oblige.

I’ve never been an industry analyst in the formal sense, so I don’t get invited to briefings, receive press releases, or participate in whatever other mechanisms (if there are any) analysts use. I am best thought of as an industry observer, offering my own opinions based on my experience. I have some experience with both BEA and AmberPoint, so I do have some opinions on this. 🙂

Clearly, BEA must have customers asking about SOA management solutions. BEA doesn’t have an enterprise management solution like HP or IBM. Even if we just consider BEA products themselves, I don’t know whether they have a unified management solution across all of their products. So, there’s definitely the potential for AmberPoint technology to provide benefits to the BEA platform, and customers must be asking about it. If this is the case, you may be wondering why BEA didn’t just acquire AmberPoint. First, AmberPoint has always had a strong relationship with Microsoft. I have no idea how much this results in sales for them, but clearly an outright acquisition by BEA could jeopardize that channel. Second, as I mentioned, BEA doesn’t have an enterprise management offering of which I’m aware. AmberPoint can be considered a niche management solution. It provides excellent service/SOA management, but it’s not going to allow you to also manage your physical servers and network infrastructure. So, an acquisition wouldn’t make sense for either side. BEA wouldn’t gain entry into that market, and AmberPoint would be at risk of losing customers as their message could get diluted by the rest of the BEA offerings.

As a case in point, you don’t see a lot of press about Oracle Web Services Manager these days. Oracle acquired this technology when they acquired Oblix (who acquired it when they acquired Confluent). I don’t consider Oracle a player in enterprise systems management, and as a result, I don’t think people think of Oracle when they’re thinking about Web Services Management. They’re probably more likely to think of the big boys (HP, IBM, CA) and the specialty players (AmberPoint, Progress Actional).

So, who’s getting the best out of this deal? Personally, I think this is a great win for AmberPoint. It extends a sales channel for them, and it’s consistent with the approach they’ve taken in the past. Reselling agreements can provide strength to these smaller companies, as they build on a perception of the smaller company as either being a market leader, having great technology, or both. On the BEA side, it does allow them to offer one-stop solutions directly in response to SOA-related RFPs, and I presume that BEA hopes it will result in more services work. BEA’s governance solution is certainly not going to work out of the box, since it consists of two rebranded products (AmberPoint and HP/Mercury/Systinet) and one recently acquired product (Flashline). All of that would need to be integrated with their core execution platform. It will help BEA with existing customers who don’t want to deal with another vendor but desire an SOA management solution, but BEA has to ensure that there are integration benefits rather than just the BEA brand.

More vendor movement…

SoftwareAG is acquiring webMethods. Interestingly, Dana Gardner’s analysis of the deal seems to imply that webMethods’ earlier acquisition of Infravio may have made them more attractive, but given that SoftwareAG already had a solution, CentraSite, I wouldn’t think that was the case. It is true, however, that CentraSite hasn’t received a lot of media attention. I wonder what this now means for Infravio. Clearly, SoftwareAG has two solutions in one problem space, so some consolidation is likely to occur.

Dana’s analysis points out that “bigger is better in terms of SOA solutions provider survival.” This is an interesting observation, although only time will tell. The larger best-of-breed players are expanding their offerings, but will they be viewed as platform players by consumers? It’s very interesting to look at the space at this point. At the top, you’ve got companies like IBM and Oracle, which clearly are full platform vendors. It’s in the middle where things get really messy. You have everything from companies with large enough customer bases to be viewed in a strategic light to clear niche players. This includes such names as BEA, HP, Sun, Tibco, Progress (which includes Sonic, Actional, Apama, and a few others), SOA Software, Iona, CapeClear, AmberPoint, Skyway Software, iWay Software, Appistry, Cassatt, Lombardi, Intalio, Vitria, Savvion, and too many more to name. Clearly, there are plenty of other fish out there, and consolidation will continue.

What we’re seeing is that you can’t simply buy SOA. There also isn’t one single piece of infrastructure required for SOA. Full SOA adoption will require looking at nearly every aspect of your infrastructure. As a result, companies that are trying to build a marketing strategy around SOA are going to have a hard time now that the average customer is becoming more educated. That means one of two things: increase your offerings or narrow your marketing message. If you narrow the message, you run the risk of becoming insignificant, struggling to gain mindshare with the rest of the niche players. Thus, we enter the stage of eat or be eaten for all of those companies that don’t have a cash cow to keep them happy in their niche. As we’ve seen however, even the companies that do have significant recurring revenue can’t sit still. That’s life as a technology vendor, I guess.

Update: Beth Gold-Bernstein posted a very nice entry discussing this acquisition, including the areas of overlap such as the Infravio/CentraSite item. It also helped me understand where SoftwareAG now fits into this whole picture, since I honestly didn’t know all that much about them, given that the bulk of their business is in Europe.

Parallel Development and Integration

One topic that’s come up repeatedly in my work is that of parallel development of service consumers and service providers. While over time we would expect these efforts to become more and more independent, many organizations are still structured as application development organizations. This typically means that services are identified as part of an application project and will therefore be developed in parallel. The efforts may all be under one project manager, or the service development efforts may be spun off as independently managed projects. Personally, I prefer the latter, as I think it increases the chances of keeping the service independent of the consumer, as well as establishing clear service ownership from the beginning. Regardless of your approach, there is a need to manage the development efforts so that chaos doesn’t ensue.

To paint a picture of the problem, let’s look at a popular technique today: continuous integration. In a continuous integration environment, there is a series of automated builds and tests that run on a scheduled basis using tools like CruiseControl, Ant/NAnt, etc. In this environment, shortly after someone checks in some code, a series of tests is run to validate whether any problems have been introduced. This allows problems to be identified very early in the process, rather than waiting for some formal integration testing phase. This is a good practice, if for no other reason than encouraging personal responsibility for good testing from the developers. No one likes to be the one who breaks the build.

The challenge this creates with SOA, however, is that the service consumer and the service provider are supposed to be independent of each other. Continuous integration makes sense at the class/object level. The classes that compose a particular component of the system are tightly coupled, and should move in lock step. Service consumers and providers should be loosely coupled. They should share contract, not code. This contract should introduce some formality into the consumer/provider relationship, rather than viewing it in the same light as integration between two tightly coupled classes. What I’ve found is that when the handoffs between a service development team and a consumer development team are not formalized, sooner or later it turns into a finger-pointing exercise because something isn’t working the way they’d like, typically due to assumptions regarding the stability of the service. Oftentimes, the service consumer is running in a development environment and trying to use a service that is also running in a development environment. The problem is that development environments, by definition, are inherently unstable. If that development environment is controlled by the automated build system, the service implementation may be changing three or more times a day. How can a service consumer expect consistent behavior when a service is changing that frequently? Those automated builds often include setup and teardown of test data for unit tests. The potential exists that incoming requests from a service consumer not associated with those tests may cause the unit testing to fail, because they may change the state of the system. So how do we fix the problem? I see two key activities.
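
As a side note, one way to keep a consumer’s automated builds honoring “share contract, not code” is to have the consumer’s tests run against a local stub of the contract rather than calling the provider’s ever-changing development environment. The sketch below is hypothetical; QuoteService stands in for whatever interface would be generated from the provider’s WSDL, and the JUnit test exercises the consumer’s expectations without touching the provider at all.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical contract, e.g. the interface generated from the provider's WSDL.
interface QuoteService {
    double getQuote(String policyType, int driverAge);
}

// Local stub that honors the contract; used only in the consumer's automated builds,
// so the build never depends on the provider's unstable development environment.
class StubQuoteService implements QuoteService {
    public double getQuote(String policyType, int driverAge) {
        // Deterministic canned response instead of a live service call.
        return "AUTO".equals(policyType) && driverAge < 25 ? 1500.0 : 800.0;
    }
}

public class QuoteConsumerTest {
    @Test
    public void youngDriverPaysSurcharge() {
        QuoteService service = new StubQuoteService();
        assertEquals(1500.0, service.getQuote("AUTO", 22), 0.001);
    }
}
```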

First, you need to establish a stable integration environment. You may be thinking, “I already have an integration testing environment,” but is that environment used for integration with things outside of the project’s control, or is it used for integration of the major components within the project’s control? My experience has been the latter. This creates a problem. If the service development team is performing its own integration testing in that environment, say with a database dependency, they’re testing the things they need to integrate with, not the things that want to integrate with them. If the service consumer uses the service in that same environment, the service is probably not stable, since it’s being tested itself. You’re setting yourself up for failure in this scenario. The right way, in my opinion, to address this is to create one or more stable integration environments. This is where services (and other resources) are deployed when they have a guaranteed degree of stability and are “open for business.” This doesn’t mean they are functionally complete, only that the service manager has clearly stated what works and what doesn’t. The environment is dedicated for use by consumers of those services, not by the service development team. Creating such an environment is not easily done, because you need to manage the entire dependency chain. If a consumer invokes a service that updates a database and then pushes a message out on a queue for consumption by that original consumer, you can have a problem if that consumer is pointing at a service in one environment but a MOM system in another environment. Overall, the purpose of creating this stable integration environment is to manage expectations. In an environment where things are changing rapidly, it’s difficult to set any expectation other than that the service may change out from underneath you. That may work fine where four developers are sitting in cubes next to each other, but it makes things very difficult if the service development team is in an offshore development center (or even on another floor of the building) and the consumer development team is located elsewhere. While you can manage expectations without creating new environments, creating them makes it easier to do so. This leads to the second recommendation.
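
Keeping a consumer pointed at a consistent environment across that entire dependency chain is largely a configuration exercise. As a small, hypothetical sketch (the file and property names are made up), service endpoints could be resolved from per-environment property files so the same consumer build can be aimed at the stable integration environment without code changes:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

/**
 * Illustrative sketch only: resolves service endpoints from per-environment
 * property files (e.g. endpoints-dev.properties, endpoints-stable-int.properties)
 * so a consumer can be re-pointed without code changes. All names are hypothetical.
 */
public class EndpointResolver {

    private final Properties endpoints = new Properties();

    public EndpointResolver(String environment) throws IOException {
        FileInputStream in = new FileInputStream("endpoints-" + environment + ".properties");
        try {
            endpoints.load(in);
        } finally {
            in.close();
        }
    }

    public String endpointFor(String serviceName) {
        String url = endpoints.getProperty(serviceName);
        if (url == null) {
            throw new IllegalStateException("No endpoint configured for " + serviceName);
        }
        return url;
    }
}
```

A build destined for the stable integration environment would then run with the environment name set to something like “stable-int”, so the service, database, and MOM endpoints all resolve within the same environment.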

Regardless of whether you have stable integration environments or not, the handoffs between consumer and provider need to be managed. If they are not, your chances of things going smoothly go down. I recommend creating a formal release plan that clearly shows when iterations of the service will be released for integration testing. It should also show cutoff dates for when feature requests/bug reports must be received in order to make it into a subsequent iteration. Most companies are using iterative development methodologies, and this doesn’t prevent that from occurring. Not all development iterations should go into the stable environment, however. Odds are, the consumer development efforts (especially if there’s more than one consumer) and the service development effort are not going to have their schedules perfectly synchronized. As a result, the service development team can’t expect that a consumer will test particular features within a short timeframe. So, while a development iteration may occur every two weeks, maybe every third iteration goes into a stable integration environment, giving consumers six weeks to perform their integration testing. You may only have three or four stable integration releases of a service within its development lifecycle. Each release should have formal release notes and set clear expectations for service consumers. Which operations work and which ones don’t? What data sets can be used? Can performance testing be done? Again, problems happen when expectations aren’t managed. The clearer the expectations, the more smoothly things can go. It also makes it easier to see who dropped the ball when something does go wrong. If there’s no formal statement regarding what’s available within a service at any particular point in time, you’ll just get a bunch of finger pointing that exposes the poor communication that has occurred.

Ultimately, managing expectations is the key to success. The burden of this falls on the shoulders of the service manager. As a service provider, the manager is responsible for all aspects of the service, including customer service. This applies to all releases of a service, not just the ones in production. Providing good customer service is about managing expectations. What do you think of products that don’t work the way you expect them to? Odds are, you’ll find something else instead. Those negative experiences can quickly undermine your SOA efforts.

Added to the blogroll…

I just added another blog to my blogroll, and wanted to call attention to it as it has some excellent content. According to his “about me” section, Bill Barr is an enterprise architect on the west coast with a company in the hospitality and tourism industry. His blog is titled “Agile Enterprise Architecture” and I encourage all of my readers to check it out.

Agile Enterprise Architecture by Bill Barr

New Greg the Architect

Boy, YouTube’s blog posting feature takes a long time to show up. I tried it for the first time to create blog entries with embedded videos, but the post still hasn’t shown up. Given that the majority of my readers have probably already seen these courtesy of links on ZDNet and InfoWorld, I’m caving and just posting direct links to YouTube.

The first video, released some time ago, can be viewed here. Watch this one first, if you’ve never seen it before.

The second video, just recently released and dealing with Greg’s ROI experience, can be found here.

Enjoy.

The Reuse Marketplace

Marcia Kaufman, a partner with Hurwitz & Associates, posted an article on IT-Director.com entitled “The Risks and Rewards of Reuse.” It’s a good article, and its three recommendations can really be summed up in one word: governance. While governance is certainly important, the article misses another, perhaps more important, factor: marketing.

When discussing reuse, I always refer back to a presentation I heard at SD East way back in 1998. Unfortunately, I don’t recall the speaker, but he had established reuse programs at a variety of enterprises, some successful and some not successful. He indicated that the factor that influenced success the most was marketing. If the groups that had reusable components/services/whatever were able to do an effective job in marketing their goods and getting the word out, the reuse program as a whole would be more successful.

Focusing on governance alone still means those service owners are sitting back and waiting for customers to show up. While the architectural governance committee will hopefully catch a good number of potential customers and send them in the direction of the service owner, that committee should be striving to reach “rubber stamp” status, meaning the project teams should have already sought out potential services for reuse. This means that the service owners need to be marketing their services effectively so that they get found in the first place. I imagine the potential customer using Google searches on the service catalog, but then within the service catalog, you’d have a very Amazon-like feel that might say things like “30% of other customers found this service interesting…” Service owners would be monitoring this data to understand why consumers are or are not using their services. They’d be able to see why particular searches matched, what information the customer looked at, and whether the customer eventually decided to use the service/resource or not.

Interestingly, this is exactly what companies like Flashline and ComponentSource were trying to do back in the 2000 timeframe, with Flashline having a product to establish your own internal “marketplace” while ComponentSource was much more of a hosted solution aimed at a community broader than the enterprise. With the potential to utilize hosted services always on the rise, this gets even more interesting, because you may want your service catalog to show you both internally created solutions and potential hosted solutions. Think of it as amazon.com on the inside, with Amazon partner content integrated from the outside. I don’t know how easily one could go about doing this, however. While there are vendors looking at UDDI federation, what I’ve seen has been focused on internal federation within an enterprise. Have any of these vendors worked with, say, StrikeIron, so that hosted services show up in their repository (if the customer has configured it to allow them)? Again, it would be very similar to amazon.com. When you search for something on Amazon, you get some items that come from Amazon’s inventory. You also get links to Amazon partners that have the same products, or even products that are only available from partners.

This is a great conceptual model; however, I need to be a realist regarding the potential of such a robust tool today. How many enterprises have a service library large enough to warrant establishing such a rich, marketplace-like infrastructure? Fortunately, I do think this can work. Reuse is about much more than services. If all of your reuse efforts are targeted at services, you’re taking a big risk with your program’s overall performance. A reuse program should address not only service reuse, but also reuse of component libraries, whether internal corporate libraries or third-party libraries, and even shared infrastructure. If your program addresses all IT resources that have the potential for reuse, the inventory may be large enough to warrant an investment in such a marketplace. Just make sure that it’s more than just a big catalog. It should provide benefit not only for the consumer, but for the provider as well.

The management continuum

Mark Palmer of Apama continued his series of posts on myths around the EDA/CEP space with number 3: BAM and BPM are Converging. Mark hit on a subject that I’ve spoken with clients about; however, I don’t believe that I’ve ever posted on it.

Mark’s premise is that it’s not BAM and BPM that are converging, it’s BAM and EDA. Converging probably isn’t the right word here, as it implies that the two will become one, which certainly isn’t the case. That wasn’t Mark’s point, either. His point was that BAM will leverage CEP and EDA. This, I completely agree with.

You can take a layered view of these solutions. At higher levels, the concepts we’re dealing with are more business-centric. At lower levels, the concepts are more technology-centric. Another way of looking at it is that at the higher levels, the products involved would be specific to the line of business/vertical we’re dealing with, while at the lower levels, the products involved would be more generic, applicable to nearly any vertical. For example, an insurance provider may have things like quoting and underwriting at the top, but at the bottom we’d have servers, switches, etc. Clearly, the use of servers is not specific to the insurance industry.

All of these platforms require some form of management and monitoring. At the lowest levels of this stack, we’re interested in traditional Enterprise Systems Management (ESM). The systems would be collecting data on CPU load, memory usage, etc., and technologies like SNMP would be involved. One could certainly argue that these ESM tools are very event-driven. The collection of metrics and alerts is nearly always done asynchronously. If we move up the stack, we get to business activity monitoring. The interesting thing is that the fundamental architecture of what is needed does not change. Really, the only thing that changes is the semantics of the information that needs to get pushed out. Rather than pushing CPU load, I may be pushing out the number of auto insurance quotes requested and processed. This is where Mark is right on the button. If the underlying systems are pushing out events, whether at a technical level or at a business level, there’s no reason why CEP can’t be applied to that stream to deliver valuable information back to the enterprise, or even better, come full circle and invoke some automated process to take action.
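
To make the “same architecture, different semantics” point concrete, here is a deliberately simple, hypothetical sketch. A real deployment would push these events to a CEP engine such as Apama rather than hand-rolled code, but the shape is the same: the event channel pushes business events (quote requests instead of CPU samples), and a window-based check turns the stream into something a controller or an automated process can act on.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Illustrative sketch only: a toy sliding-window check over business events.
 * A real deployment would feed these events to a CEP engine instead.
 */
public class QuoteVolumeMonitor {

    private final Deque<Long> quoteTimestamps = new ArrayDeque<Long>();
    private final long windowMillis;
    private final int threshold;

    public QuoteVolumeMonitor(long windowMillis, int threshold) {
        this.windowMillis = windowMillis;
        this.threshold = threshold;
    }

    /** Called by the event channel whenever an auto quote request event arrives. */
    public void onQuoteRequested(long timestampMillis) {
        quoteTimestamps.addLast(timestampMillis);
        // Evict events that have fallen out of the time window.
        while (!quoteTimestamps.isEmpty()
                && timestampMillis - quoteTimestamps.peekFirst() > windowMillis) {
            quoteTimestamps.removeFirst();
        }
        if (quoteTimestamps.size() > threshold) {
            // In a real system this would publish an alert event or kick off an
            // automated process, closing the loop described above.
            System.out.println("Quote volume exceeded " + threshold + " within the window");
        }
    }
}
```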

I think the most important takeaway is that you have to be thinking from an architectural standpoint as you build these things out. This isn’t about running out and buying a BAM tool, a BPM tool, a CEP tool, or anything else. What metrics are important? How will the metrics be collected? How do you want to perform analytics (is static analysis against a centralized store enough, or do you need dynamic analysis in real time, driven by changing business rules)? What do you want to do with the results of that analysis? Establishing a management architecture will help you make the right decisions on what products you need to support it.

Master Metadata/Policy Management

Courtesy of Dana Gardner’s blog, I found out that IONA has announced a registry/repository product, Artix Registry/Repository.

I’m curious whether this is indicative of a broader trend. First, you had acquisitions of the two most prominent players in the registry/repository space: Systinet by Mercury, which was then acquired by HP, and Infravio by webMethods. For the record, Flashline was also acquired by BEA. You’ve had announcements of registry/repository solutions as part of a broader suite by IBM (WebSphere Registry/Repository), SOA Software (Registry), and now IONA. There are also Fujitsu and Software AG’s CentraSite and LogicLibrary’s Logidex, which are still primarily independent players. What I’m wondering is whether or not the registry/repository marketplace simply can’t make it as an independent purchase, but will always be a mandatory add-on to any SOA infrastructure stack.

All SOA infrastructure products have some form of internal repository. Whether we’re talking about a WSM system, an XML/Web Services gateway, or an ESB, they all maintain some internal configuration that governs what they do. You could even lump application servers and BPM engines into that mix if you so desire. Given the absence of domain-specific policy languages for service metadata, this isn’t surprising. So, given that every piece of infrastructure has its own internal store, how do you pick one to be the “metadata master” of your policy information? Would someone buy a standalone product solely for that purpose? Or are they going to pick a product that works with the majority of their infrastructure, and then focus on integration with the rest? For the smaller vendors, it means that they will have to add interoperability/federation capabilities with the platform players, because that’s what customers will demand. The risk for the consumer, however, is that this won’t happen, and the consumer will be the one to bear the brunt of the integration costs.
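
To illustrate the “pick a master, integrate the rest” idea, here is a deliberately simplified and entirely hypothetical sketch; none of these interfaces correspond to a real product’s API. Policies are authored once against the designated master registry/repository and pushed to the stores embedded in the rest of the infrastructure, rather than being maintained independently in each one.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical abstraction over any store that holds service policies:
// the registry/repository, a gateway, an ESB, a WSM system, etc.
interface PolicyStore {
    String getPolicy(String serviceName, String policyType);
    void putPolicy(String serviceName, String policyType, String policyDocument);
}

/**
 * Illustrative sketch only: one store is designated the policy master, and the
 * stores embedded in the rest of the infrastructure are synchronized from it
 * rather than maintained independently.
 */
public class PolicyMaster {

    private final PolicyStore master;
    private final Map<String, PolicyStore> downstream = new LinkedHashMap<String, PolicyStore>();

    public PolicyMaster(PolicyStore master) {
        this.master = master;
    }

    public void addDownstreamStore(String name, PolicyStore store) {
        downstream.put(name, store); // e.g. "gateway", "esb", "wsm"
    }

    /** Policies are authored once against the master, then pushed everywhere else. */
    public void publish(String serviceName, String policyType, String policyDocument) {
        master.putPolicy(serviceName, policyType, policyDocument);
        for (PolicyStore store : downstream.values()) {
            store.putPolicy(serviceName, policyType, policyDocument);
        }
    }

    public String lookup(String serviceName, String policyType) {
        return master.getPolicy(serviceName, policyType);
    }
}
```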

I worry that the SOA policy/metadata management space will become no better than the Identity Management space. How many vendor products still maintain proprietary identity stores rather than allowing identity to be managed externally through the use of Active Directory/LDAP and some identity management and provisioning solution? This results in expensive synchronization and replication problems that keep the IT staff from being focused on things that make a difference to the business. Federation and interoperability across these registry/repository platforms must be more than a checkbox on someone’s marketing sheet; they must be demonstrable, supported, and demanded by customers as part of their product selection process. The last thing we need is a scenario where Master Data Management technologies are required to manage the policies and metadata of services. Let’s get it right from the beginning.

SOA Politics

In the most recent Briefings Direct SOA Insights podcast (transcript here), the panel (Dana Gardner, Steve Garone, Joe McKendrick, Jim Kobielus, and special guest Miko Matsumura) discussed failures, governance, policies, and politics. There were a number of interesting tidbits in this discussion.

First, on the topic of failures, Miko Matsumura of webMethods called out that we may see some “catastrophic failures” in addition to some successes. Perhaps it’s because I don’t believe that there should be a distinct SOA project or program, but I don’t see this happening. I’m not suggesting that SOA efforts can’t fail; rather, if an effort fails, things will still be business as usual. If anything, that’s what the problem is. Unless you’ve already embraced SOA and don’t realize it, SOA adoption should be challenging the status quo. While I don’t have formal analysis to back it up, I believe that most IT organizations still primarily have application development efforts. This implies the creation of top-to-bottom vertical solutions whose primary concern is not playing well in the broader ecosystem, but rather getting solutions to their stakeholders. That balance needs to shift to where the needs of the application’s stakeholders are equal to the needs of the enterprise. The pendulum needs to stay in the middle, balancing short-term gains and long-term benefits. Shifting completely to the long-term view of services probably won’t be successful, either. I suppose it is entirely possible that organizations may throw a lot of money at an SOA effort, treating it as a project, which is prone to spectacular failure. The problem began at the outset, however, when it was made a project in the first place. A project will not create the culture change needed throughout the organization. The effort must influence all projects to be successful in the long term. The biggest risk in this type of approach, however, is that IT spends a whole bunch of money changing the way it works but still winds up with the same systems it has today. It may not see reuse of services or improved productivity. My suspicion is that the problem here lies with the IT-business relationship. IT can only go so far on its own. If the business is still defining projects based upon silo-based application thinking, you’re not going to leverage SOA to its fullest potential.

Steve Garone pointed out that two potential causes of these failures are “a difficulty in nailing down requirements for an application, the other is corporate backing for that particular effort going on.” When I apply this to SOA, it’s certainly true that corporate backing can be a problem, since that points to the IT-centric view I just mentioned. On the requirements side, this is equally important. The requirements are the business strategy. These are what will guide you to the important areas for service creation. If your company is growing through mergers and acquisitions, what services are critical to maximize the benefits associated with that activity? Clearly, the more quickly and cost-effectively you can integrate the core business functionality of the two organizations, the more benefit you’ll get out of the effort. If your focus is on cost reduction, are there redundant systems that are adding unnecessary costs to the bottom line? Put services in place to allow the migration to a single source for that functionality.

The latter half of the podcast talked about politics and governments, using world governments as a metaphor for the governance mechanisms that go on within an enterprise. This is not a new discussion for me, as it was the subject of one of my early blog entries that was reasonably popular. It is important to make your SOA governance processes fit within your corporate culture and governance processes, provided that the current governance process is effective. If the current processes are ineffective, then clearly a change is in order. If the culture is used to a more decentralized approach to governance and has been successful in meeting the company’s goals with that approach, then a decentralized approach may work just fine. Keep in mind, however, that the corporate governance processes may not ripple down to IT. It’s certainly possible to have decentralized corporate governance but centralized IT governance. The key question to ask is whether the current processes are working or not. If they are, it’s simply a matter of adding the necessary policies for SOA governance into the existing approach. If the current processes are not working, it’s unlikely that adding SOA policies to the mix is going to fix anything. Only a change to the processes will fix it. Largely, this comes down to architectural governance. If you don’t have anyone with responsibility for this, you may need to augment your governance groups to include architectural representation.

SOA Consortium

The SOA Consortium recently gave a webinar that presented their Top 5 Insights from a series of executive summits they conducted. Richard Mark Soley, Executive Director of the SOA Consortium, and Brenda Michelson of Elemental Links were the presenters.

A little background: the SOA Consortium is a new SOA advocacy group. As Richard Soley put it during the webinar, they are not a standards body; however, they could be considered a source of requirements for the standards organizations. I’m certainly a big fan of SOA advocacy and sharing information, if that wasn’t already apparent. Interestingly, they are a time-boxed organization and have set an end date of 2010. That’s a very interesting approach, especially for a group focused on advocacy. It makes sense, however, as the time box represents a commitment. Twelve practitioners have publicly stated their membership, along with the four founding sponsors and two analyst firms.

What makes this group interesting is that they are interested in promoting business-driven SOA and dispelling the notion that SOA is just another IT thing. Richard had a great anecdote about one CIO who had just finished telling the CEO not to worry about SOA, that it was an IT thing and he would handle it, only to attend one of their executive summits and change course.

The Top 5 insights were:

  1. No artificial separation of SOA and BPM. The quote shown in the slides was, “SOA, BPM, Lean, Six Sigma are all basically one thing (business strategy & structure) that must work side by side.” They are right on the button on this one. The disconnect between BPM efforts and SOA efforts within organizations has always mystified me. I’ve always felt that the two go hand in hand. It makes no sense to separate them.
  2. Success requires business and IT collaboration. The slide deck presented a before and after view, with the after view showing a four-way, bi-directional relationship between business strategy, IT strategy, Enterprise Architecture, and Business Architecture. Two for two. Admittedly, as a big SOA (especially business-driven SOA) advocate, this is a bit of preaching to the choir, but it’s great to see a bunch of CIOs and CTOs getting together and publicly stating this for others to share.
  3. On the IT side, SOA must permeate the organization. They recommend the use of a Center of Excellence at startup, which over time shifts from a “doing” responsibility to a “mentoring” responsibility, eventually dissolving. Interestingly, that’s exactly what the consortium is trying to do. They’re starting out with a bunch of people who have had significant success with SOA, who are now trying to share their lessons learned and mentor others, knowing that they’ll disband in 2010. I think Centers of Excellence can be very powerful, especially in something that requires the kind of cultural change that SOA will. Which leads to the next point.
  4. There are substantial operational impacts, but little industry emphasis. As we’ve heard time and time again, SOA is something you do, not something you buy. There are some great quotes in the deck. I especially liked the one that discussed the horizontal nature of SOA operations, in contrast to the vertical nature (think monolithic application) of IT operations today. The things concerning these executives are not building services, but versioning, testing, change management, etc. I’ve blogged a number of times on the importance of these factors in SOA, and it was great to hear others say the same thing.
  5. SOA is game changing for application providers. We’ve certainly already seen this in action with activities by SAP, Oracle, and others. What was particularly interesting in the webinar was that while everyone had their own opinion on how the game will change, all agreed that it will change. Personally, I thought these comments were very consistent with a post I made regarding outsourcing a while back. My main point was that SOA, on its own, may not increase or decrease outsourcing, but it should allow more appropriate decisions and hopefully, a higher rate of success. I think this applies to both outsourcing, as well as to the use of packaged solutions installed within the firewall.

Overall, this was a very interesting and insightful webinar. You can register and listen to a replay of it here. I look forward to more things to come from this group.

SOA and Enterprise Security

James McGovern asked a number of us in the blogosphere if we’d be willing to share some thoughts on security and SOA. First, I recommend you go and read James’ post. He makes the claim that if you’ve adopted SOA successfully, it should make security concerns such as user-centric identity, single sign-on, onboarding and offboarding employees, asset management, etc., easier. I completely agree. I’ve yet to encounter an organization that’s reached that point with SOA, but if one did, I think James’ claims would hold true. Now on to the subject at hand, however.

I’ve shared some thoughts on security in the past, particularly in my post “The Importance of Identity.” Admittedly, however, it probably falls into the high level category. I actually look to James’ posts on security, XACML, etc. to fill in gaps in my own knowledge, as it’s not an area where I have a lot of depth. I’m always up for a challenge, however, and this space clearly is a challenge.

Frankly, I think security falls only slightly ahead of management when it comes to things that haven’t received proper attention. We can thank the Internet and some high-profile security problems for elevating its importance in the enterprise. Unfortunately, security suffers from the same baggage as the rest of IT. Interestingly, though, security technology probably took a step backward when we moved off of mainframes and into the world of client-server. Because there was often a physical wire connecting that dumb terminal to the big iron, you had identity. Then along came client-server and N-tier systems with application servers, proxy servers, etc., and all of a sudden we’d completely lost the ability to trace requests through the system. Not only did applications have no concept of identity, the underlying programming languages didn’t have any concept of identity, either. The underlying operating system did, but what good is it to know that something is running as www?

James often laments the fact that so many systems (he likes to pick on ECM) still lack the ability to leverage an external identity management system, and instead have their own proprietary identity stores and management. He’s absolutely on the mark with this. Identity management is the cornerstone of security, in my opinion. I spent a lot of time working with an enterprise security architect discussing the use of SSL versus WS-Security, the different WS-Security profiles, etc. In the end, all of that was moot until we figured out how to get identity into the processing threads to begin with! Yes, .NET and Java both have the concept of a Principal. How about your nice graphical orchestration engine? Is identity part of the default schema that is the context for the process execution? I’m guessing that it isn’t, which means more work for your developers.
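
To make the “get identity into the processing threads” point concrete, here is a minimal, hypothetical sketch; the class and method names are mine, not from any particular framework. The entry point, whether a servlet filter, a message listener, or an orchestration step, binds the caller’s Principal to the thread of work, and downstream components ask for it rather than falling back to a generic system account. Real solutions would more likely use JAAS Subjects or container-managed security contexts, but the principle is the same.

```java
import java.security.Principal;

/**
 * Illustrative sketch only: a minimal thread-bound identity holder so that
 * downstream components (and audit records) can answer 'who did what'.
 */
public final class CallerContext {

    private static final ThreadLocal<Principal> CURRENT = new ThreadLocal<Principal>();

    private CallerContext() {
    }

    /** Bound by the entry point (servlet filter, message listener, orchestration step). */
    public static void bind(Principal caller) {
        CURRENT.set(caller);
    }

    /** Read by downstream components instead of assuming a generic system account. */
    public static Principal current() {
        Principal caller = CURRENT.get();
        if (caller == null) {
            throw new IllegalStateException("No caller identity bound to this thread");
        }
        return caller;
    }

    /** Must be cleared when the unit of work completes, especially on pooled threads. */
    public static void clear() {
        CURRENT.remove();
    }
}
```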

So, unfortunately, all I can do is point out some potential pitfalls at this point. I haven’t had the opportunity to go deep in this space, yet, but hopefully this is enough information to get you thinking about the problems that lie ahead.

Now this is how to integrate your IT infrastructure

Will It Blend? IT Infrastructure.

Good product design

Here’s a post from Innovations that nails it. It’s a discussion on Las Vegas, but the closing comments say it all, whether you’re designing a casino, a user interface, or services:

But our opinion doesn’t matter. What matters is the opinion of the customer, and it’s easy to marginalize or ignore that perspective in favor of seeing what we want to see. This is particularly true in technology product development, I believe, where the skill levels of the developers are leagues ahead of those of their ultimate customers. That’s one reason why software designers continue to add features to products when customers don’t use 80% of the features they already have. … Good product design means setting aside your biases and thinking like the customer, whether or not the customer is like you.

Well said.

Reference Architectures and Governance

In the March 5th issue of InfoWorld, I was quoted in Phil Windley’s article, “Teaming Up For SOA.” One of the quotes he used was the following:

Biske also makes a strong argument for reference architectures as part of the review process. “Architectural reviews tie the project back to the reference architecture, but if there’s no documentation that project can be judged against, the architectural review won’t have much impact.”

My thinking on this actually came from a past colleague. We were discussing governance, and he correctly pointed out that it is unrealistic to expect benefits from a review process when the groups being reviewed have no idea what they are being reviewed against. The policies need to be stated in advance. Imagine if your town had no speed limit signs, yet the police enforced a speed limit. What would be your chances of getting a ticket? What if your town had a building code, but the only place it existed was in the building inspector’s head? Would you expect your building to pass inspections? What would be your feelings toward the police or the inspector after you received a ticket or were told to make significant structural changes?

If you’re going to have reviews that actually represent an approval to go forward, you need to have documented policies. At the architectural level, this is typically done through the use of reference architectures. The challenge is that there is no accepted norm for what a reference architecture should contain. Rather than get into a semantic debate over the differences between a reference architecture, a reference model, a conceptual architecture, and any number of other terms that are used, I prefer to focus on what is important: giving guidance to the people who need it.

I use the term solution architect to refer to the person on a project who is responsible for the architecture of the solution. This is the main audience for your reference material. There are two primary things to consider with this audience:

  1. What policies need to be enforced?
  2. What deliverable(s) does the solution architect need to produce?

Governance does begin with policies, and I put that one first for a reason. I worked with an organization that was using the 4+1 view model from Philippe Kruchten for modeling architecture. The problem I had was that the projects using this model were not adequately able to convey the services that would be exposed/utilized by their solution. The policy that I wanted enforced at the architecture review was that candidate services had been clearly identified, and potential owners of those services had been assigned. If the solution architecture can’t convey this, it’s a problem with the solution architecture format, not the policy. If you know your policies, you should then create your sample artifacts to ensure that those policies can be easily enforced through a review of the deliverable(s). This is where the reference material comes into play, as well. In this scenario, the purpose of the reference material is to assist the solution architect in building the required deliverable(s). Many architects may create future state diagrams that may be accurate representations, but wind up being very difficult to use in a solution architecture context. The projects are the efforts that will get you to that future state, so if the reference material doesn’t make sense to them, it’s just going to get tossed aside. This doesn’t bode well for the EA group, as it will increase the likelihood that they are seen as an ivory tower.

When creating this material, keep in mind that there are multiple views of a system, and a solution architect is concerned with all of them. I mentioned the 4+1 view model. That’s five views right there, and one view it doesn’t include is a management view (operational management of the solution). That’s at least six distinct views. If your reference material consists of one master Visio diagram, you’re probably trying to convey too much with it, and as a result you’re not conveying anything other than complexity to the people who need it. Create as many diagrams and views as necessary to ensure compliance. Governance is not about minimizing the number of artifacts involved; it’s about achieving the desired behavior. If achieving the desired behavior requires a number of diagrams and models, so be it.

Finally, architectural governance is not just about enforcing policies on projects. In Phil’s article, some of my quotes also alluded to the project initiation process. EA teams have the challenge of trying to get the architectural goals achieved, but often without the ability to directly create and fund projects themselves. In order to achieve their goals, the current state and future state architectures must also be conveyed to the people responsible for IT governance. This is an entirely different audience, and one for whom the reference architectures created for solution architects may be far too technical in nature. While it would minimize the work of the architecture team to have one set of reference material that works for everyone, that simply won’t be the case. Again, the reference material needs to fit the audience. Just as a good service should fit the needs of its consumers, the reference material produced by EA must do the same.
