Archive for the ‘IT’ Category
SOA and GCM (Governance and Compliance)
I just listened to the latest Briefings Direct: SOA Insights podcast from Dana Gardner and friends. In this edition, the bulk of the time was spent discussing the relationship between SOA Governance and tools in the Governance and Compliance market (GCM).
I found this discussion very interesting, even if they didn’t make too many connections to the products classifying themselves as “SOA Governance” solutions. That’s not surprising though, because there’s no doubt that the marketers jumped all over the term governance in an effort to increase sales. Truth be told, there is a long, long way to go in connecting the two sets of technologies.
I'm not all that familiar with the GCM space, but the discussion did help to educate me. The GCM space is focused on corporate governance, clearly targeting the Sarbanes-Oxley space. There is no doubt that many, many dollars are spent within organizations on staying compliant with local, state, and federal (or your area's equivalent) regulations. Executives are required to sign off that appropriate controls are in place. I've had experience in the financial services industry, and there's no shortage of regulations that deal with handling investors' assets, and no shortage of lawsuits when someone feels that their investment intent has not been followed. Corporate governance doesn't end there, however. In addition to the external regulations, there are also the internal principles of the organization that govern how the company utilizes its resources. Controls must be put in place to provide documented assurances that resources are being used in the way they were intended. This frequently takes the form of someone reviewing some report or request for approval and signing their name on the dotted line. For these scenarios, there's a natural relationship between analytics, business intelligence, and data warehouse products, and the GCM space appears to have ties to this area.
So where does SOA governance fit into this space? Clearly, the tools that are claiming to be players in the governance space don't have strong ties to corporate governance. While automated checking of a WSDL file for WS-I adherence is a good thing, I don't think it's something that will need to show up in a SOX report anytime soon. Don't get me wrong, I'm a fan of what these tools can offer, but be cautious about assuming that the governance they claim has strong ties to your corporate governance. Even if we look at the financial aspect of projects, the tools still have a long way to go. Where do most organizations get this financial information? Probably from their project management and time accounting system. Is there integration between these tools, your source code management system, and your registry/repository? I know that BEA AquaLogic Enterprise Repository (Flashline) had the ability to track asset development costs and asset integration costs to provide an ROI for individual assets, but where do these cost numbers come from? Are they manually entered, or are they pulled directly from the systems of record?
Ultimately, the relationship between SOA Governance and Corporate Governance will come down to data. In a couple recent posts, I discussed the challenges that organizations may face with the metadata associated with SOA, as well as the management continuum. This is where these two worlds come together. I mentioned earlier that a lot of corporate governance is associated with the right people reviewing and signing off on reports. A challenge with systems of the past is their monolithic nature. Are we able to collect the right data from these systems to properly maintain appropriate controls? Clearly, SOA should break down these monoliths and increase the visibility into the technology component of the business processes. The management architecture must allow metrics and other metadata to be collected, analyzed, and reported to allow the controllers to make better decisions.
One final comment that I didn’t want to get lost. Neil Macehiter brought up Identity Management a couple times in the discussion, and I want to do my part to ensure it isn’t forgotten. I’ve mentioned “signoff” a couple times in this entry. Obviously, signoff requires identity. Where compliance checks are supported by a service-enabled GCM product, having identity on those service calls is critical. One of the things the controller needs to see is who did what. If I’m relying on metadata from my IT infrastructure to provide this information, I need to ensure that the appropriate identity stays with those activities. While there’s no shortage of rants against WS-*, we clearly will need a transport-independent way of sharing identity as it flows through the various technology components of tomorrow’s solutions.
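To make that last point a bit more concrete, here is a hedged sketch of what message-level identity propagation can look like on the Java side: a JAX-WS handler that stamps every outbound service call with a WS-Security UsernameToken, so the caller's identity travels with the message rather than with the transport. This is illustration only; a real deployment would typically use signed SAML or X.509 tokens and an interoperable security stack rather than hand-built headers, and the username would come from your authentication layer rather than a constructor argument.

```java
import java.util.Collections;
import java.util.Set;

import javax.xml.namespace.QName;
import javax.xml.soap.SOAPElement;
import javax.xml.soap.SOAPEnvelope;
import javax.xml.soap.SOAPHeader;
import javax.xml.ws.handler.MessageContext;
import javax.xml.ws.handler.soap.SOAPHandler;
import javax.xml.ws.handler.soap.SOAPMessageContext;

/**
 * Simplified sketch: a JAX-WS handler that adds a WS-Security UsernameToken
 * to each outbound request so the caller's identity rides along with the
 * message. Real deployments would use signed tokens, not a bare username.
 */
public class IdentityHeaderHandler implements SOAPHandler<SOAPMessageContext> {

    private static final String WSSE_NS =
        "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd";

    private final String username;

    public IdentityHeaderHandler(String username) {
        // In practice this would come from the authenticated caller, not a constructor.
        this.username = username;
    }

    public boolean handleMessage(SOAPMessageContext context) {
        Boolean outbound = (Boolean) context.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
        if (Boolean.TRUE.equals(outbound)) {
            try {
                SOAPEnvelope envelope = context.getMessage().getSOAPPart().getEnvelope();
                SOAPHeader header = envelope.getHeader();
                if (header == null) {
                    header = envelope.addHeader();
                }
                // Build <wsse:Security><wsse:UsernameToken><wsse:Username>...</...>
                SOAPElement security =
                    header.addChildElement(new QName(WSSE_NS, "Security", "wsse"));
                SOAPElement token =
                    security.addChildElement(new QName(WSSE_NS, "UsernameToken", "wsse"));
                token.addChildElement(new QName(WSSE_NS, "Username", "wsse"))
                     .addTextNode(username);
            } catch (Exception e) {
                throw new RuntimeException("Unable to attach identity header", e);
            }
        }
        return true;
    }

    public boolean handleFault(SOAPMessageContext context) { return true; }
    public void close(MessageContext context) { }
    public Set<QName> getHeaders() { return Collections.<QName>emptySet(); }
}
```

The mechanics matter less than the principle: whatever token format you choose, the identity needs to be carried at the message level so it survives every intermediary between the original caller and the system of record.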
Parallel Development and Integration
One topic that's come up repeatedly in my work is that of parallel development of service consumers and service providers. While over time we would expect these efforts to become more and more independent, many organizations are still structured as application development organizations. This typically means that services are likely identified as part of that application project and, therefore, will be developed in parallel. The efforts may all be under one project manager, or the service development efforts may be spun off as independently managed projects. Personally, I prefer the latter, as I think it increases the chances of keeping the service independent of the consumer, as well as establishing clear service ownership from the beginning. Regardless of your approach, there is a need to manage the development efforts so that chaos doesn't ensue.
To paint a picture of the problem, let's look at a popular technique today: continuous integration. In a continuous integration environment, there are a series of automated builds and tests that are run on a scheduled basis using tools like CruiseControl, Ant/NAnt, etc. In this environment, shortly after someone checks in some code, a series of tests are run that will validate whether any problems have been introduced. This allows problems to be identified very early in the process, rather than waiting for some formal integration testing phase. This is a good practice, if for no other reason than encouraging personal responsibility for good testing from the developers. No one likes to be the one who breaks the build.
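To make the setup concrete, here is a hedged sketch of the kind of provider-side test such a build might run on every check-in. The class, the data, and the in-memory stand-in for the development database are all hypothetical; the thing to notice is that the test assumes it owns the state of its environment, which is exactly where the trouble starts below.

```java
import static org.junit.Assert.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

// Hypothetical provider-side test run by the automated build. The map stands
// in for the development database behind the service. The test assumes it
// owns that state, which breaks down as soon as external consumers are
// pointed at the same environment.
public class CustomerServiceTest {

    // Shared, environment-wide state (imagine a dev database, not a local map).
    private static final Map<String, String> CUSTOMERS = new HashMap<String, String>();

    @Before
    public void setUp() {
        CUSTOMERS.clear();                       // wipe whatever was there
        CUSTOMERS.put("CUST-1", "Known Test Customer");
    }

    @Test
    public void lookupReturnsOnlySeededCustomer() {
        // Fails if a consumer request added or removed customers
        // between setUp() and this assertion.
        assertEquals(1, CUSTOMERS.size());
        assertEquals("Known Test Customer", CUSTOMERS.get("CUST-1"));
    }

    @After
    public void tearDown() {
        CUSTOMERS.clear();                       // and any consumer-created data is gone too
    }
}
```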
The challenge this creates with SOA, however, is that the service consumer and the service provider are supposed to be independent of each other. Continuous integration makes sense at the class/object level. The classes that compose a particular component of the system are tightly coupled, and should move in lock step. Service consumers and providers should be loosely coupled. They should share contract, not code. This contract should introduce some formality into the consumer/provider relationship, rather than viewing it in the same light as integration between two tightly coupled classes. What I've found is that when the handoffs between a service development team and a consumer development team are not formalized, sooner or later it turns into a finger-pointing exercise because something isn't working the way they'd like, typically due to assumptions regarding the stability of the service. Often, the service consumer is running in a development environment and trying to use a service that is also running in a development environment. The problem is that development environments, by definition, are inherently unstable. If that development environment is controlled by the automated build system, the service implementation may be changing three or more times a day. How can a service consumer expect consistent behavior when a service is changing that frequently? Those automated builds often include setup and teardown of testing data for unit tests. Incoming requests from a service consumer that aren't associated with those tests can cause the unit tests to fail, because those requests may change the state of the system. So how do we fix the problem? I see two key activities.
First, you need to establish a stable integration environment. You may be thinking, "I already have an integration testing environment," but is that environment used for integration with things outside of the project's control, or is it used for integration of the major components within the project's control? My experience has been the latter. This creates a problem. If the service development team is performing their own integration testing in that environment, say with a database dependency, they're testing things they need to integrate with, not things that want to integrate with them. If the service consumer uses the service in that same environment, that service is probably not stable, since it's being tested itself. You're setting yourself up for failure in this scenario. The right way, in my opinion, to address this is to create one or more stable integration environments. This is where services (and other resources) are deployed when they have a guaranteed degree of stability and are "open for business." This doesn't mean they are functionally complete, only that the service manager has clearly stated what things work and what things don't. The environment is dedicated for use by consumers of those services, not by the service development team. Creating such an environment is not easily done, because you need to manage the entire dependency chain. If a consumer invokes a service that updates a database and then pushes a message out on a queue for consumption by that original consumer, you can have a problem if that consumer is pointing at a service in one environment, but a MOM system in another environment. Overall, the purpose of creating this stable integration environment is to manage expectations. In an environment where things are changing rapidly, it's difficult to set any expectation other than that the service may change out from underneath you. That may work fine when four developers are sitting in cubes next to each other, but it makes it very difficult if the service development team is in an offshore development center (or even on another floor of the building) and the consumer development team is located elsewhere. While you can manage expectations without creating new environments, creating them makes it easier to do so. This leads to the second recommendation.
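Before getting to that second recommendation, one small practical note on the consumer side: switching a consumer from the provider's volatile development environment to a stable integration environment should be a configuration change, not a code change. Here's a minimal sketch, assuming a JAX-WS generated client and a hypothetical per-environment properties file; the file name and property key are invented for the example.

```java
import java.io.FileInputStream;
import java.util.Properties;

import javax.xml.ws.BindingProvider;

public class EndpointSelector {

    /**
     * Rebinds a JAX-WS port to whatever endpoint the current environment
     * defines, e.g. customerService.url=http://stable-int/customer in
     * stable-int.properties. File name and property key are hypothetical.
     */
    public static void bind(Object port, String propertiesFile, String key) throws Exception {
        Properties env = new Properties();
        FileInputStream in = new FileInputStream(propertiesFile);
        try {
            env.load(in);
        } finally {
            in.close();
        }

        String endpoint = env.getProperty(key);
        ((BindingProvider) port).getRequestContext()
            .put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY, endpoint);
    }
}
```

A consumer would call something like bind(customerPort, "stable-int.properties", "customerService.url") at startup, and the same code would later point at system test or production endpoints without modification.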
Regardless of whether you have stable integration environments or not, the handoffs between consumer and provider need to be managed. If they are not, your chances of things going smoothly go down. I recommend creating a formal release plan that clearly shows when iterations of the service will be released for integration testing. It should also show cutoff dates for when feature requests/bug reports must be received in order to make it into a subsequent iteration. Most companies are using iterative development methodologies, and this doesn't prevent that from occurring. Not all development iterations should go into the stable environment, however. Odds are, the consumer development efforts (especially if there's more than one) and the service development effort are not going to have their schedules perfectly synchronized. As a result, the service development team can't expect that a consumer will test particular features within a short timeframe. So, while a development iteration may occur every two weeks, maybe every third iteration goes into a stable integration environment, giving consumers six weeks to perform their integration testing. You may only have three or four stable integration releases of a service within its development lifecycle. Each release should have formal release notes and set clear expectations for service consumers. Which operations work and which ones don't? What data sets can be used? Can performance testing be done? Again, problems happen when expectations aren't managed. The clearer the expectations, the more smoothly things can go. It also makes it easier to see who dropped the ball when something does go wrong. If there's no formal statement regarding what's available within a service at any particular point in time, you'll just get a bunch of finger pointing that exposes the poor communication that has happened.
Ultimately, managing expectations is the key to success. The burden of this falls on the shoulders of the service manager. As a service provider, the manager is responsible for all aspects of the service, including customer service. This applies to all releases of a service, not just the ones in production. Providing good customer service is about managing expectations. What do you think of products that don't work the way you expect them to? Odds are, you'll find something else instead. Those negative experiences can quickly undermine your SOA efforts.
Added to the blogroll…
I just added another blog to my blogroll, and wanted to call attention to it as it has some excellent content. According to his “about me” section, Bill Barr is an enterprise architect on the west coast with a company in the hospitality and tourism industry. His blog is titled “Agile Enterprise Architecture” and I encourage all of my readers to check it out.
New Greg the Architect
Boy, YouTube’s blog posting feature takes a long time to show up. I tried it for the first time to create blog entries with embedded videos, but it still hasn’t shown up. Given that the majority of my readers have probably already seen it courtesy of links on ZDNet and InfoWorld, I’m caving and just posting direct links to YouTube.
The first video, released some time ago, can be viewed here. Watch this one first, if you’ve never seen it before.
The second video, just recently released and dealing with Greg’s ROI experience, can be found here.
Enjoy.
The Reuse Marketplace
Marcia Kaufman, a partner with Hurwitz & Associates, posted an article on IT-Director.com entitled "The Risks and Rewards of Reuse." It's a good article, and the three recommendations can really be summed up in one word: governance. While governance is certainly important, the article misses another, perhaps more important, factor: marketing.
When discussing reuse, I always refer back to a presentation I heard at SD East way back in 1998. Unfortunately, I don’t recall the speaker, but he had established reuse programs at a variety of enterprises, some successful and some not successful. He indicated that the factor that influenced success the most was marketing. If the groups that had reusable components/services/whatever were able to do an effective job in marketing their goods and getting the word out, the reuse program as a whole would be more successful.
Focusing on governance alone still means those service owners are sitting back and waiting for customers to show up. While the architectural governance committee will hopefully catch a good number of potential customers and send them in the direction of the service owner, that committee should be striving to reach "rubber stamp" status, meaning the project teams should have already sought out potential services for reuse. This means that the service owners need to be marketing their services effectively so that they get found in the first place. I imagine the potential customer using Google-like searches on the service catalog, but then within the service catalog, you'd have a very Amazon-like feel that might say things like "30% of other customers found this service interesting…" Service owners would be monitoring this data to understand why consumers are or are not using their services. They'd be able to see why particular searches matched, what information the customer looked at, and know whether the customer eventually decided to use the service/resource or not. Interestingly, this is exactly what companies like Flashline and ComponentSource were trying to do back in the 2000 timeframe, with Flashline offering a product to establish your own internal "marketplace" while ComponentSource was much more of a hosted solution aimed at a community broader than the enterprise. With the potential to utilize hosted services always on the rise, this makes it even more interesting, because you may want your service catalog to show you both internally created solutions and potential hosted solutions. Think of it as amazon.com on the inside, with Amazon partner content integrated from the outside. I don't know how easily one could go about doing this, however. While there are vendors looking at UDDI federation, what I've seen has been focused on internal federation within an enterprise. Have any of these vendors worked with, say, StrikeIron, so that hosted services show up in their repository (if the customer has configured it to allow them)? Again, it would be very similar to amazon.com. When you search for something on Amazon, you get some items that come from Amazon's inventory. You also get links to Amazon partners that have the same products, or even products that are only available from partners.
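Purely as a thought experiment, the data such a marketplace-style catalog would track per asset isn't complicated; the hard part is wiring it into search ranking and into the service owner's feedback loop. Everything in this sketch, from the class name to the metrics, is hypothetical.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical record of how consumers interact with one catalog asset
// (a service, a component library, or a hosted offering). A real marketplace
// would persist this and feed it into search ranking and provider reports.
public class CatalogEntryStats {

    private final String assetId;
    private final AtomicInteger searchMatches = new AtomicInteger();  // asset appeared in results
    private final AtomicInteger detailViews   = new AtomicInteger();  // consumer looked closer
    private final AtomicInteger adoptions     = new AtomicInteger();  // consumer committed to use it

    public CatalogEntryStats(String assetId) {
        this.assetId = assetId;
    }

    public void recordSearchMatch() { searchMatches.incrementAndGet(); }
    public void recordDetailView()  { detailViews.incrementAndGet(); }
    public void recordAdoption()    { adoptions.incrementAndGet(); }

    /** Rough "found it but didn't use it" signal a service owner could monitor. */
    public double conversionRate() {
        int views = detailViews.get();
        return views == 0 ? 0.0 : (double) adoptions.get() / views;
    }

    public String toString() {
        return assetId + ": matches=" + searchMatches + ", views=" + detailViews
            + ", adoptions=" + adoptions;
    }
}
```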
This is a great conceptual model; however, I do need to be a realist regarding the potential of such a robust tool today. How many enterprises have a service library large enough to warrant establishing such a rich, marketplace-like infrastructure? Fortunately, I do think this can work. Reuse is about much more than services. If all of your reuse is targeted at services, you're taking a big risk with your overall performance. A reuse program should address not only service reuse, but also reuse of component libraries, whether internal corporate libraries or third-party libraries, and even shared infrastructure. If your program addresses all IT resources that have the potential for reuse, the inventory may now be large enough to warrant an investment in such a marketplace. Just make sure that it's more than just a big catalog. It should provide benefit not only for the consumer, but for the provider as well.
Master Metadata/Policy Management
Courtesy of Dana Gardner’s blog, I found out that IONA has announced a registry/repository product, Artix Registry/Repository.
I'm curious if this is indicative of a broader trend. First, you had acquisitions of the two most prominent players in the registry/repository space: Systinet by Mercury, which was then acquired by HP, and Infravio by webMethods. For the record, Flashline was also acquired by BEA. You've had announcements of registry/repository solutions as part of a broader suite by IBM (WebSphere Registry/Repository), SOA Software (Registry), and now IONA. There are also Fujitsu/Software AG's CentraSite and LogicLibrary's Logidex, which are still primarily independent players. What I'm wondering is whether or not the registry/repository marketplace simply can't make it as an independent purchase, but will always be a mandatory add-on to any SOA infrastructure stack.
All SOA infrastructure products have some form of internal repository. Whether we're talking about a WSM system, an XML/Web Services gateway, or an ESB, they all maintain some internal configuration that governs what they do. You could even lump application servers and BPM engines into that mix if you so desire. Given the absence of domain-specific policy languages for service metadata, this isn't surprising. So, given that every piece of infrastructure has its own internal store, how do you pick one to be the "metadata master" of your policy information? Would someone buy a standalone product solely for that purpose? Or are they going to pick a product that works with the majority of their infrastructure, and then focus on integration with the rest? For the smaller vendors, it will mean that they have to add interoperability/federation capabilities with the platform players, because that's what customers will demand. The risk for the consumer, however, is that this won't happen. This means that the consumer will be the one to bear the brunt of the integration costs.
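For what it's worth, there is at least a standard client-side API in the Java world for querying a registry, JAXR, even though it doesn't address the harder problem of agreeing on policy semantics across products. A minimal sketch, assuming a UDDI registry whose inquiry endpoint lives at the hypothetical URL below:

```java
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;

import javax.xml.registry.BulkResponse;
import javax.xml.registry.BusinessQueryManager;
import javax.xml.registry.Connection;
import javax.xml.registry.ConnectionFactory;
import javax.xml.registry.JAXRException;
import javax.xml.registry.RegistryService;
import javax.xml.registry.infomodel.Service;

public class RegistryLookup {

    public static void main(String[] args) throws JAXRException {
        Properties props = new Properties();
        // Hypothetical inquiry endpoint for a UDDI registry.
        props.setProperty("javax.xml.registry.queryManagerURL",
                "http://registry.example.com/uddi/inquiry");

        ConnectionFactory factory = ConnectionFactory.newInstance();
        factory.setProperties(props);
        Connection connection = factory.createConnection();
        try {
            RegistryService registry = connection.getRegistryService();
            BusinessQueryManager query = registry.getBusinessQueryManager();

            // Find any service whose name contains "Customer".
            Collection<String> namePatterns = Collections.singleton("%Customer%");
            BulkResponse response = query.findServices(null, null, namePatterns, null, null);

            for (Object found : response.getCollection()) {
                Service service = (Service) found;
                System.out.println(service.getName().getValue());
            }
        } finally {
            connection.close();
        }
    }
}
```

The catch, of course, is that a lookup API only gets you to the metadata; it says nothing about whether two products will interpret that metadata the same way.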
I worry that the SOA policy/metadata management space will become no better than the Identity Management space. How many vendor products still maintain proprietary identity stores rather than allowing identity to be managed externally through the use of Active Directory/LDAP and some Identity Management and Provisioning solution? This results in expensive synchronization and replication problems that keep the IT staff from being focused on things that make a difference to the business. Federation and interoperability across these registry/repository platforms must be more than a checkbox on someone's marketing sheet; it must be demonstrable, supported, and demanded by customers as part of their product selection process. The last thing we need is a scenario where Master Data Management technologies are required to manage the policies and metadata of services. Let's get it right from the beginning.
SOA Politics
In the most recent Briefings Direct SOA Insights podcast (transcript here), the panel (Dana Gardner, Steve Garone, Joe McKendrick, Jim Kobielus, and special guest Miko Matsumura) discussed failures, governance, policies, and politics. There were a number of interesting tidbits in this discussion.
First, on the topic of failures, Miko Matsumura of webMethods called out that we may see some "catastrophic failures" in addition to some successes. Perhaps it's because I don't believe that there should be a distinct SOA project or program, but I don't see this happening. I'm not suggesting that SOA efforts can't fail; rather, if an effort fails, things will still be business as usual. If anything, that's what the problem is. Unless you've already embraced SOA and don't realize it, SOA adoption should be challenging the status quo. While I don't have formal analysis to back it up, I believe that most IT organizations still primarily have application development efforts. This implies the creation of top-to-bottom vertical solutions whose primary concern is not playing well in the broader ecosystem, but rather getting solutions to their stakeholders. That balance needs to shift to where the needs of the stakeholder of the application are equal to the needs of the enterprise. The pendulum needs to stay in the middle, balancing short term gains and long term benefits. Shifting completely to the long term view of services probably won't be successful, either. I suppose that it is entirely possible that organizations may throw a lot of money at an SOA effort, treating it as a project, which is prone to spectacular failure. The problem, however, began when it was made a project in the first place. A project will not create the culture change needed throughout the organization. The effort must influence all projects to be successful in the long term. The biggest risk in this type of an approach, however, is that IT spends a whole bunch of money changing the way they work but still winds up with the same systems they have today. They may not see reuse of services or improved productivity. My suspicion is that the problem here lies with the IT-business relationship. IT can only go so far on its own. If the business is still defining projects based upon silo-based application thinking, you're not going to leverage SOA to its fullest potential.
Steve Garone pointed out that two potential causes of these failures are "a difficulty in nailing down requirements for an application, the other is corporate backing for that particular effort going on." When I apply this to SOA, it's certainly true that corporate backing can be a problem, since that points to the IT-centric view I just mentioned. On the requirements side, this is equally important. The requirements are the business strategy. These are what will guide you to the important areas for service creation. If your company is growing through mergers and acquisitions, what services are critical to maximize the benefits associated with that activity? Clearly, the more quickly and cost effectively you can integrate the core business functionality of the two organizations, the more benefit you'll get out of the effort. If your focus is on cost reduction, are there redundant systems that are adding unnecessary costs to the bottom line? Put services in place to allow the migration to a single source for that functionality.
The latter half of the podcast talked about politics and governments, using world governments as a metaphor for the governance mechanisms that go on within an enterprise. This is not a new discussion for me, as this was one of my early blog entries that was reasonably popular. It is important to make your SOA governance processes fit within your corporate culture and governance processes, provided that the current governance process is effective. If the current processes are ineffective, then clearly a change is in order. If the culture is used to a more decentralized approach to governance and has been successful in meeting the company's goals based upon that approach, then a decentralized approach may work just fine. Keep in mind, however, that the corporate governance processes may not ripple down to IT. It's certainly possible to have decentralized corporate governance, but centralized IT governance. The key question to ask is whether the current processes are working or not. If they are, it's simply a matter of adding the necessary policies for SOA governance into the existing approach. If the current processes are not working, it's unlikely that adding SOA policies to the mix is going to fix anything. Only a change to the processes will fix it. Largely, this comes down to architectural governance. If you don't have anyone with responsibility for this, you may need to augment your governance groups to include architectural representation.
SOA Consortium
The SOA Consortium recently gave a webinar that presented their Top 5 Insights based upon a series of executive summits they conducted. Richard Mark Soley, Executive Director of the SOA Consortium, and Brenda Michelson of Elemental Links were the presenters.
A little background. The SOA Consortium is a new SOA advocacy group. As Richard Soley put it during the webinar, they are not a standards body, however, they could be considered a source of requirements for the standards organizations. I’m certainly a big fan of SOA advocacy and sharing information, if that wasn’t already apparent. Interestingly, they are a time-boxed organization, and have set an end date of 2010. That’s a very interesting approach, especially for a group focused on advocacy. It makes sense, however, as the time box represents a commitment. 12 practitioners have publicly stated their membership, along with the four founding sponsors, and two analyst firms.
What makes this group interesting is that they are interested in promoting business-driven SOA, and dispelling the notion that SOA is just another IT thing. Richard had a great anecdote of one CIO that had just finished telling the CEO not to worry about SOA, that it was an IT thing and he would handle it, only to attend one of their executive summits and change course.
The Top 5 insights were:
- No artificial separation of SOA and BPM. The quote shown in the slides was, "SOA, BPM, Lean, Six Sigma are all basically one thing (business strategy & structure) that must work side by side." They are right on the button on this one. The disconnect between BPM efforts and SOA efforts within organizations has always mystified me. I've always felt that the two go hand in hand. It makes no sense to separate them.
- Success requires business and IT collaboration. The slide deck presented a before and after view, with the after view showing a four-way, bi-directional relationship between business strategy, IT strategy, Enterprise Architecture, and Business Architecture. Two for two. Admittedly, as a big SOA (especially business-driven SOA) advocate, this is a bit of preaching to the choir, but it’s great to see a bunch of CIOs and CTOs getting together and publicly stating this for others to share.
- On the IT side, SOA must permeate the organization. They recommend the use of a Center of Excellence at startup, which over time shifts from a "doing" responsibility to a "mentoring" responsibility, eventually dissolving. Interestingly, that's exactly what the consortium is trying to do. They're starting out with a bunch of people who have had significant success with SOA, who are now trying to share their lessons learned and mentor others, knowing that they'll disband in 2010. I think Centers of Excellence can be very powerful, especially in something that requires the kind of cultural change that SOA will. Which leads to the next point.
- There are substantial operational impacts, but little industry emphasis. As we’ve heard time and time again, SOA is something you do, not something you buy. There are some great quotes in the deck. I especially liked the one that discussed the horizontal nature of SOA operations, in contrast to the vertical nature (think monolithic application) of IT operations today. The things concerning these executives are not building services, but versioning, testing, change management, etc. I’ve blogged a number of times on the importance of these factors in SOA, and it was great to hear others say the same thing.
- SOA is game changing for application providers. We’ve certainly already seen this in action with activities by SAP, Oracle, and others. What was particularly interesting in the webinar was that while everyone had their own opinion on how the game will change, all agreed that it will change. Personally, I thought these comments were very consistent with a post I made regarding outsourcing a while back. My main point was that SOA, on its own, may not increase or decrease outsourcing, but it should allow more appropriate decisions and hopefully, a higher rate of success. I think this applies to both outsourcing, as well as to the use of packaged solutions installed within the firewall.
Overall, this was a very interesting and insightful webinar. You can register and listen to a replay of it here. I look forward to more things to come from this group.
SOA and Enterprise Security
James McGovern asked a number of us in the blogosphere if we'd be willing to share some thoughts on security and SOA. First, I recommend you go and read James' post. He makes the claim that if you've adopted SOA successfully, it should make security concerns such as user-centric identity, single sign-on, on- and off-boarding of employees, asset management, etc. easier. I completely agree. I've yet to encounter an organization that's reached that point with SOA, but if they did, I think James' claims should hold true. Now on to the subject at hand, however.
I’ve shared some thoughts on security in the past, particularly in my post “The Importance of Identity.” Admittedly, however, it probably falls into the high level category. I actually look to James’ posts on security, XACML, etc. to fill in gaps in my own knowledge, as it’s not an area where I have a lot of depth. I’m always up for a challenge, however, and this space clearly is a challenge.
Frankly, I think security falls only slightly ahead of management when it comes to things that haven't received proper attention. We can thank the Internet and some high-profile security problems for elevating its importance in the enterprise. Unfortunately, security suffers from the same baggage as the rest of IT. Interestingly, though, security technology probably took a step backward when we moved off of mainframes and into the world of client-server. Because there was often a physical wire connecting that dumb terminal to the big iron, you had identity. Then along came client-server and N-tier systems with application servers, proxy servers, etc., and all of a sudden, we've completely lost the ability to trace requests through the system. Not only did applications have no concept of identity, the underlying programming languages didn't have any concept of identity, either. The underlying operating system did, but what good is it to know something is running as www?
James often laments the fact that so many systems (he likes to pick on ECM) still lack the ability to leverage an external identity management system, and instead have their own proprietary identity stores and management. He’s absolutely on the mark with this. Identity management is the cornerstone of security, in my opinion. I spent a lot of time working with an enterprise security architect discussing the use of SSL versus WS-Security, the different WS-Security profiles, etc. In the end, all of that was moot until we figured out how to get identity into the processing threads to begin with! Yes, .NET and Java both have the concept of a Principal. How about your nice graphical orchestration engine? Is identity part of the default schema that is the context for the process execution? I’m guessing that it isn’t, which means more work for your developers.
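To ground that a bit: the basic Java building blocks for carrying identity along the current call path do exist; the gap is that very little middleware carries them through automatically. A minimal sketch, using the standard Subject/Principal APIs and a hypothetical SimplePrincipal, of binding an identity to the work being performed so downstream code can ask "who is this?":

```java
import java.security.AccessController;
import java.security.Principal;
import java.security.PrivilegedAction;

import javax.security.auth.Subject;

public class IdentityPropagationDemo {

    // Hypothetical Principal implementation; real systems would get this
    // from their authentication layer (JAAS login module, SSO token, etc.).
    static class SimplePrincipal implements Principal {
        private final String name;
        SimplePrincipal(String name) { this.name = name; }
        public String getName() { return name; }
    }

    public static void main(String[] args) {
        Subject subject = new Subject();
        subject.getPrincipals().add(new SimplePrincipal("jdoe"));

        // Run the "business logic" with the subject bound to the access-control
        // context, so anything on this call path can recover the identity.
        Subject.doAs(subject, new PrivilegedAction<Void>() {
            public Void run() {
                handleRequest();
                return null;
            }
        });
    }

    static void handleRequest() {
        Subject current = Subject.getSubject(AccessController.getContext());
        for (Principal p : current.getPrincipals()) {
            System.out.println("Processing on behalf of: " + p.getName());
        }
    }
}
```

The hard part isn't this code; it's getting every hop, including that graphical orchestration engine, to populate and preserve the equivalent context instead of dropping the identity at its doorstep.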
So, unfortunately, all I can do is point out some potential pitfalls at this point. I haven’t had the opportunity to go deep in this space, yet, but hopefully this is enough information to get you thinking about the problems that lie ahead.
Good product design
Here’s a post from Innovations that nails it. It’s a discussion on Las Vegas, but the closing comments say it all, whether you’re designing a casino, a user interface, or services:
But our opinion doesn’t matter. What matters is the opinion of the customer, and it’s easy to marginalize or ignore that perspective in favor of seeing what we want to see. This is particularly true in technology product development, I believe, where the skill levels of the developers are leagues ahead of those of their ultimate customers. That’s one reason why software designers continue to add features to products when customers don’t use 80% of the features they already have. … Good product design means setting aside your biases and thinking like the customer, whether or not the customer is like you.
Well said.
Reference Architectures and Governance
In the March 5th issue of InfoWorld, I was quoted in Phil Windley’s article, “Teaming Up For SOA.” One of the quotes he used was the following:
Biske also makes a strong argument for reference architectures as part of the review process. “Architectural reviews tie the project back to the reference architecture, but if there’s no documentation that project can be judged against, the architectural review won’t have much impact.”
My thinking on this actually came from a past colleague. We were discussing governance, and he correctly pointed out that it is unrealistic to expect benefits from a review process when the groups being reviewed have no idea what they are being reviewed against. The policies need to be stated in advance. Imagine if your town had no speed limit signs, yet the police enforced a speed limit. What would be your chances of getting a ticket? What if your town had a building code, but the only place it existed was in the building inspector's head? Would you expect your building to pass inspections? What would be your feelings toward the police or the inspector after you received a ticket or were told to make significant structural changes?
If you're going to have reviews that actually represent an approval to go forward, you need to have documented policies. At the architectural level, this is typically done through the use of reference architectures. The challenge is that there is no accepted norm for what a reference architecture should contain. Rather than get into a semantic debate over the differences between a reference architecture, a reference model, a conceptual architecture, and any number of other terms that are used, I prefer to focus on what is important: giving guidance to the people who need it.
I use the term solution architect to refer to the person on a project that is responsible for the architecture of the solution. This is the primary audience for your reference material. There are two primary things to consider with this audience:
- What policies need to be enforced?
- What deliverable(s) does the solution architect need to produce?
Governance does begin with policies, and I put that one first for a reason. I worked with an organization that was using the 4+1 view model from Philippe Kruchten for modeling architecture. The problem I had was that the projects using this model were not adequately able to convey the services that would be exposed/utilized by their solution. The policy that I wanted enforced at the architecture review was that candidate services had been clearly identified, and potential owners of those services had been assigned. If the solution architecture can’t convey this, it’s a problem with the solution architecture format, not the policy. If you know your policies, you should then create your sample artifacts to ensure that those policies can be easily enforced through a review of the deliverable(s). This is where the reference material comes into play, as well. In this scenario, the purpose of the reference material is to assist the solution architect in building the required deliverable(s). Many architects may create future state diagrams that may be accurate representations, but wind up being very difficult to use in a solution architecture context. The projects are the efforts that will get you to that future state, so if the reference material doesn’t make sense to them, it’s just going to get tossed aside. This doesn’t bode well for the EA group, as it will increase the likelihood that they are seen as an ivory tower.
When creating this material, keep in mind that there are multiple views of a system, and a solution architect is concerned with all of them. I mentioned the 4+1 view model. That's five views right there, and one view it doesn't include is a management view (operational management of the solution). That's at least six distinct views. If your reference material consists of one master Visio diagram, you're probably trying to convey too much with it, and as a result, you're not conveying anything other than complexity to the people who need it. Create as many diagrams and views as necessary to ensure compliance. Governance is not about minimizing the number of artifacts involved; it's about achieving the desired behavior. If achieving desired behavior requires a number of diagrams and models, so be it.
Finally, architectural governance is not just about enforcing policies on projects. In Phil's article, some of my quotes also alluded to the project initiation process. EA teams have the challenge of trying to get the architectural goals achieved, but often without the ability to directly create and fund projects themselves. In order to achieve their goals, the current state and future state architectures must also be conveyed to the people responsible for IT governance. This is an entirely different audience, one for whom the reference architectures created for solution architects may be far too technical. While it would minimize the work of the architecture team to have one set of reference material that would work for everyone, that simply won't be the case. Again, the reference material needs to fit the audience. Just as a good service should fit the needs of its consumers, the reference material produced by EA must do the same.
ROI and SOA
Two ZDNet analysts, Dana Gardner and Joe McKendrick, have had recent posts (I’ve linked their names to the specific posts) regarding ROI and SOA. This isn’t something I’ve blogged on in the past, so I thought I’d offer a few thoughts.
First, let's look at the whole reason for ROI in the first place. Simply put, it's a measurement to justify investment. Investment is typically quantified in dollars. That's great; now we need to associate dollars with activities. Your staff has salaries or bill rates, so this shouldn't be difficult, either. Now is where things get complicated, however. Activities are associated with projects. SOA is not a project. An architecture is a set of constraints and principles that guide an approach to a particular problem, but it's not the solution itself. Asking for ROI on SOA is similar to asking for ROI on Enterprise Architecture, and I haven't seen much debate on that. That being said, many organizations still don't have EA groups, so there are plenty of CIOs that may still question the need for it as a formal job classification. Getting back to the topic, we can and do estimate costs associated with a project. What is difficult, however, is determining the cost at a fine-grained level. Can you determine the cost of developing services in support of that project accurately? In my past experience, trying to use a single set of fine-grained activities for both project management and time accounting was very difficult. Invariably, the project staff spent time on interactions that were needed to determine what the next step was. These actions never map easily into a standard task-based project plan, and as a result, caused problems when trying to charge time. (Aside: For an understanding of this, read Keith Harrison-Broninski's book Human Interactions or check out his eBizQ blog.) Therefore, it's going to be very difficult to put a cost on just the services component of a project, unless it's the entire scope of the project, which typically isn't the case.
Looking at the benefits side of the equation, it is certainly possible to quantify some expected benefits of the project, but again, only to a certain level. If you're strictly looking at IT, your only hope of coming up with ROI is to focus on cost reduction. IT is typically a cost center, with, at best, an indirect impact on revenue generation. How are costs reduced? This is primarily done by reducing maintenance costs. The most common approach is through a reduction in the number of vendor products involved and/or a reduction in the number of vendors involved. More stuff from fewer vendors typically means more bundling and greater discounts. There are other options, such as using open source products with no licensing fees, or at least discounted fees. You may be asking, "What about improved productivity?" This is an indirect benefit, at best. Why? Unless there is a reduction in headcount, the cost to the organization is fixed. If a company is paying a developer $75,000 / year, that developer gets that money regardless of how many projects get done and what technologies are used. Theoretically, however, if more projects are completed within a given time, you'd expect that there is a greater potential for revenue. That revenue is not based upon whether SOA was used or not; it's based upon the relevance of that project to business efforts.
So now we're back to the promise of IT: business agility. For a given project, ROI should be about measuring the overall project cost (not specific actions within it) plus any ongoing costs (maintenance) against business benefits (revenue gain) and ongoing cost reduction. So where will we get the best ROI? We'll get the best ROI by picking projects with the best business ROI. If you choose a project that simply rebuilds an existing system using service technologies, all you've done is incur cost, unless those services now create the potential for new revenue sources (a business problem, not a technology problem) or cost consolidation. Cost consolidation can come from IT on its own through reduction in maintenance costs, although if you're replacing one homegrown system with another, you only reduce costs if you reduce staff. If you get rid of redundant vendor systems, clearly there should be a reduction in maintenance fees. If you're shooting for revenue gain, however, the burden falls not to IT, but to the business. IT can only control the IT component of the project cost, and we should always be striving to reduce that through reuse and improved tooling. Ultimately, however, the return is the responsibility of the business. If the effort doesn't produce the revenue gain due to inaccurate market analysis, poor timing, or anything else, that's not the fault of SOA.
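To make the arithmetic concrete, here is a deliberately simplified and entirely hypothetical example of project-level ROI, treating the whole project (not the individual services inside it) as the unit of measurement:

```java
public class ProjectRoi {
    public static void main(String[] args) {
        // Entirely hypothetical numbers, for illustration only.
        double projectCost       = 500000;   // delivery cost of the whole project
        double annualMaintenance = 50000;    // ongoing cost
        double annualBenefit     = 300000;   // revenue gain plus cost reduction
        int years = 3;

        double totalCost    = projectCost + (annualMaintenance * years);
        double totalBenefit = annualBenefit * years;
        double roi = (totalBenefit - totalCost) / totalCost;

        System.out.printf("ROI over %d years: %.0f%%%n", years, roi * 100);
        // (900,000 - 650,000) / 650,000 = roughly 38%
    }
}
```

The point isn't the specific numbers; it's that both the cost and the benefit sides are project-level and business-level figures, not per-service ones.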
There are two last points I want to make, even though this entry has gone longer than I expected. First, Dana made the following statement in his post:
So in a pilot project, or for projects driven at the departmental level, SOA can and should show financial hard and soft benefits over traditional brittle approaches for applications that need integration and easy extensibility (and which don’t these days?).
I would never expect a positive ROI on a pilot project. Pilots should be run with the expectation that there are still unknowns that will cause hiccups in the project, causing it to run at a higher cost than a normal project. A pilot will then result in a more standardized approach for subsequent projects (the extend phase in my maturity model discussions) where the potential can be realized. Pilots should be a showcase for the potential, but they may not be the project that realizes it, so be careful in what you promise.
Dana goes on to discuss the importance of incremental gains from every project, and this I agree with. As he states, it shouldn't be an "if we build it, they will come" bet. The services you choose to build in initial projects should be ones you are confident will either be reused, or will be modified in the future, where the finer-grained boundaries allow those modifications to be performed at a lower cost than was previously the case.
Second, SOA is an exercise in strategic planning. Every organization has staff that isn’t doing project work, and some subset of that staff is doing strategic planning, whether formally or informally. Without the strategic plan, you’ll be hard pressed to have accurate predictions on future gains, thus making all of your ROI work pure speculation, at best. There’s always an element of speculation in any estimate, but it shouldn’t be complete speculation. So, the question then is not about separate funding for SOA. It’s about looking at what your strategic planners are actually doing. Within IT, this should fall to Enterprise Architecture. If they’re not planning around SOA, then what are they planning? If there are higher priority strategic activities that they are focused on, fine. SOA will come later. If not, then get to work. If you don’t have enterprise architecture, then who in IT is responsible for strategic planning? Put the burden on them to establish the SOA direction, at no increase in cost (presuming you feel it is higher priority than their other activities). If no one is responsible, then your problem is not just SOA, it’s going to be anything of a strategic nature.
Is the word “Application” still relevant?
Gary So of webMethods asked this question in a recent blog entry. He states:
With SOA, the concept of what constitutes an application will have to be rethought. You might have dozens of different services – potentially originating from different vendors, with some on-premise and some served up on demand – that, individually, don’t fulfill a complete business requirement but are collectively arranged to address a business need. Some of these services could also be used in addressing other, unrelated, business requirements. Traditional application logic – logic that would normally reside in the middle tier of a three-tier application architecture — is no longer monolithically implemented in an application server, but distributed among the services themselves and, potentially, process engines that manage the orchestration of these services, governance engines that apply policies related to the use of and interaction between services, and rules engines that externalize logic from everything else. Where in this scenario does one application begin and another end? Is the word “application” even relevant anymore?
I've actually thought about this a lot, and agree with Gary. The term application implies boundaries and silo-based thinking. Typical IT projects were categorized as one of two things: application development or application integration. In the case of application development, the view is of end-to-end ownership of the entire codebase. In the case of application integration, it was about finding a bunch of glue to stick between two or more rigid processing points that could not be modified. Neither of these models is where we want to be with SOA. Some have tried to differentiate the future by referring to systems of the past as monolithic applications. Personally, I don't like that either. Why? Simple. For me, the term application has always implied some form of end-to-end system, most typically with a user interface, with the exception of the batch application. If this is the case, what are we building when a project only produces services? It's not really an application, because it will just sit there until a consumer comes along. I also don't think that we should wait to build key services until consumers are ready, however. It's true that many services will be built this way, but there is no reason that some analysis can't occur to identify key or "master" services that will be of value to the enterprise.
The terminology I've begun using is solution development. An IT project produces a solution; it's as simple as that. That solution may involve the creation of a user interface, executable business processes, new tasks within a workflow system, services, databases, etc., or any subset of them. It may also make use of existing (whether previously built, purchased and installed, or hosted externally) user interface components, business processes, workflow activities, services, and databases. While vendors may still bundle all of these together and sell them as an "application," a key evaluation point will be whether those "applications" are built on a solid, composable architecture that allows pieces to be interchanged as needed.
I think that this topic is a good example of the cultural change that is associated with becoming a service-oriented enterprise. If all of your projects can still be defined as application development, you’re not there. In fact, you’re at risk of just creating a bunch of services, as those application boundaries are not conducive to the type of thinking required for success with SOA. If you’re running into projects where the term “application development” just doesn’t seem to make sense, then you’re starting to experience the culture change that comes with the territory. Have fun building your solutions that leverage your SOA!
Is the SOA Suite good or bad?
I haven't listened to the podcast yet, but Joe McKendrick posted a summary of the discussion in a recent Briefings Direct SOA Insights conversation organized by Dana Gardner. In his entry, Joe asks whether vendors are promoting an oxymoron in offering SOA suites. He states:
“Jumbo shrimp” and “government organization” are classic examples of oxymorons, but is the idea of an “SOA suite” also just as much a contradiction of terms? After all, SOA is not supposed to be about suites, bundles, integration packages, or anything else that smacks of vendor lock-in.
“The big guys — SAP, Oracle, Microsoft, webMethods, lots of software vendors — are saying, ‘Hey, we provide a bigger, badder SOA suite than the next guy,’” Jim Kobielus pointed out. “That raises an alarm bell in my mind, or it’s an anomaly or oxymoron, because when you think of SOA, you think of loose coupling and virtualization of application functionality across a heterogeneous environment. Isn’t this notion of a SOA suite from a single vendor getting us back into the monolithic days of yore?”
Personally, I have no issue with SOA suites. The big vendors are always going to go down this route, and if anything, it simply demonstrates just how far we have to go on the open integration front. If you follow this blog, you know that I've discussed SOA for IT. SOA for IT, in my mind, is all about integration across the technology infrastructure at a horizontal level, not a vertical level. SOA for business is concerned with the vertical-level semantics of the business, and allowing things to integrate in a business sense. SOA for IT is about integration at the technical level. Can my Service Management infrastructure talk to my Service Execution infrastructure? Can my Service Execution infrastructure talk to my Service Mediation infrastructure? Can my Service Mediation infrastructure talk to my Service Management infrastructure? The list goes on. Why is there still a need for these SOA suites? Simply put, we still lack standards for communication between these platforms. It's one thing to say all of the infrastructure knows how to speak with a UDDI v3 registry. It's another thing to have the infrastructure agree on the semantics of the metadata placed in a registry/repository (note, there's no standard repository API), and leverage that information successfully across a heterogeneous set of environments. The smaller vendors try to form coalitions to make this a reality, as was the case with Systinet's Governance Interoperability Framework, but as they get swallowed up by the big fish, what happens? IBM came out with WebSphere Registry/Repository and it introduced new, proprietary APIs. Competitive advantage for an all-IBM environment? Absolutely. If I don't have an all-IBM environment, am I that much worse off, however? If I have AmberPoint or Actional for SOA management, I'm still dealing with their proprietary interfaces and policy definitions, so vendor lock-in still exists. I'm just locked in to multiple vendors, rather than one.
The only way this gets fixed is if customers start demanding open standards for technology integration as part of their evaluation criteria. While the semantics of the information exchange may not exist yet, you can at least ask whether or not the vendor exposes management interfaces as services. Put another way, the internal architecture of the product needs to be consistent with the internal architecture of your IT systems. If you desire to have separation of management from enforcement, then your vendor products must expose management services. If the only way to configure their product is through a web-based user interface or by attempting to directly manipulate configuration files, this is going to be very costly for you if you're trying to reduce the number of independent management consoles that operations needs to deal with. Even if it's all IBM, Oracle, Microsoft, or whoever, the internal architecture of that suite needs to be consistent with your vendor-independent target architecture. If you haven't taken the time to develop one, then you're allowing the vendors to push their will on you.
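As a concrete illustration of what "expose management interfaces as services" could mean, here is a hypothetical sketch using the standard JAX-WS annotations. The operations are invented for the example; the point is that anything the vendor's web console can do should also be reachable through a service contract.

```java
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Hypothetical management service: the same operations the product's web
// console performs, exposed through a service contract so they can be
// driven by an external, vendor-neutral management layer.
@WebService
public class PolicyManagementService {

    @WebMethod
    public String getEnforcedPolicy(String serviceName) {
        // In a real product this would read from the internal policy store.
        return "default-policy";
    }

    @WebMethod
    public void setEnforcedPolicy(String serviceName, String policyId) {
        // ...and this would update it, with appropriate authorization checks.
        System.out.println("Applying policy " + policyId + " to " + serviceName);
    }

    public static void main(String[] args) {
        // Publish the endpoint; the address is hypothetical.
        Endpoint.publish("http://localhost:8080/management/policy",
                new PolicyManagementService());
        System.out.println("Management service published.");
    }
}
```

Whether the operations follow this shape or some standardized schema matters less than the fact that they exist at all; without them, separating management from enforcement stays a slide-ware concept.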
Let's use the city planning analogy. A suite vendor is akin to the major developer. Do the city planners simply say, "Here's your 80,000 acres, have fun"? That probably wouldn't result in a good deal for the city. Taking the opposite extreme, the city doesn't want individual property owners to do whatever they want, either. Last year, there was an article about a nearby town that had somehow managed to allow an adult store to set up shop next door to a daycare center in a strip mall. Not good. The right approach, whether you want to have a diverse set of technologies or a very homogeneous set, is to keep the power in the hands of the planners, and that is done through architecture. If you can remain true to your architecture with a single vendor, great. If you prefer to do it with multiple vendors, that's great as well. Just make sure that you're setting the rules, not them.