Archive for the ‘Maturity Model’ Category
Maturity Levels
While I was working on a maturity model, a colleague pointed out a potential pitfall: the way I had defined the levels, they were too focused on standardized processes, which was not my intent. Indeed, many maturity efforts, such as CMMI, tend to be all about establishing standard processes across the enterprise. The problem with this is that having standard processes doesn’t mean you’re actually getting the intended results from the capability. I’m sure this will ring true with my friend James McGovern, who has frequently lambasted CMMI in his blog. So, to fix things, I propose the following maturity levels, and I’d like feedback from my readers.
- Non-existent/Not applicable: The capability either does not exist at the organization, or is not needed.
- Ad hoc: The capability exists at the organization; however, the organization has no idea whether there is consistency in the way it is performed, whether within a team or across teams. There is no way to measure the costs and benefits associated with the capability, and no target costs and benefits have been set.
- Measurable: The capability exists at the organization, and the organization is tracking the costs and benefits associated with it. There is no consistency in how the capability is performed either within teams or across teams, as shown by the measurements. The organization does not have any target costs and benefits associated with it.
- Defined: The capability exists at the organization, the organization is tracking the costs and benefits associated with it, and the organization has defined target costs and benefits associated with the capability. There is inconsistency, however, in achieving those costs and benefits. Note that different teams can have different target costs and benefits, if the organization believes that a single, enterprise standard is not in its best interest.
- Managed: The capability exists at the organization, the organization is tracking the costs and benefits associated with it, target costs and benefits have been defined, and the teams executing the capability are all achieving those target costs and benefits.
- Optimizing: The capability is fully managed, and processes exist to continually monitor both the performance of the teams executing the capability and the target costs and benefits themselves, making changes as needed: new targets, new operational models (e.g., switching from a centralized approach to a decentralized one, or relying on a service provider), new processes, or any other change for the better.
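To make these criteria concrete, here’s a minimal sketch of how the levels could be evaluated as a simple decision chain. The predicate names are my own illustration, not part of any formal model:

```python
from dataclasses import dataclass

@dataclass
class Capability:
    """Hypothetical assessment record; the field names are illustrative."""
    exists: bool                  # the capability is present and needed
    costs_benefits_tracked: bool  # costs and benefits are being measured
    targets_defined: bool         # target costs/benefits exist (per team or enterprise-wide)
    targets_achieved: bool        # all executing teams are hitting their targets
    improvement_loop: bool        # targets, models, and processes are continually revisited

def maturity_level(c: Capability) -> str:
    """Map the cumulative criteria onto the proposed levels."""
    if not c.exists:
        return "Non-existent/Not applicable"
    if not c.costs_benefits_tracked:
        return "Ad hoc"
    if not c.targets_defined:
        return "Measurable"
    if not c.targets_achieved:
        return "Defined"
    if not c.improvement_loop:
        return "Managed"
    return "Optimizing"

# Example: costs and benefits are tracked and targets are set,
# but teams are inconsistent in achieving them -> "Defined"
print(maturity_level(Capability(True, True, True, False, False)))
```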
Maturity levels need to show continual improvement, and that can’t be solely about standardizing a process: a capability may not need to be standardized across the enterprise, and even standardized processes may not achieve the desired cost levels. Standardization is one way of getting there, and I’ve tried to make these descriptors applicable to many paths. Let me know what you think.
Gartner AADI: State of SOA
Presenter: Daniel Sholler
Dan is largely presenting results from some surveys that Gartner has done. Highlights:
- Adoption is increasing, but so is number of organizations that are choosing to delay/do nothing
- Nearly all organizations are at maturity level 1
- Of the very few organizations that are above level 1, interest/usage in WOA/REST is increasing
- More mature organizations are using services for B2B and multi-channel applications
- Only 1/3 of organizations adopting SOA are using an ESB
- Stage 2 maturity companies have nearly double the number of service consumers for the same number of services as Stage 1 maturity companies
- “BPM is the ‘killer app’ for SOA”
My thoughts: Nothing really surprising here. I’m not at all surprised that we’re at a very early stage of maturity. The statement that the more mature organizations are pursuing B2B and multi-channel opportunities is an aberration, in my opinion. I think those are simply opportunities that some organizations have and others don’t, rather than being tied to the maturity of the organization. The bullet point that more mature organizations have twice the number of consumers was interesting to me; that one seems to make sense from a maturity standpoint. The BPM comment isn’t surprising at all, because I don’t think you can do a good job with BPM without having services.
Gartner AADI: Maturity Assessment
I attended a session on assessing maturity in IT given by Susan Landry. She outlined their maturity model which covers eight different dimensions. The maturity model has familiar levels if you’ve looked at my work, CMMI, or COBIT. Level 1 is ad hoc, level 2 is repeatable, level 3 is defined, level 4 is quantitatively managed, and level 5 is optimizing. The eight assessment areas are:
- Application Portfolio Management (APM)
- Project Portfolio Management (PPM)
- Staffing, Skills, and Sourcing
- Financial Analysis and Budgets
- Vendor Management
- Management of Architecture
- Software Process
- Operations and Support
The most interesting part of the discussion, however, was a single slide that discussed the interdependencies between these areas. For each pair of areas, the relationship was classified as either highly or moderately interdependent. Having done a multi-dimensional maturity model before, I know a big challenge is determining what it means to score high in one dimension and low in another. In my SOA maturity model, I typically found that when scores were two or more levels apart, it was probably going to cause some imbalance in the organization. If the scores were even or a single level apart, it was probably workable. What I didn’t do, however, was explore those inter-relationships and see if that theory applied uniformly. While Gartner didn’t provide a lot of depth on the inter-relationships, they did at least make a statement about them, which can provide a lot of assistance in determining how to interpret the scoring and what actions to take.
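To illustrate the kind of check I’m describing, here’s a minimal sketch that flags any pair of dimensions whose scores are two or more levels apart, annotated with the pair’s interdependence. All of the scores and interdependence classifications below are invented for illustration; Gartner’s actual slide assigns its own high/moderate ratings:

```python
from itertools import combinations

# Illustrative maturity scores (1-5) for the eight assessment areas; values are made up.
scores = {
    "APM": 3, "PPM": 2, "Staffing/Skills/Sourcing": 4, "Financial Analysis": 2,
    "Vendor Management": 1, "Architecture": 4, "Software Process": 3, "Operations": 2,
}

# Hypothetical subset of the highly interdependent pairs; all other pairs
# are treated as moderately interdependent here.
high_interdependence = {
    frozenset(["APM", "PPM"]),
    frozenset(["Architecture", "Software Process"]),
}

def imbalances(scores, threshold=2):
    """Flag dimension pairs whose maturity scores differ by two or more levels."""
    for a, b in combinations(scores, 2):
        gap = abs(scores[a] - scores[b])
        if gap >= threshold:
            weight = "high" if frozenset([a, b]) in high_interdependence else "moderate"
            yield a, b, gap, weight

for a, b, gap, weight in imbalances(scores):
    print(f"{a} vs. {b}: {gap} levels apart ({weight} interdependence)")
```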
Gartner AADI Keynote
I’m in the keynote for the Gartner Application Architecture, Development, and Integration Summit. So far, the biggest thing that the panelists have mentioned is an overlay on the traditional Gartner hype cycle which defines things as in the “Zone of Competitive Leadership (ZCL),” the “Zone of the Mainstream (ZM),” and the “Zone of Decline (ZD).” If not apparent, things in the ZCL are higher risk but higher reward. Things in the ZM are necessary for companies to adopt simply to keep up with their competitors. Things in the ZD are for the most risk averse. Yefim Natis put client-server architectures in the Zone of Decline, as an example.

Yefim also called out that this coming year, “Interactive” SOA (which is Gartner’s “traditional” SOA) will move into the Zone of the Mainstream. I’ll have to go back and look up what their definition of interactive SOA is, but unless it’s just the standard integration-based approach to building the same old solutions, I’d be hesitant to say that SOA is mainstream. Using services where we were previously using EJBs or other distributed components clearly is mainstream at this point, but until we see more projects that are strictly about building services, and a separation between the construction of consumers and the construction of services, I think we’re still in the ZCL. Yefim also talked about other types of SOA, including Event-Driven SOA, WOA, and Context-based SOA, which he’ll cover in a later session.

There was then some brief discussion about the SOA platform, and now we’ve moved on to a discussion from Gene Phifer about Web 2.0, Cloud Computing, Computing as a Service, Social Networking, etc. It’s nice to hear them giving a decent amount of attention to this. Nothing new in the quick overview, but keynote-level attention is a good thing.

Finally, Matthew Hotle is wrapping up with a discussion about governance. They’re emphasizing a very lightweight approach, with just enough decision support to ensure that the organization can still be agile. There’s a session on their Maturity Assessment for Application Development later in the day that I plan on attending. From what they showed in the keynote, it’s a five level maturity approach that sounds very similar to what I’ve discussed in the past (here, here). More to come from the conference; I hope to be blogging throughout.
Measuring Your Success
As I’ve mentioned previously, I’m a member of the SOA Consortium. On a recent conference call, the subject of maturity models came up, and I discussed some of the work I’ve done on MomentumSI’s Maturity Model. The end result of the discussion was an email exchange between Ron Schmelzer of ZapThink and me about the whole purpose of maturity models. In my prior life working in an enterprise, I saw my fair share of maturity models on various subjects, and I know there were certainly some that I dismissed as marketing fluff. At the same time, there were others that piqued my interest, and even more interesting, I also spent time creating my own maturity models. So what gives? Are they good or bad?
The purpose of a maturity model is pretty straightforward: it’s an attempt to quantify where you fall on some continuum. Obviously, there are many ways that this can be problematic, but they usually fall into two categories. First, you may not think the subject at hand is worth measuring at all, in which case a maturity model is a waste of time. Second, you may agree that the subject is worth measuring, but disagree with the measuring scale. Frequent readers of my blog know that I’m no stranger to that second category, as I started a bit of a stir with some comments regarding the maturity model that David Linthicum had posted on his blog.
For this entry, the real question to ask is whether or not SOA adoption is something worth measuring. If you answer yes, then you’re going to need some yardstick to do so. I certainly think that any SOA adoption effort must include some way of assessing your efforts. If you don’t have this, how can you ever claim success? A maturity model is one way of assessing it. A roadmap with goals attached to points in time can certainly be another way of doing it, as well. I believe that you’ll need both. A maturity model approach is good because it’s not based upon reaching a point in time; it’s based on more qualitative factors, such as how you do things. You don’t become more mature with SOA by purchasing a product, for example. You become more mature by using a product appropriately, consistently, successfully, and in a repeatable fashion.

The challenge with maturity models, however, is that they are often created as a unit of comparison, rather than as a unit of assessment. This has everything to do with the yardstick of measurement. During the SOA Consortium discussion, someone brought up that some maturity models aren’t valuable because an organization may not be concerned with reaching the upper levels of maturity. When you create a maturity model that is intended as a comparison tool, you need a yardstick that works for all. For organizations that are content at level one or level two on your scale, there may not be much value. If you focus solely on a maturity model as an assessment tool, step one is to establish criteria for the levels that make sense for your organization. If you do this, you can now use it as a means of measuring your progress on your journey. You won’t be able to use it to compare yourself to your competitors, but that’s okay, because you’re measuring your activities, not theirs.
Roadmaps are good as well, because they are very execution-oriented. While a maturity model may call out some lofty, qualitative goals, it doesn’t articulate a path to get you there. Roadmaps, on the other hand, consist of activities and deadlines. At the same time, roadmaps can run a risk of being too focused on the what and when, and not on the how.
The important takeaway is that if you’re going to attempt to adopt SOA, you need a way of measuring it. As many have said, SOA is a journey; it’s not one project. My recommendation is to consider using both a maturity model and a roadmap with concrete milestones as tools for measuring your efforts. The roadmap will keep you focused on the execution, and the maturity model will ensure that you can capture the more qualitative aspects, such as service consumer/service provider interactions in the development lifecycle. What are your thoughts? Are maturity models a good thing or a bad thing? If they can be either, what makes for a good maturity model versus a lousy one?
Most popular posts to date
It’s funny how these syndicated feeds can be just like syndicated TV. I’ve decided to leverage Google Analytics and create a post with links to the most popular entries since January 2006. My blog isn’t really a diary of activities, but a collection of opinions and advice that hopefully remain relevant. While the occasional Google search will lead you to many of these, most have long since dropped off the normal RSS feed. So, much like long-running TV shows like to clip together a “best of” show, here’s my “best of” entry according to Google Analytics.
- Barriers to SOA Adoption: This was originally posted on May 4, 2007, and was in response to a ZapThink ZapFlash on the subject.
- Reusing reuse…: This was originally posted on August 30, 2006, and discusses how SOA should not be sold purely as a means to achieve reuse.
- Service Taxonomy: This was originally posted on December 18, 2006 and was my 100th post. It discusses the importance and challenges of developing a service taxonomy.
- Is the SOA Suite good or bad? This was originally posted on March 15, 2007 and stresses that whatever infrastructure you select (suite or best-of-breed), the important factor is that it fit within a vendor-independent target architecture.
- Well defined interfaces: This post is the oldest one on the list, from February 24, 2006. It discusses what I believe is the important factor in creating a well-defined interface.
- Uptake of Complex Event Processing (CEP): This post from February 19, 2007 discusses my thoughts on the pace that major enterprises will take up CEP technologies and certainly raised some interesting debate from some CEP vendors.
- Master Metadata/Policy Management: This post from March 26, 2007 discusses the increasing problem of managing policies and metadata, and the number of metadata repositories that may exist in an enterprise.
- The Power of the Feedback Loop: This post from January 5, 2007 was one of my favorites. I think it’s the first time that a cow-powered dairy farm was compared to enterprise IT.
- The expanding world of the “repistry”: This post from August 25, 2006 discusses registries, repositories, CMDBs and the like.
- Preparing the IT Organization for SOA: This is a June 20, 2006 response to a question posted by Brenda Michelson on her eBizQ blog, which was encouraging a discussion around Business Driven Architecture.
- SOA Maturity Model: This post on February 15, 2007 opened up a short-lived debate on maturity models, but this is certainly a topic of interest to many enterprises.
- SOA and Virtualization: This post from December 11, 2006 tried to give some ideas on where there was a connection between SOA and virtualization technologies. It’s surprising to me that this post is in the top 5, because you’d think the two would be an apples and oranges type of discussion.
- Top-Down, Bottom-Up, Middle-Out, Outside-In, Chicken, Egg, whatever: Probably one of the longest titles I’ve had, this post from June 6, 2006 discusses the many different ways that SOA and BPM can be approached, ultimately stating that the two are inseparable.
- Converging in the middle: This post from October 26, 2006 discusses my whole take on the “in the middle” capabilities that may be needed as part of SOA adoption along with a view of how the different vendors are coming at it, whether through an ESB, an appliance, EAI, BPM, WSM, etc. I gave a talk on this subject at Catalyst 2006, and it’s nice to see that the topic is still appealing to many.
- SOA and EA… This post on November 6, 2006 discussed the perceived differences between traditional EA practitioners and SOA adoption efforts.
Hopefully, you’ll give some of these older items a read. Just as I encouraged in my feedback loop post, I do leverage Google Analytics to see what people are reading, and to see what items have staying power. There’s always a spike when an entry is first posted (e.g. my iPhone review), and links from other sites always boost things up. Once a post has been up for a month, it’s good to go back and see what people are still finding through searches, etc.
SOA Consortium Goals
Alastair Bathgate is taking the SOA Consortium to task in a response to Joe McKendrick’s recent post about the goals of the SOA Consortium. The stated goal of the SOA Consortium, as Joe correctly states, is to have 75% of the Global 1000 complete a successful SOA implementation by 2010.
Alastair asks, “What on earth is the measure of success?” This type of questioning is certainly appropriate, and probably the biggest hurdle that the SOA Consortium will need to overcome. Joe McKendrick calls this out as well, stating that success with SOA is a long-term gain, and that “the only true measures of long-term success in the market are either increased revenues or increased stock values, and many factors besides SOA will contribute to this. The real issue is figuring out how to measure SOA’s contribution to this success.”
For the record, I am a member of the SOA Consortium. The first conference call I was on did include discussion on how to assess SOA success, so I have confidence that this will be addressed. Personally, I think the greater concern is Joe’s final comment: what is SOA’s contribution to the success? Already in my own efforts, there are plenty of things that SOA adoption can call attention to, such as having solid security and management architectures or standardized infrastructure practices; none of these are necessarily tied to SOA, but their absence can certainly make life more difficult for SOA adoption. If success is measured in terms of the company’s stock price, can you really say whether SOA had anything to do with a fall or climb? Financial statements are always interesting, because the raw numbers give some quantitative insight, but the qualitative analysis is somewhat of a free-for-all, highly subject to the views of both the company’s PR group and the analysts.
Alastair expressed some concern about ‘Gorilla Vendor sponsored “clubs”’ and that’s certainly within his right. Personally, I’m a glass half-full kind of guy, and always look for the good in these things. I think an SOA advocacy group is a good thing. It’s a targeted opportunity for group discussion, similar to a BoF session at a conference. I’ve learned a lot in these environments, and they haven’t been dominated by vendors pushing their will. Maybe I’m the exception, but to each their own. Everyone has to judge the value they see in these efforts and decide accordingly. In any case, I’m glad Alastair is paying attention and blogging on the subject, as it will only help the broader community in the long run.
So, rather than lamenting that SOA success is difficult to determine, what are your thoughts on how to measure SOA success? Comment or trackback with your thoughts. I’ll dedicate an upcoming post to my own thoughts, as well.
Horizontal and Vertical Thinking
I’ve been meaning to post on this subject for some time, so it’s good that I got to the airport a little earlier than normal today. There’s nothing like blogging at 5:30 in the morning.
As I mentioned in my last entry, I just finished listening to an IT Conversations podcast on the drive to the airport: a discussion on user experience with Irene Au, Director of User Experience for Google. One of the questions she took from the audience dealt with the notion of having a centralized group for user experience, versus having it be a decentralized activity. This question frequently comes up in SOA discussions, as well. Should your service development be centralized, or should your efforts be decentralized? There’s no right or wrong answer to this question; however, it’s certainly true that your choices can impact your success.

In the podcast, Irene discussed the matrixed approach at Yahoo, and how everything wound up being funded by business units. This made it difficult to do activities for the greater good, such as style guides, etc. The business unit wanted to maximize their investment and have those resources focused on their activities, not someone else’s. Putting this same topic in the context of SOA, this would be the same as having user-facing application teams developing services. The challenge is that the business unit wants that user-facing application, and they want it yesterday. How do we create services that aren’t solely of value to just that application? At the opposite extreme, things can be centralized. Irene discussed the culture of open office hours at Google, and how she’ll have a line of people outside her office with their user experience questions in hand. While this may allow her to maintain a greater level of consistency, resource management can be a big challenge, as you are being pulled in multiple directions. Again, putting this in the SOA context, the risk is that in the quest for the perfect enterprise service, you can put individual project schedules at risk as they wait for the service they depend on. These are both extreme positions, and seldom is an organization at one extreme or the other; usually it’s somewhere in the middle.
In trying to tackle this problem, it’s useful to think of things as either horizontal or vertical. Horizontal concepts are ones where breadth is the more important concern. For example, infrastructure is most frequently presented as a horizontal concern. I can take a four-CPU server and run just about anything I’d like on it these days, meaning it provides broad coverage across a variety of functional domains. A term frequently used these days is commodity hardware, and the notion of commoditization is a characteristic of horizontal domains. When breadth becomes more important than depth, there usually aren’t many opportunities for innovation. Largely, activities become more focused on reducing cost by leveraging economies of scale. Commoditization and standardization go hand in hand, as it’s difficult to classify something as a commodity if it doesn’t meet some standard criteria. In the business world, these horizontal areas are also the ones that are frequently outsourced, as all companies typically do them the same way, meaning there is little room for competitive differentiation.
Vertical concepts are ones where depth is the more important concern. In contrast to the commoditization associated with horizontal concerns, vertical areas are ones where innovation can occur and where companies can set themselves apart from their competitors. User experience is still an area where significant differentiation can occur, so most user-facing applications fall into this category. Business knowledge and customer experience (preferably at a partnership level, with customers involved in the process) are key at this level.
By nature, vertical and horizontal concerns are orthogonal and can create tension. I have a friend who works as a user experience consultant, and he once asked me how to balance the concerns that come from a user experience focus, clearly rooted in the vertical domains, with the concerns of SOA, which are arguably more horizontal. There’s no easy answer. Just as the business must make decisions over time on where to commoditize and where to specialize, the same holds true for IT. Apple is a great example to look at, as their decision not to commoditize in their early days clearly resulted in them being relegated to niche (but still relevant) status in computer sales. That same principle, however, of remaining vertically focused with tight top-to-bottom control has now resulted in their successes with their computers, iTunes, Apple TV, the iPod, and the forthcoming iPhone. There are a number of ways to be successful, although far fewer ways than there are to be unsuccessful.
When trying to slice up your functional domains into domains of services, you must certainly align them with the business goals. If there is an area of the business where they are trying to create competitive differentiation, it is probably not the best place to look for services that will have broad enterprise reuse. Much depends, though, on whether technology plays a direct or an indirect role in that differentiation; for example, whether the business-to-consumer interaction is solely through a website, or through a company representative (e.g., a financial advisor). The areas that are closest to the end user are likely to require some degree of verticality to allow for tighter controls and differentiation. That’s not to say they own the entire solution, top to bottom, however, which would be a monolith.
As we go deeper into the stack, you should look for areas where the benefits of commoditization and standardization outweigh the benefits of customization. It may begin at a domain level, such as integration across a suite of applications for a single business unit, with each successive level increasing the breadth of coverage. There is no single point where the vertical solutions stop and everything beneath has enterprise breadth; rather, it is a continuum of decreasing emphasis on depth and increasing emphasis on breadth. An Internet company may try to differentiate itself in the end-user facing systems that users interact with, allowing a large degree of autonomy for each product line. The supporting services behind those user interfaces will increase in the breadth of their scope, but still may require some degree of specialization, such as having to deal with a particular region of a country, or even of the world for global organizations. We then bleed into the area of “applistructure” and solutions that fall more into the support arena. A CRM system will have close ties to the end-user facing sales portal, but the breadth of CRM coverage may need to span multiple lines of business, unlike the sales portal, where specialization could occur. Going deeper, we have applications that may have no ties to the end-user facing systems but are still necessary to run a business, such as human resources. Interestingly, if you’re following my logic, you may be thinking that these things should all be outsourced, but the truth is that many of these areas are still far from being commoditized. That’s because they involve user-facing components, which brings us back to those vertical domains where customization can add value. An organization that outsources the technology side of HR, but doesn’t have an associated reduction in HR staff, may have a potential conflict when it wants specialized HR processes but is dealing with commodity interfaces and systems. Put simply, you can’t have it both ways.
The trend continues on down the stack to the infrastructure and the world of the individual developer. If you truly want to adopt SOA from top to bottom, there should be a high degree of commoditization and standardization at this level. Organizations where solutions are still built top-to-bottom, with customized hardware platforms, source code management, programming languages, etc., are going to struggle with SOA, because their culture is vertically-oriented to an extreme.
Whatever the speed of change, business decisions on what is a core competency and what is not do not change overnight. Taking an organization where each product group had its own staff (vertically-oriented) and switching it to a centralized sales organization (horizontally-oriented) is a significant cultural change, and often doesn’t go smoothly. You only need to look at the percentage of mergers and acquisitions that have been deemed successful (less than 50%) to understand the difficulty. Switching from vertically-focused IT solutions to horizontally-focused IT solutions is just as difficult, if not more so. Humans are significantly more adaptable than technology solutions, which at the core are binary, yes/no environments. The important thing is to recognize when misalignment is occurring and take action to fix it. That’s all about governance. If users are trying to apply significant customization to a technology area that has been deemed a commodity by the business, push back needs to occur to emphasize that the greater good takes precedence over individual needs. If IT is taking far too long to deliver a solution in an area where time to market and competitive differentiation are critical, remove the barriers and give that group more control over the solution, at the expense of broader applicability. If you don’t know what your priorities and principles are for each domain, however, you’ll end up in an endless sequence of meetings rooted in opinions, rather than in documented principles and behaviors desired by the organization.
External consumers and providers
James McGovern, in his links entry for April 11th, posted this comment regarding my entry on what SOA adoption actually means:
“A measurement that would be interesting is to ask enterprises how many services do you have that are consumed outside of your enterprise. The numbers would be dramatically lower…”
As I thought about this, it became more and more interesting. First, I definitely agree that the number of services is going to be dramatically lower, unless your company is already a service provider (think ASP), in which case they should constitute the majority of your service portfolio. What about other verticals, however? Certainly supply chain interactions will involve external entities. Truth be told, there’s a lot of potential for interactions with partner companies. How many companies outsource payroll processing to ADP or someone else? I’d venture a guess that there are probably areas for commodity services in every vertical. Over time, things that once were competitive differentiators become commodities. Once that happens, a marketplace opens up for commodity providers that focus on operational excellence and low cost, and the companies that prefer to focus on customer service get rid of their homegrown infrastructure and leverage the commodity provider. Guess what: when that happens, the potential now exists for service interactions. I recently presented some introductory information on service concepts and described business services as both the services that represent the primary business functions and the ones that support the primary business, such as HR, payroll, etc. Technology clearly plays a big role in both.
You may be thinking, “No arguments on what you said, but James asked about services consumed by outsiders, not provided by outsiders.” Quite true, but again, I’d be willing to bet that the vast majority of these B2B interactions will require bi-directional communication. It may be the case that 90% of the time the partner acts in the service provider role, but odds are that some of that processing will require them to make service calls back to you. At a minimum, some form of events should be flowing back into your infrastructure. The more the information flow can be a circle, rather than a one-way line, the greater the potential for leveraging emerging technologies like CEP for continued innovation. If the information only flows one way, you severely restrict your ability to innovate based on that information.
Aside: James also posted some musings yesterday on open source and the possibilities of it playing a role in commodity vertical applications. If that happened, there would certainly be implications. It probably wouldn’t take long for someone to create a hosted solution for these open source offerings, again creating the potential for service interactions between the two companies.
The end result of my thinking on this is that if your thinking on SOA is constrained to inside your firewall, it won’t be very long at all before you need to extend it, both as a consumer of services provided from the outside and as a provider of services that will be consumed by the outside. Companies that claim they’ve “adopted SOA” should have a view that encompasses all of it, regardless of whether their core business is being a service provider or not.
SOA Adoption – What does it mean?
Evans Data Corporation released another survey which, according to Sys-Con’s article, stated:
…close to a quarter of enterprise-level developers indicated that they already have service-oriented architecture in place, and another 28% plan to do so within the next 24 months.
I have two major issues with this. First, while I don’t mean to diminish the role of the developer, that’s not the person I would be asking about SOA adoption. From a developer’s perspective, this could mean anything from having used web service technologies within a single project to having tens or hundreds of shared services orchestrated by BPM solutions. What does the Chief Architect or the CIO say? That would increase my faith in this data, although there’s still problem number two: what does it mean to adopt SOA, and how do you know when you’re done? The statement in the Sys-Con release says, “have service-oriented architecture in place.” Again, what does that mean? You don’t put SOA in place, you put services in place. The architecture is the set of guiding principles that led to the particular services that are in production. What is the domain of those guiding principles? Was it a project? A line of business? The entire enterprise? I would have liked to see every respondent who said they had SOA in place also answer a question asking them to describe their SOA, followed by some analysis of the responses. It certainly could have made for some good quotes.
This does come back to the notion of a maturity model, or at least some form of assessment criteria. In the maturity model that I’ve worked on and continue to refine, the very first level above ad hoc activity is all about planning. Part of planning is establishing goals and assessment criteria. It’s at that stage where you define what SOA adoption is for your company and what criteria you’re going to use to evaluate where you’re at in the effort. Is technology infrastructure part of it? Absolutely. Are your architectural models part of it? Absolutely. Are your operational processes part of it? Absolutely. Are your governance processes part of it? Absolutely. Is your organizational structure/approach part of it? Absolutely. There’s a lot more to it than just buying a product or building a few web services.
SOA Consortium
The SOA Consortium recently gave a webinar that presented their Top 5 Insights, based upon a series of executive summits they conducted. Richard Mark Soley, Executive Director of the SOA Consortium, and Brenda Michelson of Elemental Links were the presenters.
A little background: the SOA Consortium is a new SOA advocacy group. As Richard Soley put it during the webinar, they are not a standards body; however, they could be considered a source of requirements for the standards organizations. I’m certainly a big fan of SOA advocacy and sharing information, if that wasn’t already apparent. Interestingly, they are a time-boxed organization, and have set an end date of 2010. That’s a very interesting approach, especially for a group focused on advocacy. It makes sense, however, as the time box represents a commitment. Twelve practitioners have publicly stated their membership, along with the four founding sponsors and two analyst firms.
What makes this group interesting is that they are interested in promoting business-driven SOA, and dispelling the notion that SOA is just another IT thing. Richard had a great anecdote of one CIO that had just finished telling the CEO not to worry about SOA, that it was an IT thing and he would handle it, only to attend one of their executive summits and change course.
The Top 5 insights were:
- No artificial separation of SOA and BPM. The quote shown in the slides was, “SOA, BPM, Lean, Six Sigma are all basically one thing (business strategy & structure) that must work side by side.” They are right on the button on this one. The disconnect between BPM efforts and SOA efforts within organizations has always mystified me. I’ve always felt that the two go hand in hand. It makes no sense to separate them.
- Success requires business and IT collaboration. The slide deck presented a before and after view, with the after view showing a four-way, bi-directional relationship between business strategy, IT strategy, Enterprise Architecture, and Business Architecture. Two for two. Admittedly, as a big SOA (especially business-driven SOA) advocate, this is a bit of preaching to the choir, but it’s great to see a bunch of CIOs and CTOs getting together and publicly stating this for others to share.
- On the IT side, SOA must permeate the organization. They recommend the use of a Center of Excellence at startup, which over time shifts from a “doing” responsibility to a “mentoring” responsibility, eventually dissolving. Interestingly, that’s exactly what the consortium is trying to do. They’re starting out with a bunch of people who have had significant success with SOA, who are now trying to share their lessons learned and mentor others, knowing that they’ll disband in 2010. I think Centers of Excellence can be very powerful, especially in something that requires the kind of cultural change that SOA will. Which leads to the next point.
- There are substantial operational impacts, but little industry emphasis. As we’ve heard time and time again, SOA is something you do, not something you buy. There are some great quotes in the deck. I especially liked the one that discussed the horizontal nature of SOA operations, in contrast to the vertical nature (think monolithic application) of IT operations today. The things concerning these executives are not building services, but versioning, testing, change management, etc. I’ve blogged a number of times on the importance of these factors in SOA, and it was great to hear others say the same thing.
- SOA is game changing for application providers. We’ve certainly already seen this in action with activities by SAP, Oracle, and others. What was particularly interesting in the webinar was that while everyone had their own opinion on how the game will change, all agreed that it will change. Personally, I thought these comments were very consistent with a post I made regarding outsourcing a while back. My main point was that SOA, on its own, may not increase or decrease outsourcing, but it should allow more appropriate decisions and hopefully, a higher rate of success. I think this applies to both outsourcing, as well as to the use of packaged solutions installed within the firewall.
Overall, this was a very interesting and insightful webinar. You can register and listen to a replay of it here. I look forward to more things to come from this group.
ROI and SOA
Two ZDNet analysts, Dana Gardner and Joe McKendrick, have had recent posts (I’ve linked their names to the specific posts) regarding ROI and SOA. This isn’t something I’ve blogged on in the past, so I thought I’d offer a few thoughts.
First, let’s look at the whole reason for ROI in the first place. Simply put, it’s a measurement to justify investment. Investment is typically quantified in dollars. That’s great; now we need to associate dollars with activities. Your staff has salaries or bill rates, so this shouldn’t be difficult, either. Now is where things get complicated, however. Activities are associated with projects, and SOA is not a project. An architecture is a set of constraints and principles that guide an approach to a particular problem; it’s not the solution itself. Asking for ROI on SOA is similar to asking for ROI on Enterprise Architecture, and I haven’t seen much debate on that. That being said, many organizations still don’t have EA groups, so there are plenty of CIOs that may still question the need for it as a formal job classification. Getting back to the topic, we can and do estimate costs associated with a project. What is difficult, however, is determining the cost at a fine-grained level. Can you accurately determine the cost of developing services in support of that project? In my past experience, trying to use a single set of fine-grained activities for both project management and time accounting was very difficult. Invariably, the project staff spent time on interactions that were needed to determine what the next step was. These actions never map easily into a standard task-based project plan, and as a result, cause problems when trying to charge time. (Aside: for an understanding of this, read Keith Harrison-Broninski’s book Human Interactions or check out his eBizQ blog.) Therefore, it’s going to be very difficult to put a cost on just the services component of a project, unless services are the entire scope of the project, which typically isn’t the case.
Looking at the benefits side of the equation, it is certainly possible to quantify some expected benefits of the project, but again, only to a certain level. If you’re strictly looking at IT, your only hope of coming up with ROI is to focus on cost reduction. IT is typically a cost center, with, at best, an indirect impact on revenue generation. How are costs reduced? Primarily by reducing maintenance costs. The most common approach is through a reduction in the number of vendor products involved and/or a reduction in the number of vendors involved; more stuff from fewer vendors typically means more bundling and greater discounts. There are other options, such as using open source products with no licensing fees, or at least discounted fees. You may be asking, “What about improved productivity?” This is an indirect benefit, at best. Why? Unless there is a reduction in headcount, the cost to the organization is fixed. If a company is paying a developer $75,000 / year, that developer gets that money regardless of how many projects get done and what technologies are used. Theoretically, however, if more projects are completed within a given time, there is a greater potential for revenue. That revenue is not based upon whether SOA was used or not; it’s based upon the relevance of that project to business efforts.
So now we’re back to the promise of IT: business agility. For a given project, ROI should be about measuring the overall project cost (not specific actions within it) plus any ongoing costs (maintenance) against business benefits (revenue gain) and ongoing cost reduction. So where will we get the best ROI? By picking projects with the best business ROI. If you choose a project that simply rebuilds an existing system using service technologies, all you’ve done is incur cost, unless those services now create the potential for new revenue sources (a business problem, not a technology problem) or cost consolidation. Cost consolidation can come from IT on its own through reduction in maintenance costs, although if you’re replacing one homegrown system with another, you only reduce costs if you reduce staff. If you get rid of redundant vendor systems, clearly there should be a reduction in maintenance fees. If you’re shooting for revenue gain, however, the burden falls not to IT, but to the business. IT can only control the IT component of the project cost, and we should always be striving to reduce that through reuse and improved tooling. Ultimately, however, the return is the responsibility of the business. If the effort doesn’t produce the revenue gain due to inaccurate market analysis, poor timing, or anything else, that’s not the fault of SOA.
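As a back-of-the-envelope illustration of that framing (overall project cost plus ongoing costs, measured against revenue gain and cost reduction), here’s a small sketch. All of the numbers are invented:

```python
def project_roi(project_cost, annual_maintenance, annual_revenue_gain,
                annual_cost_reduction, years=3):
    """ROI over a horizon: (total benefits - total costs) / total costs."""
    costs = project_cost + annual_maintenance * years
    benefits = (annual_revenue_gain + annual_cost_reduction) * years
    return (benefits - costs) / costs

# Illustrative: a $500k project with $50k/yr maintenance, producing $200k/yr
# in new revenue and $75k/yr in consolidation savings, over 3 years.
print(f"{project_roi(500_000, 50_000, 200_000, 75_000):.0%}")  # -> 27%
```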
There are two last points I want to make, even though this entry has gone longer than I expected. First, Dana made the following statement in his post:
So in a pilot project, or for projects driven at the departmental level, SOA can and should show financial hard and soft benefits over traditional brittle approaches for applications that need integration and easy extensibility (and which don’t these days?).
I would never expect a positive ROI on a pilot project. Pilots should be run with the expectation that there are still unknowns that will cause hiccups, causing the project to run at a higher cost than a normal project. A pilot will then result in a more standardized approach for subsequent projects (the extend phase in my maturity model discussions) where the potential can be realized. Pilots should be a showcase for the potential, but they may not be the project that realizes it, so be careful in what you promise.
Dana goes on to discuss the importance of incremental gains from every project, and this I agree with. As he states, it shouldn’t be an “if we build it, they will come” bet. The services you choose to build in initial projects should be ones you have a high degree of confidence will either be reused, or will be modified in the future in ways where the finer-grained boundaries allow those modifications to be performed at a lower cost than was previously the case.
Second, SOA is an exercise in strategic planning. Every organization has staff that isn’t doing project work, and some subset of that staff is doing strategic planning, whether formally or informally. Without the strategic plan, you’ll be hard pressed to have accurate predictions on future gains, thus making all of your ROI work pure speculation, at best. There’s always an element of speculation in any estimate, but it shouldn’t be complete speculation. So, the question then is not about separate funding for SOA. It’s about looking at what your strategic planners are actually doing. Within IT, this should fall to Enterprise Architecture. If they’re not planning around SOA, then what are they planning? If there are higher priority strategic activities that they are focused on, fine. SOA will come later. If not, then get to work. If you don’t have enterprise architecture, then who in IT is responsible for strategic planning? Put the burden on them to establish the SOA direction, at no increase in cost (presuming you feel it is higher priority than their other activities). If no one is responsible, then your problem is not just SOA, it’s going to be anything of a strategic nature.
How many versions?
While heading back to the airport from a recent engagement, Alex Rosen and I had a brief discussion on service versioning. He said, “You should blog on that.” So, thanks for the idea, Alex.
Sooner or later, as you build up your service library, you’re going to have to make a change to a service. Agility is one of the frequently touted benefits of SOA, and one way of looking at agility is as the ability to respond to change. When this situation arises, you will need to deal with versioning, and in order to remain agile, you should prepare for it in advance.
There are two extreme viewpoints on versioning, and not surprisingly, they match up with the two endpoints of a service interchange: the service consumer and the service provider. From the consumer’s point of view, the extreme stance would be that the number of versions of a service remains uncapped. In this way, systems that are working fine today don’t need to be touched if a change is made that they don’t care about. This is great for the consumer, but it can become a nightmare for the provider. The number of independent implementations of the service that must be managed by the provider continually grows, increasing the management costs and thereby reducing the gains that SOA was intended to achieve. In a worst-case scenario, each consumer would have their own version of the service, resulting in the same monolithic architectures we have today, except with some XML thrown in.
From the provider’s point of view, the extreme stance would be that only one service implementation ever exists in production. While this minimizes the management cost, it also requires that all consumers move in lock step with the service, which is very unlikely to happen when there is more than one consumer involved.
In both of these extreme examples, I’m deliberately not getting into the discussion of what the change is. While backwards compatibility can influence the decision, my experience has shown that regardless of whether the service provider claims 100% backwards compatibility or not, both the consumer and the provider should be executing regression tests. My father was an electrician, and I worked with him for a summer after my freshman year in college. He showed me how to use a “wiggy” (a portable voltage tester) for checking whether power was running to an outlet, and told me, “If you’re going to work an outlet, always check if it’s hot. Even if one of the other electricians or even me tells you the power is off, you still check if it’s hot.” Simply put, you don’t want to get burned. Therefore, there will always be a burden on the service consumers when the service changes. The provider should supply as much information as possible so that the effort of the consumer is minimized, but the consumer should never implicitly trust that what the provider says is accurate without testing.
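As an illustration of the consumer-side regression check I’m describing, here’s a minimal sketch that tests only the response fields this particular consumer relies on. The service call and response shape are hypothetical stand-ins; in a real suite, get_customer would invoke the provider’s deployed endpoint:

```python
import unittest

# Hypothetical stand-in for a call to the provider's service.
def get_customer(customer_id: str) -> dict:
    return {"id": customer_id, "name": "Acme Corp", "status": "active"}

class CustomerServiceRegression(unittest.TestCase):
    """Re-run on every provider release, even ones claimed to be backwards compatible."""

    def test_contract_fields_still_present(self):
        response = get_customer("42")
        for field in ("id", "name", "status"):  # the fields this consumer depends on
            self.assertIn(field, response)

if __name__ == "__main__":
    unittest.main()
```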
Back to the discussion: if we have these two extremes, the right answer is somewhere in the middle. Choosing an arbitrary number isn’t necessarily a good approach. For example, suppose the service provider states that no more than 3 versions of a service will be maintained in production. If that service changes every 3 months due to high demand, the version released in January will be decommissioned in September. If the consumer of that first version is only touched every 12 months, you’ve got a problem. You’re now burdening that consumer team with additional work that doesn’t fit into their normal release cycle.
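Formalizing that example: the support window for a version is roughly the version cap multiplied by the provider’s release interval, and that window has to cover the slowest consumer’s release cycle. Here’s a minimal sketch of the arithmetic, using the numbers from the example above:

```python
import math

def min_supported_versions(provider_release_months, slowest_consumer_cycle_months):
    """Smallest version cap whose support window covers the slowest consumer's
    normal release cycle: cap * release_interval >= consumer_cycle."""
    return math.ceil(slowest_consumer_cycle_months / provider_release_months)

# Releases every 3 months, with a consumer only touched every 12 months:
# a cap of 3 yields roughly 9 months of support, so you need a cap of at least 4.
print(min_supported_versions(3, 12))  # -> 4
```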
In order to come up with a number that works, you need to look at both the release cycle of the consuming systems and the release cycle of the provider, and find a number that allows consumers to migrate to new versions as part of their normal development efforts. If you read that carefully, however, you’ll see the assumption: it assumes that a “normal release cycle” actually exists. Many enterprises I’ve seen don’t have this. Systems may be released and not touched for years. Unfortunately, there’s no good answer for this one. It may be a symptom of an organization that is still maturing in its development processes, continually putting out fires and addressing pain points, rather than reaching a point of continuous improvement. This is representative of probably the most difficult part of SOA: the culture change. My advice for organizations in this situation is to begin to migrate to a culture of change. Rather than putting an arbitrary cap on the number of service versions, put a cap on how long a system can go without having a release. Even if it’s just a collection of minor enhancements and bug fixes, you should ensure that all systems get touched on a regular basis. When the culture knows that regular refreshes are part of the standard way of doing business, funding can be allocated off the top, rather than having to be continually justified against major initiatives that will always win out. It’s like our health: are you better off having regular preventative visits to your doctor in addition to the visits when something is clearly wrong? Clearly. Treat your IT systems the same way.
The Scope of SOA Adoption
I just finished giving a webinar on the importance of SOA pilots with Alex Rosen, and I hope the attendees found it informative. One of the things I discussed in the webinar was the scope of SOA adoption. Given the recent attention to my last post, I thought I’d discuss it a bit more, since it’s one of the two dimensions of the maturity matrix. It’s also what makes the effort more than just a “search and replace” on the SEI CMMI models, as one commenter over on InfoWorld thought.
The last post introduced the levels of maturity, which are:
- Ad Hoc
- Common Goals
- Pilot
- Extend
- Standardize
- Optimize
Those levels are a pretty straightforward way of describing the maturation process of just about anything. So what’s really important is the other dimension, which defines exactly what we’re maturing.
In the case of this model, we’re discussing SOA adoption maturity. SOA adoption is not simply about purchasing technology. No one can sell you an SOA, although someone was selling “SOA in a box” on eBay in Australia back around Christmas. SOA adoption does involve new technologies: support for service development and hosting (such as orchestration engines or web service frameworks), service connectivity (such as SOA appliances or ESBs), and service management. SOA adoption also involves organizational changes. If your organization is structured around application development, which team is responsible for building a service that spans multiple groups? SOA adoption involves governance, whether it be funding models, design time policies, or run time policies. SOA adoption involves new processes designed around the consumer/provider interaction. SOA adoption involves training and communication: how do we market the services that have been created to ensure their reuse? Clearly, SOA adoption involves architecture; enterprise architecture must provide appropriate reference architectures and reviews to ensure both tactical and strategic success. SOA adoption involves operational management; services can’t be dumped into production and forgotten. We must take a proactive approach to monitoring and metric collection, and feed that information back into the machine for continuous improvement.
SOA is not easy. If it were, we’d all be done by now. Every company will have different drivers and different technology needs. An assessment of their maturity in SOA adoption should examine all of the dimensions required.
SOA Maturity Model
David Linthicum recently re-posted his view on SOA maturity levels and I wanted to offer up an alternative view, as I’ve recently been digging into this subject for MomentumSI.
Interestingly, Dave calls out a common pattern: other models he’s seen define their levels around components and not degrees of maturity. He states:
While components are important, a maturity model is much more important, considering that products will change over time…
I completely agree with this. Maturity is not about what technologies you use; it’s about using them in the right way. Comparing this to our own maturity: just because you’re old enough to drive a car doesn’t mean you’re mature. Just because you’ve purchased an ESB, built a web service, or deployed a registry doesn’t mean you’re mature.
Dave then presents his levels. I’ve cut and pasted the first sentence that describes each level here.
- Level 0 SOAs are SOAs that simply send SOAP messages from system to system.
- Level 1 SOAs are SOAs that also leverage everything in Level 0 but add the notion of a messaging/queuing system.
- Level 2 SOAs are SOAs that leverage everything in Level 1, and add the element of transformation and routing.
- Level 3 SOAs are SOAs that leverage everything in Level 2, adding a common directory service.
- Level 4 SOAs are SOAs that leverage everything in Level 3, adding the notion of brokering and managing true services.
- Level 5 SOAs are SOAs that leverage everything in Level 4, adding the notion of orchestration.
While these levels may be an accurate portrayal of how many organizations leverage technology over time, I don’t see how they are an indicator of maturity, because there’s nothing that deals with the ability of the organization to leverage these things properly. Furthermore, not all organizations may proceed through these levels in the order presented by Dave. The easiest one to call out is level 5: orchestration. Many organizations that are trying to automate processes are leveraging orchestration engines. They may not have a common directory yet, they may have no need for content based routing, and they may not have a service management platform. You could certainly argue that they should have these things in place before leveraging orchestration, but the fact is, there are many paths that lead to technology adoption, and you can’t point to any particular path and say that is the only “right” way.
The first difference between my efforts on the MomentumSI model and Dave’s levels is that my view is targeted around SOA adoption. Dave’s model is a SOA Maturity Model, and there is a difference between that and a SOA Adoption Maturity Model. That being said, I think SOA adoption is the right area to be assessing maturity. To get some ideas, I first looked to other areas, such as CMMI and COBIT. If we look at just the names of the CMMI and COBIT levels, we have the following:
| Level | CMMI | COBIT |
|-------|------|-------|
| 0 | | Non-Existent |
| 1 | Initial | Initial |
| 2 | Managed | Repeatable |
| 3 | Defined | Defined |
| 4 | Quantitatively Managed | Managed |
| 5 | Optimizing | Optimized |
So how does this apply to SOA adoption? Quite well, actually.

COBIT defines a level 0, and labels it as “non-existent.” When applied to SOA adoption, what we’re saying is that there is no enterprise commitment to SOA. There may be projects out there building services, but the entire effort is ad hoc.

At level 1, both CMMI and COBIT label it as “Initial.” Again, applied to SOA adoption, this means that the organization is in the planning stage. They are learning what SOA is and establishing goals for the enterprise. Simply put, they need to document an answer to the question “Why SOA?”

At level 2, CMMI uses “Managed” and COBIT uses “Repeatable.” At this level, I’m going to side with CMMI. Once goals have been established, you need to start the journey. The focus here is on your pilot efforts. Pilots have tight controls to ensure their success.

Level 3 is labeled as “Defined” by both CMMI and COBIT. When viewed from an SOA adoption effort, it means that the processes associated with SOA, whether it be the interactions required or choosing which technologies to use where, have been documented, and the effort is now underway to extend this to a broader audience.

Level 4 is labeled as “Quantitatively Managed” by CMMI and “Managed” by COBIT. If you dig into the description of both of these, what you’ll find is that level 4 is where the desired behavior is innate. You don’t need to handhold everyone to get things to come out the way you like. Standards and processes have been put in place, and people adhere to them.

Level 5, as labeled by CMMI and COBIT, is all about optimization. The truly mature organizations don’t set the processes, put them in place, and then go on to something else. They recognize that things change over time, and are constantly monitoring, managing, and improving.

So, in summary, the maturity levels I see for SOA Adoption are:
- Ad hoc: People are doing whatever they want, no enterprise commitment.
- Common goals: Commitment has been established, goals have been set.
- Pilot: Initial efforts are underway with tight controls to ensure success.
- Extend: Broaden the efforts to the enterprise. As the effort expands beyond the tightly controlled pilots, methodology and governance become even more critical.
- Standardize: Processes are innate, the organization can now be considered a service-oriented enterprise.
- Optimize: Continued improvement of all aspects of SOA.
You’ll note that there’s no mention of technologies anywhere in there. That’s because technology is just one aspect of SOA adoption. Other aspects include your organization, governance, operational management, communications, training, and enterprise architecture. SOA adoption is a multi-dimensional effort, and it’s important to recognize that from the beginning. I find that a maturity model is a great way of assessing where an organization is, as well as providing a framework for measuring continued growth. That being said, your ability to assess it is only as good as your model.