Finding Value in BPM/Workflow Technology

Some recent conversations about workflow and orchestration technologies got me thinking about where to look for value when applying them, whether they come as part of a BPM suite or as one of the multitude of other tools out there that claim orchestration/automation/workflow/work management capabilities.

The one common term that always comes up is process. All of these tools wind up requiring some sort of process definition. There is one big factor, however, that has a significant impact on where you should look for value: whether those processes involve manual (i.e., performed by a person) activities or not.

Let’s handle the simpler of the two cases first: the one where there are no manual activities whatsoever. In this case, what we’re really talking about is process automation. If there are no manual steps, then there is no reason the entire process can’t be fully automated. If we fully automate a process, what are the factors in the value equation? Clearly, if the process isn’t fully automated today, there is a one-time benefit in efficiency. The execution time should move from a variable, potentially unpredictable value to a consistent, predictable one. This is the case regardless of what tools we use to automate it. Theoretically, I could automate the process with scripts or a programming language and achieve the same value. If you agree with me, then the real value contribution of BPM/workflow technologies lies not in the run-time space, but in the development-time space. By reducing inefficiencies in the communication between analysts and developers through a common language (a process model), or by improving developer productivity through the drag-and-drop visual environments of most tools, value can be obtained through time-to-delivery. Beyond this, there is probably not as much value to be had from the “management” portion of the BPM suite. Even if the process is subject to frequent change, the area of interest is the time to deliver the change, not optimization of the process itself, since by fully automating the process, we should assume it’s also fully optimized.
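
To make that point concrete, here’s a minimal sketch of a fully automated process written as a plain Python script. The process, its steps, and the systems behind them are all invented for illustration; the point is simply that when there are no manual activities, the run-time outcome is the same whether the flow lives in a BPM engine or in ordinary code.

    # Hypothetical, fully automated order process expressed as a plain script.
    # Every step and system here is made up for illustration.

    def validate_order(order):
        # e.g., check that the required fields are present
        return all(order.get(field) for field in ("id", "customer", "items"))

    def reserve_inventory(order):
        # e.g., call the inventory system's API
        return {"order_id": order["id"], "reserved": True}

    def send_confirmation(order):
        # e.g., hand off to the messaging system
        print(f"Order {order['id']} confirmed for {order['customer']}")

    def process_order(order):
        # The whole "process model," as straight-line code.
        if not validate_order(order):
            raise ValueError("invalid order")
        reserve_inventory(order)
        send_confirmation(order)

    process_order({"id": 42, "customer": "ACME", "items": ["widget"]})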

If we throw manual tasks into the equation, then we have a different story. While the development-time efficiencies certainly still apply, there’s now significant value that can be obtained through process analysis and optimization. I need to know how long those manual tasks take, why Judy accomplishes more tasks than John, what chaos ensues when Fred calls in sick, what the impact of task assignments and escalations is, etc. This information can be obtained by managing the processes, through instrumentation, analytics, and reporting. By doing so, we can get into a cycle of continuous improvement, and strive to optimize the manual efforts that can’t be automated.
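
As a rough sketch of what that instrumentation makes possible, imagine computing task durations per assignee from the workflow engine’s history. The event records below are invented; in a real BPM suite they would come from the engine’s own audit log or analytics store.

    # Hypothetical task history records; a real BPM suite would supply these.
    from collections import defaultdict
    from datetime import datetime

    task_events = [
        {"task": "Review claim", "assignee": "Judy", "started": "2008-12-01T09:00", "completed": "2008-12-01T09:40"},
        {"task": "Review claim", "assignee": "John", "started": "2008-12-01T09:00", "completed": "2008-12-01T11:15"},
        {"task": "Review claim", "assignee": "Judy", "started": "2008-12-01T10:00", "completed": "2008-12-01T10:35"},
    ]

    # Average handling time per assignee, in minutes.
    durations = defaultdict(list)
    for event in task_events:
        started = datetime.fromisoformat(event["started"])
        completed = datetime.fromisoformat(event["completed"])
        durations[event["assignee"]].append((completed - started).total_seconds() / 60)

    for assignee, minutes in durations.items():
        print(f"{assignee}: {len(minutes)} tasks, average {sum(minutes) / len(minutes):.0f} minutes each")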

Now the reason I bring this up is that there is no shortage of tools that claim to have workflow/business process capabilities. If you also have a BPM suite, you’re now faced with the question of which workflow tool to use. What you need to think deeply about is where you’re going to get your value. Products with workflow capabilities may have advantages in development-time value because they come pre-populated with actions/tasks appropriate to the context of that tool, while a generalized BPM platform may not. The flip side, however, is that those same tools with workflow capabilities may only provide a piece of the BPM suite, namely business process development. If what you really need is business process management, with the ability to monitor, analyze, and optimize the manual parts of your processes, then you may need to sacrifice some development-time efficiencies to get the more important run-time value.

Finally, keep in mind that not all work can be defined by a process. As Keith Harrison-Broninski discusses in his book, Human Interactions: The Heart and Soul of Business Process Management: How People Really Work and How They Can Be Helped to Work Better, there will always be ad hoc work. You’ll still need to consider how to best utilize technology to support those ad hoc activities, rather than trying to define a rigid process for work that doesn’t follow one.

Book Review: Building Websites with Joomla! 1.5

Packt Publishing sent me a complimentary copy of “Building Websites with Joomla! 1.5” by Hagen Graf to review. I was specifically interested in this book because I was researching the use of Joomla! as part of redoing an elementary school website.

First off, the book was well organized. It begins with an introduction to web content management systems and the role they play in web sites today, covers a brief history of Joomla!, and then devotes the rest of the book to installation and configuration options. As a first-time Joomla! user, I can definitely say that the book helped me out quite a bit in understanding how things were configured. Chapters 4 through 12 cover different areas of Joomla!, including menus, extensions, components, and content, in enough detail to serve as a useful reference while still letting a first-time user get through the configuration. As a reference, this is the book’s strong point.

Where I was a bit disappointed was that I was looking for more of a tutorial than a pure reference book. Given that I was setting up a new site using Joomla! for the first time, I was hoping for an extended step-by-step approach that would take me through a complete site development. While there is a chapter in the book called “A Website with Joomla!,” it was only 12 pages long and was presented more in the context of a demonstration than a tutorial.

Overall, I was pleased with the book. I think it will be a useful reference as I develop the site, but I also think I’ve still got more work ahead of me to get the site out the door.

More on review boards…

In response to my post on the “Effective Governance” talk given at the Gartner EA Summit, Ron Rosenhead said:

For me there are a couple of overlapping issues:
Do project boards actually know what they are established for? Plus, how well trained are members of project boards? I have to say that my experience here in the UK is that Boards are established sometimes with (overly) large numbers, give little guidance and are not well trained in understanding what they are to do and in project management. They usually receive the thumbs down from project managers who say they add little or no value.
Yes, they should set the parameters of decision making and enable others to make decisions. If I was to ask everyone who came through courses we ran in 2008 very few would say that this had actually happened.

His first question is really a great point. All too often, these boards are created without sufficient direction to be effective. If I were on one of these boards, even though it might be boring, I’d really want to be able to rubber-stamp as many of the projects as possible. That can only happen if the board sets expectations in advance so the project teams know what they’re in for. If the project team is forced to guess what the board will want, it’s far more likely that they’ll guess incorrectly. At the same time, once the expectations are set, it’s also important for the review board to move through the review as quickly as possible. If the team has done their homework and provided the necessary information, don’t waste their time by walking through the answers for an hour when you know full well that they’ve complied with the policies. This is why I like having explicit policies and think that the use of self-assessments via scorecards can be a very powerful tool.
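
To illustrate the scorecard idea, here’s a minimal sketch of a self-assessment in code. The policies and weights are entirely made up; the point is that the criteria are published ahead of time, the project team scores itself before the review, and the board can then focus only on the gaps.

    # Hypothetical published policies with weights; a real list would come from
    # the governance team's standards.
    policies = [
        {"policy": "Uses the approved authentication service", "weight": 3},
        {"policy": "Service interfaces follow naming standards", "weight": 2},
        {"policy": "Capacity estimates provided to operations", "weight": 1},
    ]

    # The project team fills this in themselves ahead of the review.
    self_assessment = {
        "Uses the approved authentication service": True,
        "Service interfaces follow naming standards": True,
        "Capacity estimates provided to operations": False,
    }

    earned = sum(p["weight"] for p in policies if self_assessment.get(p["policy"]))
    possible = sum(p["weight"] for p in policies)
    gaps = [p["policy"] for p in policies if not self_assessment.get(p["policy"])]

    print(f"Compliance score: {earned}/{possible}")
    print("Discuss at review:", gaps or "nothing; rubber-stamp it")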

When is Redundancy Okay?

A common theme that comes up in architecture discussions is the elimination of redundancy. Simply stated, it’s about finding systems that are doing the same thing and getting rid of all of them except one. While it’s easily argued that there are cost savings just waiting to be realized, does this mean that organizations should always strive to eliminate all redundancy from their technology architectures? I think such a principle is too restrictive. If you agree, then what should the principle be?

The principle that I have used is that if I’m going to have two or more solutions that appear to provide the same set of capabilities, then I must have clear and unambiguous policies on when to use each of those solutions. Those policies should be objective, not subjective. So, a policy that says “Use Windows Server and .NET if your developer’s preferred language is C#, and use the Java stack if your developer’s preferred language is Java” doesn’t cut it. A policy that says, “Use C# for the presentation layer of desktop (non-browser) applications, and use Java for server-hosted business-tier services” is fine. The development of these policies is seldom cut and dried, however. Two factors that must be considered are the operational model/organizational structure and the development-time values/costs involved.
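
As a small sketch of what an objective policy looks like when reduced to a decision rule, consider the following. The platform names and criteria are illustrative only, not a recommendation; what matters is that the inputs are facts about the solution (which tier, browser-based or not), not developer preference.

    # A hypothetical decision rule capturing an objective technology policy.
    def choose_platform(tier, is_browser_based):
        if tier == "presentation" and not is_browser_based:
            return ".NET / C# (desktop presentation layer)"
        if tier == "business":
            return "Java (server-hosted business-tier services)"
        return "escalate: no policy covers this case"

    print(choose_platform("presentation", is_browser_based=False))
    print(choose_platform("business", is_browser_based=True))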

On the operational model/organizational structure side of things, there may be a temptation to align technology choices with the organizational structure. While this may work for development, the engineering and operations teams are frequently centralized, supporting all of the different development organizations. If each development group is free to choose its own technology, this adds cost to the engineering and operations teams, as they need expertise in all of the platforms involved. If the engineering and operations functions are not centralized, then basing technology decisions on the org chart may not be as problematic. If you do this, however, keep in mind that organizations change. An internal reorganization or a broader merger/acquisition could completely change the foundation on which the policies were defined.

On the development side of things, the common examples where this comes into play are environments that involve Microsoft or SAP. Both of these vendors’ offerings, while certainly capable of operating in a heterogeneous environment, provide significant value when you stay within their environments. In the consumer space, Apple fits into this category as well. The model works best when it’s all Apple/Microsoft/SAP from top to bottom. There are certainly other examples; these are just the ones that people associate with this most strongly.

Using SAP as an example, they provide both middleware (NetWeaver) and applications that leverage that middleware. Is it possible to have SAP applications run on non-SAP middleware? Certainly. Is there significant value-add if you use SAP’s middleware? Very likely, yes. If your entire infrastructure is SAP, there are no decisions to be made. If not, you now have to decide whether you want both SAP middleware and your other middleware, or not. Likewise, if you’ve gone through a merger and have both Microsoft middleware and Java middleware, you’re faced with the same decision. The SAP scenario is a bit more complicated because of the applications piece. If we were only talking about custom development, the more likely choice is to go all Java, all C#, or all -insert your language of choice-, along with the appropriate middleware. Any argument about the value-add of one over the other is effectively a wash. When we’re dealing with out-of-the-box applications, it’s a different scenario. Deploying an SAP application that automatically leverages SAP middleware needs to be compared against deploying the SAP application and then manually configuring the non-SAP middleware. In effect, I create additional work by not using the SAP middleware, which chips away at the cost reductions I may have gained by consolidating on a single source of middleware.

So, the gist of this post is that a broad principle that says, “Eliminate all redundancy” may not be well thought out. Rather, strive to reduce redundancy where it makes sense, and where it doesn’t, make sure that you have clear and unambiguous policies that tell project teams how to choose among the options. Make sure you consider all use cases, such as where the solution may span domains. Your policies may say “use X if in domain X, use Y if in domain Y,” but you also need to give direction on how to use X and Y when the solution requires communication across domains X and Y. If you don’t, projects will either choose what they want (subjective, bad), or come back to you for direction anyway.

Another Review of SOA Governance

Another review of my book has been posted here at the Exforsys, Inc. (Execution for System) site. I’m not familiar with Exforsys, but they seem to be an aggregator/news provider of IT training resources and news. The author gave the book a very thorough review, so if you’re on the fence about whether or not my book is a good resource for your SOA governance efforts, this review may aid you in your decision-making process.

Gartner EA Summit: Managing the Migration to Your Future State Architecture

Presenter: Scott Bittler, Gartner

Another presentation from Scott, this time over breakfast. The bulk of this talk focused on the importance of what he termed the “Next State Architecture.” If we have the future-state and current-state architectures documented, the challenge arises when we can’t achieve the future-state architecture in one step. If that’s the case, then there’s a gap in the prescriptive guidance needed for project teams. If they know they can’t get to the future state, and they don’t have guidance on how to move from the current state, they’re likely to stick with what they know. Good advice.

There were some specific nuggets outside of this core topic that I also wanted to call out. First, he said that the most important EA deliverable is principles, because it’s those principles that lead to consistent decision making. The talk wasn’t focused on this, so he didn’t go into depth, but some examples of these principles would have been good. I definitely see the importance of these and agree with his statement. I’ve been in many situations with two (or more) compelling options where we seemed to be at a stalemate. The principles need to assist in getting decisions made.

Second, I liked the fact that he said that EA’s role is to provide prescriptive guidance so that appropriate choices are made on projects and programs. This emphasizes the point that I was hoping would be made in his governance talk yesterday. Provide the policies, and anyone can make the right decisions.

Finally, the last comment he made was that with the advent of EA-focused web sites and the like, claiming ignorance when confronted with non-compliance (“I didn’t know I was supposed to do that”) is unacceptable these days. Here, I disagree. I make extensive use of RSS feeds in my work so that I get information pushed to me, but I know many of my colleagues do not. A web site is still a pull model, and there are very few people that I know of who have the discipline to regularly check common web sites. EA has to be accountable for the communication effort and for ensuring that information gets pushed out to the people who need it. Putting it on a web site isn’t enough. If EA is serious about achieving compliance, then it should be serious about pushing the information out. Create a formal communication plan and execute it.

Gartner EA Summit: Effective Governance, Best Practices

Presenter: Scott Bittler, Gartner

This presentation got me on my soapbox. The talk took the traditional approach to governance, framing it around decision rights. In my opinion, this is too narrow a scope, and it leads to the typical review board approach that contributes to the negative connotation most project staff have around governance. A focus on decision rights always jumps to some kind of exception process, or better stated, a situation where a project is not compliant with the architecture. The problem I have with this view is that it assumes the project was knowingly out of compliance. More often than not, I don’t think that’s the case. I think the project team isn’t aware of what “in compliance” means, makes decisions based upon their own knowledge and context, and only when (and if) someone else with a different context comes into the picture does a discussion around “decision rights” even enter the mix. When that happens, it’s usually too late in the project, and the schedule wins. What’s the real source of the problem here? It’s not a problem with decision rights; it’s a problem with not giving the people making the decisions the knowledge they need to make them correctly. What’s even worse is that when non-compliance goes undetected, the response is to insert a checkpoint/review instead of actually getting the project team to make the right decision to begin with.

Put another way, how fast would you drive on a road that has no speed limit signs posted? If everyone were speeding, would the right answer be to put police officers out there every mile, or to post speed limit signs? Unfortunately, with projects, we’re doing the former. We station more police in the form of big review boards, but we still haven’t bothered to give the teams the information they need to do things right.

Instead of focusing on who has authority, focus on the policies that state what the “right” thing to do is, and enable the people that must make the decisions to make them properly, rather than taking away their ability to decide by requiring them to guess which decisions must be escalated up the ladder to the person who is deemed the authority. The person who is the authority shouldn’t be making the decisions; they should be making policies and enabling the decision makers.

The session is now over, and I want to point out that his recommendations slide did have a bullet point encouraging lots of proactive communication on the architecture. I also want to add that there was good content in the presentation, especially the brief discussion on federated governance across business units near the end; I just wish he had emphasized his recommendation on proactive communication (and introduced the concept of policy as part of it) in the earlier slides instead of focusing so much on review boards and waivers. I caught him after the presentation and told him about my book; hopefully we can continue the conversation. He’s not on Gartner’s blogroll yet, so I’ll have to hope for some offline communication on the topic.

Gartner EA Summit: Case Study from Health Care Service Corporation

In this session, Bernadette Rasmussen, Chief Enterprise Architect at Health Care Service Corporation, gave a case study discussing their efforts to establish a future-state architecture. The highlight of this session for me was the fact that one deliverable of their future-state architecture was a formal communication plan, along with the actual communication activities articulated in that plan. This included large presentations for lots of people, DVDs containing an overview, development of online training, formal communication to senior IT leadership (who in turn had them communicate it to senior leadership outside of IT), and more. I’ve had the opportunity to work on one enterprise-level effort with someone who was passionate about communication and had us develop a similar plan, and I think it was a huge contributor to the success of the effort. Developing the artifacts is one thing, but if people don’t know they exist, they won’t get used.

Governance and Iterative Development

Chuck Allen, in this blog entry posted after he read my book, felt that the book was missing a discussion of the role of iteration and test-driven development in building a canonical model. He felt that my description of the role of a canonical model read like a waterfall methodology. I posted a comment on his blog, but it hasn’t shown up there, so I thought I’d post a response here.

There are two things that came to my mind as a result of Chuck’s post. First, Chuck’s viewpoint is consistent with a lot of people’s thinking that governance is some big, heavyweight process that has more in common with BUD (big up-front design) practices. When applied to agile methodologies and iterative development, they feel it won’t work. That is not my view, however. My view is that governance is a requirement regardless of your methodology. If your project teams feel it’s getting in the way, it’s not that you need to get rid of governance; it’s that you need to change your approach. Where teams get frustrated is when they’re forced to go before some review board or reviewer who starts asking them, “Did you do this? Did you do that?” and the answer is always, “No, I didn’t know I needed to do that.” Therein lies the rub. The team didn’t know about the policies that existed. If the policies aren’t documented, how can we expect projects to be compliant? If the policies are documented, then there should be no reason why a technical lead or project architect can’t bring them up as appropriate within an iterative approach, or as part of some up-front design, if that’s your preferred approach.

The second thing that came to mind is more about developing those policies and that reference material. If we’re adopting SOA at an enterprise level, then there will need to be policies that define what “enterprise” success means. My book calls out what those reference materials are, because those are what’s important to good governance. The book did not, however, go into depth on how some of those artifacts get created. It doesn’t describe how to develop a canonical model or a business capability map; rather, it describes how those artifacts should be used to achieve SOA success. That is the governance question. Developing a business capability map is a business analysis and architecture question. Developing a canonical model is an information architecture question. There are books out there that can teach you how to do that. To Chuck’s point, however, when these artifacts are intended to define something at an “enterprise” level, there is significant risk that they never get created because we go into analysis paralysis. I did call this out in my book, as Chuck pointed out, but I think he offers some good advice: it may make sense to apply iterative approaches not only to your software development efforts, but also to your efforts to produce policies and reference material. That’s embodied in my four processes of governance, where the last process is one of continuous improvement. Establish some policies, communicate and educate, enforce them, measure the impact, and then adjust as needed.

Gartner EA Summit: Cracking the Code of Business Architecture

Presenter: Al Newman, Director of Architecture Services at Allstate Insurance Company

Al discussed Allstate’s journey toward establishing a business architecture practice.

He walked us through an eight step process:

  1. Define business architecture
  2. Secure executive sponsorship
  3. Develop a framework
  4. Secure an initial engagement
  5. Build an engagement team
  6. Create a competency center
  7. Build out infrastructure
  8. Formalize the operating model

There were two highlights that I wanted to call out. First, he emphasized the need for an engagement model. I’ve seen too many teams, whether formally on the org chart or not, that have no idea how either their members or the artifacts they create will be utilized within project efforts. In the IT organizations I’ve seen, the work gets done in projects, period. Architecture teams that don’t have people formally allocated to projects need to figure out how their artifacts and/or staff will be utilized in those projects.

Second, he emphasized the need for business architecture in making solid project decisions. I couldn’t agree more, and I have a chapter discussing this in an SOA context in my book. In the context of SOA, one question that gets asked is, “How do I build the right services?” Asking this question after a project has been initiated is already problematic, because the project has established scope boundaries, and changing those requires more effort than it would have taken if those discussions had happened during the project definition process.

Gartner EA Summit in Vegas

I will be at the Gartner EA Summit in Vegas on Thursday the 11th and Friday the 12th, including being a panelist on EA and SOA on Friday morning. Introduce yourself to me there and perhaps you will get a discount code for my book. Offer open to attendees only.

Personal Brands and Corporate Blogging

Jeremiah Owyang, Senior Analyst at Forrester Research on Social Computing, has a very interesting post titled, “How Companies Respond to the Risks of Personal Brands.” As a corporate practitioner with a public blog and a decent enough following to claim a “personal brand,” I thought I’d share my thoughts on this topic.

Jeremiah stated that his personal brand helped him get his current job, and that was the case for me as well. When I originally started blogging, I was a corporate practitioner; however, I did my best to keep those worlds separate. The blog did help me enter the world of consulting, where I knew it wouldn’t be an issue since the company’s CEO blogged, but when I went back to the corporate world, it was very interesting having this very public view of my thoughts out there. Like Jeremiah, I think my blog played a key role in getting my current position. I also hoped that I would be able to continue my public blogging, and I made sure I discussed this during the interview process.

Jeremiah went on to call out the potential risks to a company, however. He listed three risks:

Risk 1: The personal brand is a cost to the company: Why let employees build their own brand on the dime of the company or leveraging the brand of the employer?
Risk 2: The now popular employee is likely to get poached: Perhaps a common concern I hear is that competitors can easily identify the stars, and hire away these folks along with their market reputation and google juice.
Risk 3: Employee exits leaving a chasm to fill: In the modern workforce, we hear less of lifetime employees seeking pension than we do of job migrants, or career gypsies that move from company to company every few years. As a result, after they’ve built up trust with the market using social tools, they leave the company, and a gap is left that the brand can’t fill.

In my case, I drew some lines in the sand to make sure that risk #1 would not be an issue. I don’t blog from work or even using my work laptop if I have it at home or on the road. I will record ideas for blogs on my iPhone when I run across something in my RSS reader, but I follow those RSS feeds for work purposes. As for getting “poached,” it may be true that someone with a very public persona may get more calls from headhunters. Perhaps I don’t work my network well, but I get a lot more cold calls from headhunters due to talking at Gartner than I do from my blog. Personally, I think blogging may incrementally add a few more cold calls, but LinkedIn has made candidates so readily available, I don’t see this as a big deal. The real mitigator for this risk, however, is the fact that I’m very happy with my current position, and the culture of the company fits both what I want to do as my day job, as well as allowing me to maintain my personal brand. To me, that’s the best scenario. It’s a win for me, and it’s a win for the company. I believe that my blogging helps attract great candidates to my employer. The only reason this works is because there’s a mutual understanding and respect, and a cultural match. It would be a mismatch if a company hired me based on my blog, but then wanted me to stop all outward communication. There may be a time where public blogging isn’t that important to me, but for now it is, and finding a company that is supportive of it is the way to go.

I’ve always tried to be conservative with what I discuss. Even if you avoid mentioning your company’s name on your public blog and post disclaimers like I do, it’s still very easy to find out who your employer is. I completely understand that I am a representative of my company, regardless of whether its name appears on this blog, and that people will associate me with it. If I screw up here, that can impact my employer. If I do well here, that impacts them too. Discuss this with your employer and establish some ground rules. For example, I avoid talking about vendors except in very general terms, since positive or negative comments can damage vendor relationships. In general, my view has always been to limit my topics to areas that I would be comfortable sharing at a local user group or at a conference, avoiding anything that even comes close to proprietary or company-confidential information. If you’re working for a consulting firm or a vendor, you may be a bit more free to blog, but keep in mind that you have to preserve the confidentiality of your clients. Even if you make things anonymous, someone can perceive that public posting as something similar to secretly recording a phone call and then bleeping out people’s names. If someone did that to me, I wouldn’t be happy.

So, my advice is to find a company that matches your needs but also one that you can contribute to their success. Think about how important your personal brand is to you, but also think about the importance of job security and stability. Find the right balance, get a win-win situation, and don’t assume anything. Be upfront about your desires, what you’ll do to preserve your integrity and the company’s, and establish some ground rules with your supervisor. If you do this, that personal brand can be a win for you, a win for your employer, and lead to a long and successful career.

SOA Consortium Podcast on SOA Governance

I’m pleased to announce that my “soapbox derby” presentation from the September meeting of the SOA Consortium is now available in podcast form from the consortium’s website (basic registration required to download). I thought the “derby” format worked very well, with all of the derby participants given 15 minutes to present, followed by a discussion from the meeting attendees. It kept things brief and to the point on a narrow, but important area. I had just completed my book, so naturally I talked about governance. Give it a listen, as well as the other excellent presentations from Victor Harrison of CSC, Mike Kavis of Kavis Technology Consulting, and Britta Schatz of Penn National Insurance.

Houston, We Have a Governance Problem

Joe McKendrick recently reported, in both his eBizQ and ZDNet blogs, on a quote of mine from a podcast I recorded with Dana Gardner. In the discussion, Dana asked me a very good question about the telltale signs that an organization is missing the governance boat. Now that I’ve had some time to think a bit more deeply about this, I’d like to expand upon my original answer.

In the podcast, I suggested that one telltale sign is that you are in meeting after meeting with people disagreeing over priorities. I still stand by this, although I’ll say it’s probably more likely that they’re saying, “This is what I want” rather than “This is what my management told me is my priority.” When governance is really bad, people don’t know what the priorities are, and as a result, they fall back to their own interests, which may or may not be the best interests of the company as a whole, if anyone even knows what that is.

Another telltale sign of a governance problem is when people outside of project efforts have nothing but criticism for the people working on projects, and the people working on the projects have nothing but criticism for the people whose jobs lie outside of project efforts. It could be the workers versus the managers or the developers versus the architects. Whoever the two (or more) parties are, it’s clear that they’re not working effectively, and one possible source of that is ineffective governance.

How about consistency? How many times have you heard someone say, “We need to be more consistent in how we do this”? Once again, this can be a governance problem. Has someone defined what consistency is? What are the policies that people should be following? It’s easy to point out that something is done differently every time, but it’s difficult to articulate one way that it should be done, and to then get the people to actually do it that way. Keep in mind, however, that not everything should be consistent. There are some things that should change every time, and some things that shouldn’t. If we attempt to apply consistency and standardization in the wrong areas or across the wrong domains, we could wind up making things even worse.

Finally, the biggest sign has to be a general feeling of being on a sinking ship. While this can be due to far more than governance, if your efforts are consistently viewed as not good enough, and everyone knows it, then it’s very likely that there’s a governance problem.

There have to be many more signs that people can add to this list. Please add a comment or trackback with your telltale signs of ineffective governance. And, if these signs are hitting a little too close to home, I can recommend a good book.

What are the Services?

I recently completed a certification in ITIL v3 Foundations. On the plus side, I found that the ITIL framework provided some great structure around the concept of service management that is very applicable to SOA. There was one key question, however, that I felt was left unanswered. What are the services?

My assumption going in was that ITIL was very much about running IT operations within an enterprise, so I expected to see some sort of a service domain model associated with the “business of IT.” That’s not the case, at least not in the material I was given. There are a number of roles defined that are clearly IT specific, but overall, I’d say that many of the processes and functions presented were not specific to IT at all. As an example, ITIL Foundations won’t tell you whether server provisioning or application deployment should be services in your catalog or not. Without this, an effort to adopt ITIL can struggle in the same way an SOA adoption effort can. I’ve seen first hand an organization thrash around trying to determine what the right operational and engineering services were. ITIL does offer the right guidance for helping you define them, in that it begins with understanding your customer.
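
To make the gap concrete, here’s a sketch of what two hypothetical service catalog entries might look like. The services, customers, and targets below are invented; ITIL describes how to define and manage entries like these, but deciding whether “server provisioning” or “application deployment” belong in your catalog is the part you have to work out with your customers.

    # Hypothetical IT service catalog entries, for illustration only.
    service_catalog = [
        {
            "service": "Server provisioning",
            "customer": "Application development teams",
            "request_channel": "Self-service portal",
            "target_fulfillment": "5 business days",
        },
        {
            "service": "Application deployment",
            "customer": "Project teams releasing to production",
            "request_channel": "Change request",
            "target_fulfillment": "Next scheduled release window",
        },
    ]

    for entry in service_catalog:
        print(f"{entry['service']} -> {entry['customer']} ({entry['target_fulfillment']})")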

This is the same question where many SOA initiatives struggle. We can have lots of conceptual talk about how to build services the right way, but actually defining the services that should be built is a challenge. In both ITIL and SOA adoption, there is a penalty for defining too many services. It’s probably more pronounced with ITIL, because managing and using those services tends to involve more manual effort, and therefore more cost, than managing and using a web service, although if you’re doing business-driven SOA, the costs may be very similar.

Overall, I definitely felt there is a lot of value in the ITIL v3 framework, and I think if you are leading an SOA adoption effort, it’s worth learning about, as it will help your efforts. If you’re looking to improve IT operations, it will likewise help. Just know that you’ll still need to figure out what your services are on your own, and that can have a big impact on the success of your adoption efforts.

Disclaimer
This blog represents my own personal views, and not those of my employer or any third party. Any use of the material in articles, whitepapers, blogs, etc. must be attributed to me alone without any reference to my employer. Use of my employer’s name is NOT authorized.