So much workflow, so little time
If your organization has begun to leverage the workflow technologies built into the typical BPM suite, you’ve probably run into one of the key challenges: when to use the BPM suite’s workflow engine versus the workflow capabilities of any of the other products in your enterprise. There’s no shortage of products with built-in workflow capabilities. Think about it. Most document management products have workflow capabilities (albeit typically associated with document approval). Your portal may have workflow capabilities. Your SDLC tools may include workflow for bug tracking. Your operations group may leverage a ticketing/work management system with workflow capabilities. So what’s the enterprise to do?
One of the things that I’ve advised in the past is to consider whether the workflow involved requires any customization. Take the SDLC tooling as an example. While there’s typically some ability to change some of the data that flows through the process, the workflow itself probably doesn’t vary much from organization to organization. As a different example, imagine the workflow that supports the procurement and installation of new servers. While most organizations probably support this through a service request system, the odds are that the sequencing of tasks will vary widely from organization to organization. So, the need for customization is one factor. Unfortunately, it’s not the only one. You also have to consider the data element. Any workflow is going to have some amount of contextual information that’s carried throughout. So, while there may be a big need for process customization, that may be offset by the contextual information and schemas provided as part of a tailored third-party product.
All in all, the decision on when to use a general purpose workflow engine and when to use a tailored product is no different than the decision an organization makes on when to build solutions in house versus when to buy packaged solutions. Look at the degree of customization you need in the workflow versus the pre-defined processes. Look at the work that will be involved in setting up your own custom data stores and schemas versus the pre-defined databases. Ultimately, I think all large organizations will have a mixture of both. Unfortunately, that means some short term pain. Workflow systems typically come with their own task management. Multiple tools means multiple task managers. Multiple task managers means multiple places to do your work. This isn’t exactly efficient, but, until we have standard ways to publish tasks to a universal task list and other standards associated with the use of workflow engines, we should strive to make good decisions on a process-by-process basis.
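Until a universal task list standard exists, one interim approach is a thin aggregator that pulls work items out of each system into a single view. Here's a minimal sketch; the fetcher functions and task fields are hypothetical stand-ins, not any product's API:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical, minimal model of a task drawn from any workflow system.
@dataclass
class Task:
    source: str   # which workflow system the task came from
    title: str
    due: date

def merged_task_list(fetchers):
    """Pull tasks from each system-specific fetcher and return one
    list ordered by due date, so workers have a single place to look."""
    tasks = []
    for fetch in fetchers:
        tasks.extend(fetch())
    return sorted(tasks, key=lambda t: t.due)

# Example fetchers standing in for a BPM engine and a ticketing system.
def bpm_tasks():
    return [Task("bpm", "Approve purchase order", date(2008, 2, 10))]

def ticket_tasks():
    return [Task("tickets", "Provision new server", date(2008, 2, 8))]

print([t.title for t in merged_task_list([bpm_tasks, ticket_tasks])])
# The earlier-due ticketing task sorts first.
```

The hard part in practice isn't the merge, it's writing a fetcher per product, which is exactly the per-process decision the paragraph above argues for.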
CapeClear and Workday
I saw the announcements yesterday that Workday had acquired CapeClear. This is an interesting acquisition. At first glance, I thought this was just normal evolution in the ERP space, except that Workday just happens to be a hosted solution. After all, Oracle and SAP both have integration products that enable integration with their backend applications, so why not Workday? The part of this that is a bit distressing, however, is that Workday is a new entrant into this space. A big reason that these integration products exist is that every application had its own proprietary integration approach. As the need to integrate became more and more important, the desire to standardize the integration approaches increased, leading to technologies like Web Services. Workday, as a relatively new player in the space, should have been able to go with standardized integration approaches from the start, significantly reducing the need for integration middleware. There will always be a need for something in the middle, but it should start to look more like network devices than complex middleware. If this theory is true, then why would Workday need to acquire an integration middleware company like CapeClear? Perhaps the only reasoning is that CapeClear had always tried to be more than just a mediation fabric. After all, I had railed on CapeClear in the past when David Clarke said, “We consider ESB the principal container for business logic. This is the next generation application server.” It’s likely that Workday actually used them in this fashion, rather than as the lightweight mediation fabric that I prefer. If that’s the case, then it’s entirely possible that Workday was in a position where it would be too expensive to migrate to a traditional Java application server, while CapeClear was struggling as one of the few SOA startups left, with rough economic times ahead. The best course for both parties, therefore, may well have been exactly the actions that have taken place.
It remains to be seen how well Workday can support the existing CapeClear customers who are using it as a mediation fabric, since Workday’s bread and butter is the ERP system, not middleware.
Why do we make things so difficult?
I’ve recently been trying to help out with an issue which has required me to roll up my sleeves a bit, and unfortunately it has brought back all too familiar memories. Our computer systems (and often the documentation that goes along with them) simply make things way too difficult. In my opinion, the thing that I’m trying to do should be a relatively straightforward function, but as I’ve dug into it, I just seem to run into an endless set of configuration parameters that need to be specified. Now, while I’m pretty far removed from my days as a developer, I still consider myself tech savvy and a very quick learner. I can’t help but think what the average Joe must go through to try to make these things work. It’s almost as if these systems were written to ensure a marketplace for systems integrators and other consultants.
This all comes back to usability and human-computer interaction, areas that have always been a passion of mine. If your products aren’t usable, they’re simply going to create frustration and distrust. If your documentation or support channels are equally poor, the situation will only get worse. What’s even worse is when there isn’t a workaround, and the only option left to a user is to find an expert who can help. As a user, I don’t like to be painted into a corner where I have no options. We need to keep these things in mind when we build our own systems for our business partners. If we design systems that aren’t usable and require that the business come find someone in IT every time they try to perform certain operations, that’s a recipe for disaster. If you don’t have a usability team in your organization, I strongly urge you to find some experts and start building one.
Tools Support Governance, Not Define It
I was surprised at David Linthicum’s latest blog entry. Normally, he’s pretty good about emphasizing that you can’t buy an SOA, but in his “Defining SOA Governance” post, a lot of the conversation was very tool-centric. He stated the following:
Key components of design time SOA governance include:
- A registry and/or repository for the tracking of service design, management, policy, security, and testing artifacts.
- Design tools, including service modeling, dependency tracking, policy creation and management, and other tools that assist in the design of services.
- Deployment tools, including service deployment, typically through binding with external development environments.
- Links to testing tools and services, providing the developer/designer the ability to create a test plan and testing scenarios, and then leverage service testing technology.
On the runtime governance side of things, he did preface the tool capability discussion with this statement: “Thus, runtime governance is the process of enforcing and implementing those policies at service run time.”
I’ve said it before, and I’ll say it again. Governance is about people, policies, and process. Tooling really only comes into play when you start looking at the process portion of the equation. I don’t want to dismiss tooling, because it absolutely is an important part of the governance equation, but if you don’t have the people or the policies, tools won’t help.
The other thing that I want to call out is how “SOA Management” has morphed into “Runtime SOA Governance.” Back in March of 2006, I was on a panel at the InfoWorld SOA Executive Forum and teased the panelist from AmberPoint about hijacking the term, but I’ve relented a bit simply because these terms all have preconceived notions about them, and the use of the term governance probably gets people thinking in the right direction. For many people, SOA Management may imply the kind of passive monitoring that people associate with traditional systems management (not that it should be passive, but that’s the perception many have). Runtime SOA Governance, however, is rooted in the notion of the active enforcement of policies associated with the consumer-provider interaction. If a change in marketing terms helps people understand it, I’m all for it.
But back to the main subject… whether it’s runtime or design time, there’s still a need to understand the policies and people/parties involved. If you don’t understand the concepts associated with a service contract (and it’s not just the functional interface) and have people on both sides of the interaction who care about them, governance tools aren’t going to do you any good. If you don’t have people defining the enterprise policies associated with choosing and defining services, again, the tools aren’t going to be used effectively.
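To make the contract point concrete, here is a purely illustrative sketch of runtime policy enforcement. The contract table, the rate-limit policy, and every name in it are assumptions for illustration, not any vendor's governance API — the point is that the tool can only enforce what people have first agreed to:

```python
# Hypothetical registry of consumer-provider contracts, agreed to by
# people on both sides of the interaction before any tool enforces them.
contracts = {
    # (consumer, service) -> agreed policy terms
    ("billing-app", "CustomerLookup"): {"max_requests_per_min": 100},
}

def enforce(consumer, service, requests_this_min):
    """Active enforcement at runtime: no contract, no access;
    contract terms exceeded, request rejected."""
    policy = contracts.get((consumer, service))
    if policy is None:
        raise PermissionError(f"No contract between {consumer} and {service}")
    if requests_this_min > policy["max_requests_per_min"]:
        raise RuntimeError("Contract rate limit exceeded")
    return True
```

If nobody populates the contract table — the people-and-policies part — the enforcement code above has nothing to do, which is the argument in miniature.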
Don’t vote? Don’t complain.
On my way home from work today, the news on the radio was talking about how the precincts in my area were seeing a voter turnout of 30-40%, and viewing it as a good thing. I think it is pathetic. There are many countries where individual citizens don’t have the right to vote, and here we can’t even get a simple majority to show up, and that’s only of the people who have taken the time to register. My parents always voted, and I’m proud to do the same, no matter how insignificant a particular ballot might be. My Dad told me, “if you don’t vote, you have no right to complain if you don’t like the way things turned out.” Think of what could happen in the current political races if even half of those non-voters cast their ballot. I exercised my right this morning. While most Super-Duper Tuesday polls will have closed when this gets read, I hope those of you who live in areas that haven’t voted yet do the right thing and cast your ballot.
February Events
Here are the SOA, BPM, and EA events coming up in February. If you want your events to be included, please send me the information at soaevents at biske dot com. I also try to include events that I receive in my normal email accounts as a result of all of the marketing lists I’m already on. For the most up to date list as well as the details and registration links, please consult my events page. This is just the beginning of the month summary that I post to keep it fresh in people’s minds.
- 2/4 – 2/6: Gartner BPM Summit
- 2/5: ZapThink Practical SOA: Energy and Utilities
- 2/7 – 2/8: Forrester’s Enterprise Architecture Forum 2008
- 2/11: Web Services on Wall Street
- 2/13 – 2/15: ARIS ProcessWorld
- 2/13: ZapThink Webinar: Leverage Document Centric SOA for Competitive Advantage
- 2/19: Webinar: Integrated SOA Governance
- 2/25 – 2/28: BPTG’s Business Process Transformation – BPM Practitioner Course
- 2/25 – 2/27: Global Excellence Awards in BPM & BPM Technology Showcase
- 2/26 – 2/29: ZapThink LZA Bootcamp
The Funding/Scope Relationship
A recent conversation brought up another real world governance example that I think is an excellent illustration of where we need to get to in our SOA (and IT) governance efforts. The town I live in borders the Mississippi River. In early 2007, the city council signed a master development agreement with an area developer to develop the “bottoms” area, essentially farm land that butts up against the river levees. Shortly after that, elections changed the makeup of that council, and now there is a majority that is against this development and has been doing everything they can to keep things from moving forward, so much so that the developer has a suit against the city. That’s not integral to this post, however. What has recently been the topic of discussion is the levee system. The Army Corps of Engineers is going to decertify the levees and require the local municipalities to pay for upgrades. So, the question at hand is, “Who’s going to pay for this?” The master development agreement calls for the developer to pay for the improvements, as there’s a vested interest there, since the levees would be protecting the businesses in the development. If that agreement falls apart amid the current issues, some alternative method of funding would need to be found. The city and/or county really only have taxes at their disposal. Either sales taxes or property taxes would need to be increased.
So what does this have to do with SOA and IT Governance? Well, look at it this way. The master development agreement is like an IT project in many companies these days. The primary focus is on delivering some new businesses to the area, but as part of that, there’s some infrastructure work that needs to happen. The levee upgrade is a clear infrastructure need, but it’s extremely difficult to find funding for it without bundling it in with some other more tangible effort that is closer to the end user, in this case the residents of the surrounding area.
Now, if this effort were the typical IT project, what would happen? The developer would build their shopping centers, distribution centers, warehouses, whatever, but they would put in only the infrastructure required for that. Imagine seeing a levee that surrounds their development, but as soon as you leave their property, you’re back to some crumbling infrastructure which may actually now be worse, because the added strength of the new section puts more pressure on the aging infrastructure. Now, this won’t actually happen, because the Army Corps of Engineers has regulations that will ensure that the levee upgrades are built properly.
What we need to happen in the IT world is to have the same types of regulations, so that when projects request funding, the scope is set such that funding includes building services and infrastructure in the right way, rather than how it is today, where the scope is set solely by the immediate needs of the project. The problem is that we don’t have the regulations that can actually set the scope. We instead have regulations that begin with something like, “If you build a shared service…” which immediately gives the project managers or proposers a way out, because they’ll claim that their services aren’t going to be shared, when in fact they probably haven’t looked into whether there is a shared need, simply because of the risk of expanded scope. I think some form of business domain model, as I’ve discussed in the past (here and here), that classifies domains of capabilities from the perspective of “share-ability” is a step in the right direction.
Gartner EA Summit Podcast
Thanks to Gartner and the SOA Consortium, the panel discussion I was part of in December at the Gartner Enterprise Architecture Summit is now available as a podcast. I’ve referenced it as an enclosure in this entry, so if you subscribe to my normal blog feed in iTunes, you should get it. If you have difficulty, you can access the MP3 file directly here. For all the details on the session, I encourage you to read Dr. Richard Soley’s post over at the SOA Consortium blog.
Gaining Visibility
While I always take vendor postings with a grain of salt, David Bressler of Progress Software had a very good post entitled, “We’re Running Out of Words II.” Cut through the vendor-specific stuff, and the message is that routing all of your requests through a centralized process management hub in order to gain visibility may not be a good thing. In his example, he tells the story of a company that took some existing processes and, in order to gain visibility into the execution of the process steps, externalized the process orchestration into a BPM tool. In doing so, the performance of the overall process took a big hit.
To me, scenarios like this are indicative of one major problem: we don’t think about management capabilities. Solution development is overwhelmingly focused on getting functional capabilities out the door. Obviously, it should be, but more often than not there is no instrumentation. How can we possibly claim that a solution delivers business value over time if there is no instrumentation to provide metrics? Independent of whether external process management is involved, or whether gateway-based interception versus agent-based approaches are used, we need to begin with the generation of metrics. If you’re not generating metrics, you’re going to have poor, or even worse, no visibility.
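As a sketch of what "begin with the generation of metrics" can mean at the code level, here is a minimal, hypothetical instrumentation wrapper — not any particular management product, just the smallest thing that records call counts and elapsed time per operation so that visibility questions can be answered later, wherever the data ends up (BAM, BI, a data warehouse, etc.):

```python
import time
from collections import defaultdict

# In-memory metric store: operation name -> call count and total time.
# A real system would ship these to a monitoring backend instead.
metrics = defaultdict(lambda: {"calls": 0, "total_secs": 0.0})

def instrumented(name):
    """Decorator that records how often and how long an operation runs,
    without changing its functional behavior."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                m = metrics[name]
                m["calls"] += 1
                m["total_secs"] += time.perf_counter() - start
        return inner
    return wrap

# Hypothetical service operation, instrumented at the point of definition.
@instrumented("lookup_customer")
def lookup_customer(cust_id):
    return {"id": cust_id}

lookup_customer(42)
print(metrics["lookup_customer"]["calls"])  # 1
```

The instrumentation is cheap and sits with the solution itself, so the metrics exist regardless of whether a gateway or agent product is ever layered on top.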
Unfortunately, all too often, we’re only focused on problem management and resolution, and the absence of metrics only comes to light if something goes wrong and we need to diagnose the situation. To this, I come back to my earlier statement. How can we have any confidence in saying that things are working properly and providing value without metrics?
Interestingly, once you have metrics, the relationships (and potentially collisions) between the worlds of enterprise system management, business process management, web service management, business activity monitoring, and business intelligence start to come together, as I’ve discussed in the past. I’ve seen one company take metrics from a BPM tool and place them into their data warehouse for analysis and reporting from their business intelligence system. Ultimately, I’d love to see a differentiation based upon what you do with the data, rather than on the mere ability to collect the data.
SOA and the Economy
Ok, so Brenda Michelson called me out today on a conference call that I hadn’t made any comments regarding the impact of the economy on SOA initiatives. Of course, that’s like dangling a carrot in front of my face, so I now feel obligated to at least say something.
As I’ve stated before, I consider myself to be very pragmatic. I try to avoid knee-jerk reactions to highs or lows. So, the recent buzz around how the economy would impact SOA efforts was a bit of a non-event for me. Companies have always had to deal with their own performance and adjust their budgets accordingly, and obviously, the general economy has a role in that performance. Depending on your industry, it could be large, or it could be insignificant. So, whether it’s an SOA project or any other IT project, the decision making process is still the same. The only things that have changed are the resources that are available. If you have the resources to do it, do it. If you don’t, it gets put off. Now, if we’re discussing how SOA initiatives should be prioritized relative to other IT projects, that’s a different debate, and one that is independent of the current funding at hand.
The other angle on this discussion is whether or not successful SOA adoptions will help companies when their revenue streams are not what they’ve been in the past. If we come back to the marketing terms for SOA: reuse, agility, etc., I think these are all realized in the form of IT productivity and efficiency. In other words, I’m either getting more things done in the same standard time frame, or I’m reducing the time it takes to get any particular solution done in comparison to similar efforts in the past. I firmly believe that SOA should be doing this, and therefore, it’s a logical conclusion that companies that have successfully adopted SOA are in a better position to execute in an economic downturn than their competitors that haven’t. Of course, that assumes that information technology plays a key role in a business’ ability to execute and can be a competitive differentiator. There you go, Brenda!
More on Events
Ok, the events page is finally functional. I gave up on WordPress plugins, and am now leveraging an embedded (iframe) Google Calendar. A big thank you to Sandy Kemsley, as she already had a BPM calendar which I’m now leveraging and adding in SOA and EA events. I hope this helps people to find out the latest on SOA, EA, and BPM events.
SOA, EA, and BPM Conferences and Events
I’ve decided to conduct an experiment. In the past, I’ve been contacted about posting information regarding events on SOA, either via a comment to an existing post, or through an email request. I’ve passed on these, but at the same time, I do like the fact that I’ve built up a reader base and my goal has always been to increase the knowledge of others on SOA and related topics.
What I’ve decided to try is having one post per month dedicated to SOA events that will be occurring in the near future, in addition to this permanent page on the blog that people can review when they please. I’m not about to go culling the web for all events, but I will collect all events that I receive in email at soaevents at biske dot com, and post a summary of them including dates, topic, discount code, and link to the detail/registration page. Readers would be free to post additional events in comments; however, I don’t think very many people subscribe to the comment feed, so the exposure would be less than having it in my actual post. I’m going to try to leverage a Google Calendar (iCal, HTML) for this, which will also be publicly available, even if someone wants to include it in their own blog.
While this is essentially free marketing, in reality, I’d make a few more cents by having more visitors to this site than I would if all of these events organizers threw their ads into the big Google Ad Pool with the hopes of it actually showing up on my site. If I get a decent number of events from multiple sources the first two months, I’ll probably keep it up. If I only get events from one source, I’ll probably stop, as it will begin to look like I’m doing marketing just for them and I don’t want anyone to have the perception that my content is influenced by others.
The doors are open. Send your SOA, EA, and BPM events to me at soaevents at biske dot com. Include:
- Date/Time
- Subject/Title
- Description (keep it to one or two sentences)
- URL for more detail/registration
- Discount code, if applicable
If you want to simply send an iCal event, that would probably work as well, and it would make it very easy for me to move it into Google Calendar. My first post will be on Feb. 1, the next on March 1, and so on. I will post events for the current month and the next month, and may include April depending on how many events I receive.
What happened to the OS?
As I listened to some rumblings on a podcast about the soon-to-be-released Microsoft virtualization product, a thought came to my mind: what happened to the operating system?
This isn’t the first time that this thought, or something related to it, has popped into my head. When I was first exposed to virtualization technology, I had the same question. Isn’t it the job of the operating system to manage the underlying physical computing resources effectively? Of course, I also realized how much I ignored the purpose of the operating system in my work as a Java developer and designer. I had become so embedded in the world of Java application servers that I had a difficult time getting my head back on straight when trying to guide some .NET server side development. I kept trying to find something to equate to the Java Application Server that sat on top of Windows Server, when in fact, Windows Server is the application server. After all, operating systems manage the allocation of resources to system processes, which are applications.
Now this isn’t a post to knock virtualization technology. There are many benefits that it can provide, such as the ability to move applications around independent of the underlying physical infrastructure (i.e. it’s not bound to one physical server). But, if we look at the primary benefit being touted, it’s better resource utilization. Before we jump to virtualization, shouldn’t we understand why the existing operating systems can’t do their job to begin with? If our current processes for allocating applications to operating systems is fundamentally flawed, is virtualization technology merely a band-aid on the problem?
Based on my somewhat limited understanding of VMs, you can typically choose between dedicating resources to a VM Guest, allowing the VM Guest to pull from a shared pool, or some combination of the two. Obviously, if we dedicate resources completely, we’re really not much better off than physical servers from a resource utilization standpoint (although I agree there are other benefits outside of resource management). To gain the potential for improved resource utilization, we need to allow the VM Host to reclaim resources that aren’t being used and give them to other processes. At this extreme, we run the risk of thrashing as the VM Guests battle for resources. The theory is that the VM Guests will need resources at different times, so the theoretical thrashing that could occur, won’t. So, we probably take some hybrid approach. Unfortunately, we still now have the risk of wasting resources in the dedicated portion.
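The hybrid approach described above can be modeled in a few lines. This is a toy illustration of the dedicated-plus-shared-pool idea, not how any particular hypervisor actually schedules resources:

```python
# Toy model of hybrid VM resource allocation: each guest owns a dedicated
# slice and may overflow into a shared pool if the pool has capacity.
def allocate(guest_demand, dedicated, shared_pool):
    """Return (resources the guest actually receives, pool remaining).
    The guest is satisfied from its dedicated slice first, then the pool."""
    granted = min(guest_demand, dedicated)
    overflow_need = guest_demand - granted
    from_pool = min(overflow_need, shared_pool)
    return granted + from_pool, shared_pool - from_pool

got, pool_left = allocate(guest_demand=6, dedicated=4, shared_pool=3)
# The guest receives 4 dedicated units plus 2 from the pool; 1 remains.
```

Even in this toy, the trade-off is visible: if the guest's demand never exceeds its dedicated slice, that slice sits idle (wasted resources), and if several guests overflow at once, the pool empties and they contend (the thrashing risk).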
The real source of the problem, in my opinion, is that we do a lousy job of understanding the resource demands of our solutions. We use conservative ballpark estimates, choose some standard configuration and number of app servers, do a capacity test, and send it out into the wild. When it’s in the wild, we don’t collect metrics to see if the real usage (which is probably measured in site visits, not in resources consumed) matches the expected usage, and even if it comes in less, we certainly won’t scale back, because we now have “room to grow”. If we don’t start doing this, we’re still going to have less than optimal resource utilization, whether we use VMs or not. I don’t believe that going to a 100% shared model is the answer either, unless the systems get much more intelligent and take past trends into account in deciding whether to take resources away from a given VM guest or not.
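A sketch of closing that feedback loop: compare what a solution was allocated against what production actually observes, and flag over-allocation as a candidate for scaling back. The threshold, units, and function name are illustrative assumptions:

```python
# Illustrative sketch: compare estimated/allocated resources with
# observed production usage, flagging over-allocated solutions.
def utilization_report(allocated_cpu, observed_cpu_samples, threshold=0.5):
    """If average observed CPU stays below `threshold` of what was
    allocated, the solution is a candidate for scaling back."""
    avg = sum(observed_cpu_samples) / len(observed_cpu_samples)
    return {
        "allocated": allocated_cpu,
        "avg_observed": avg,
        "scale_back_candidate": avg < threshold * allocated_cpu,
    }

# A solution allocated 8 CPUs that averages ~1.5 in production is
# consuming far less than "room to grow" justifies.
report = utilization_report(allocated_cpu=8.0,
                            observed_cpu_samples=[1.5, 2.0, 1.0, 1.5])
print(report["scale_back_candidate"])  # True
```

Whether the report triggers an actual scale-back is, of course, the organizational habit the paragraph says we lack; the code only makes the gap between estimate and reality impossible to ignore.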
Again, this post isn’t a knock on virtualization. One area that I hope virtualization, or more specifically, the hypervisor, will address is the bloat of the OS. Part of the resources go to the operation of the OS itself, and one can argue that there are a lot of things there we don’t need. While we can try to configure the OS to turn these things off, effectively a black-listing approach (everything is on unless it appears on the black list), I prefer the white-list approach: start with the minimal set of capabilities needed (the hypervisor) and then turn other things on if you need them. I expect we’ll see more things like BEA’s WebLogic Virtual Edition that just cut the bloated OS out of the picture. But, as I’ve said, it only gets us so far if we don’t do a better job of understanding how our solutions consume resources from the beginning.
Hug your kids
Today, I’m asking you, my readers, to take some time to hug the children in your lives, be it your own kids, nieces, nephews, grandchildren, friends’ kids, whatever. Today, I got a reminder of just how precious and fragile life, and the life-making process can be. My newest nephew was born on Wednesday afternoon and is now in the wonderful care of the good people in the neonatal intensive care unit at the hospital where my own children were born. My nephew came roughly 3 months early and weighs in at only 2 pounds, 4 ounces. He has a big battle ahead of him over the next few days, and we’re praying that he’s able to win it. With our health care today, we sometimes forget how challenging pregnancy can be. It can be smooth as silk (well, as smooth as the first trimester can be) one day, and then take an unexpected turn the next. I’m thankful for my three wonderful children, my two nieces, and now two nephews, and all of the children I interact with. They are our future and we can’t ever lose sight of that.
Music, Technology, and the Mind
James McGovern asks, “Are folks who are left-handed smarter?” Well, since I’m right-handed, my answer should be no. But then again, I don’t view “handedness” as an absolute, but rather a continuum. There are a lot of left-handers in my family, as well as a certain amount of ambidextrous behavior. I can do several things left-handed (although writing isn’t one of them).
Anyway, James’ post reminded me of an observation that a former boss made once. He was wondering whether there was a correlation between musical ability and technical ability. He noticed that nearly all of our strong technical guys had some talent in music, whether it was singing, playing an instrument, composing, etc. While we never tested his hypothesis, it’s still very interesting.
I’ve always been interested in psychology, social dynamics, etc. and one of the interesting things that I’ve thought about in my own life is how I wound up in the College of Engineering at the University of Illinois, something that most people would associate with a “left-brained” mentality, while my brother went on to become an artist, including a stint with Disney Feature Animation until they closed the Orlando studio, a very creative “right-brained” discipline. The common thread between us, at least in my opinion, is that we were both able to bring thinking associated with the other half of our brain into our daily work. In my brother’s case, one thing he’s been complimented on is his ability to create precise illustrations, etc. without the use of a reference. I can only surmise that his “left brain” analytical skills have something to do with this. In my case, while I am very analytical and logical, I’m also a very big picture thinker, a “right brain” activity. Interestingly, musical ability is associated more with “right brain” thinking.
So, if the hypothesis holds, the best people are the ones that are able to use both parts of their brains effectively. It is certainly a possibility that left handers are able to do this more often because of having to live in a right handed world, so they’ve always had to use the other half of the brain, whereas most right handed people have not. Who knows, but it’s fun to ponder.