Making events, EDA, CEP, and SOA interesting

I’m back from my vacation and have a few topics queued up for some blog entries. The first one that I wanted to cover was the topic of events and SOA. The relationship between EDA, CEP, and SOA is one that pops up on a regular basis; however, in my opinion, it still hasn’t reached a sustained level of interest. There was a big peak some time ago when Oracle and Gartner used the ill-fated moniker SOA 2.0 to represent the combination of SOA and EDA, but once the backlash died down, the discussion around events faded back into the background again. Thankfully, it hasn’t gone away completely. One of the more recent publications on the topic came from Rich Seeley with SearchWebServices.com in this article. He stated that, “The relationship of complex event processing (CEP) to service-oriented architecture (SOA) remains, in a word, complex.” Why is this the case?

The article included quotes from Jason Bloomberg of ZapThink comparing EDA to SOA. While I usually agree with the ZapThink guys, I disagree with Jason’s quote that there is no particular reason to distinguish SOA from EDA. Jason points out that all service messages are essentially software events which “contain all the information you’d ever want about the behavior and state of your business.” At a technical level, there’s nothing wrong with Jason’s statement. Where there are differences, however, is in the intent of the message. A message associated with an interaction between a service consumer and a service provider is either a request for the provider to do something or a response from the provider back to that consumer. The fundamental difference, in my opinion, is that these messages are directed at a specific destination. While you can certainly intercept these messages and use them for other purposes (and I’d argue that doing so for business intelligence analysis is a good thing), there’s a risk involved, because it can create unintended side effects. In contrast, events have no requirement to be directed at any particular destination, and typically use a publish-subscribe approach for distribution.
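
To make that distinction concrete, here’s a rough sketch in Python contrasting the two interaction styles. All of the names are made up for illustration; the point is simply that the request is aimed at one specific provider, while the event is published with no particular destination in mind:

```python
# Rough sketch of the intent difference; all names are hypothetical.

class OrderService:
    """A service provider: messages to it are directed at *this* destination."""
    def place_order(self, item: str) -> str:
        # Request/response: the consumer asks this provider to do something
        # and expects an answer back.
        return f"order confirmed for {item}"

class EventBus:
    """A publish-subscribe channel: events have no particular destination."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, event: dict):
        # The publisher neither knows nor cares who is listening.
        for handler in self.subscribers:
            handler(event)

bus = EventBus()
bus.subscribe(lambda e: print("fraud check saw:", e))
bus.subscribe(lambda e: print("analytics saw:", e))

service = OrderService()
print(service.place_order("widget"))                     # directed message
bus.publish({"type": "OrderPlaced", "item": "widget"})   # undirected event
```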

Let me get back to the question of complexity. I will admit that discussions like the paragraph above are part of the reason that people find EDA and CEP hard to grasp. Oftentimes, discussion in the blogosphere will focus on areas of disagreement, losing sight of the areas of agreement. I’d argue that if you talked to Jason and me about the relationship between SOA and EDA, 80% of what you’d hear would be consistent. So, just because Rich Seeley received a number of different takes on SOA, CEP, and EDA doesn’t mean it’s complex. The right question we should be tackling is how to make events, EDA, and CEP more interesting, building on the natural relationship to SOA. As I’ve previously stated in my uptake of CEP post, I don’t think that most organizations are ready for CEP and EDA yet. The debates that are occurring in the blogosphere and press are being driven by people that have a vested interest (typically vendors, but you could argue that niche analyst firms do as well) in creating “buzz” about the topic. As a corporate practitioner, I have no such vested interest except where it makes business sense for my employer. So, I need to ask the question of how to make CEP and EDA relevant (and interesting) to the business. The challenge, and why it was important that I wrote about my difference of opinion with Jason of ZapThink, is that events, on their own, don’t do anything. A service performs some function. It does something. The business can grasp this. An event is just a nugget of information. A collection of events presented to business stakeholders is not going to be very meaningful until you start doing something with them. As a comparison, let’s look at baseball. If you watch or listen to a baseball game, you’ll get a barrage of statistics. Are they useful? Some managers, like Tony LaRussa of my hometown St. Louis Cardinals, have always made extensive use of the data. Has it made him more successful? We’ll probably never know. We can certainly say that he’s been a successful manager, but can we tie it specifically to his use of event capture and analysis? There are probably other managers or baseball pundits that would argue that the cost of collection and analysis isn’t worth it.

The same thing holds true for EDA and CEP. There is a cost associated with the generation of events. There is a cost associated with the analysis of events. What’s missing is the benefit. To get this, we need to analyze the business and come up with suitable justification. For a domain such as risk analytics associated with securities trading, the justification is there. Complex analysis of the trading and news events occurring in real time can result in better-timed market activities with millions of dollars in potential benefits. In other domains, it may not be as crystal clear. If an organization has a stated goal of better knowledge of its customers, it would seem that event capture and analysis could assist in this, but how do we quantify the potential benefits? Just as with SOA, I think a key to this is selecting an appropriate proof-of-concept and then a pilot. Some event capture and analysis can be done without purchasing any new infrastructure. There’s nothing wrong with performing analysis on a week’s worth of data that takes another week to complete if the end result is valuable, business-relevant information. As Jason suggests, you can use service messages as your starting point for analysis, so if you’ve got audit logs of them, you only then need an analysis engine. Every organization already has many of these, and I’m not talking about a BI system. I’m talking about employees. While we may not capture all of the correlations, most of us are pretty good at spotting trends. It’s simply a matter of having someone look at the information that’s already there. Guess what that activity is? It’s business analysis. Do the analysis, understand the business, create the business case, and go make things better.
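
As a sketch of the kind of low-cost analysis I have in mind, suppose you already have an audit log of service messages. Something as simple as the following (the log format and field names are hypothetical) can start surfacing trends without any new infrastructure:

```python
# Hypothetical sketch: mining a week of service audit-log messages for simple
# trends with nothing but a script; the log format and fields are made up.
from collections import Counter

audit_log = [
    {"service": "AccountLookup", "customer": "C-1001"},
    {"service": "AccountLookup", "customer": "C-1001"},
    {"service": "AddressChange", "customer": "C-1001"},
    {"service": "AccountLookup", "customer": "C-2002"},
]

# Which services are used most, and which customers are most active?
by_service = Counter(event["service"] for event in audit_log)
by_customer = Counter(event["customer"] for event in audit_log)

print("requests per service:", dict(by_service))
print("most active customer:", by_customer.most_common(1))
```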

$OA

Two recent posts, by Jeff Schneider and Nick Malik, brought some attention to a very important aspect of SOA: funding it.

Jeff’s post, entitled “Budgeting for SOA,” gives a great breakdown of all aspects of adopting SOA, which, based on my experience, would certainly be a multi-year effort. He begins with establishing the SOA Foundation, which includes a strategy, roadmap, reference architecture, and governance strategy, and continues on with SOA Infrastructure Realization, establishing an SOA Governance Team, performing architectural analysis, training the staff, and then building the services and their consumers. Jeff articulates the costs associated with each of these for a typical organization based upon his experience.

Nick’s post addresses the other side of this, which is where to find the money. In his post, SOA Economic Model: Avoiding Chargeback with Transaction Ratio Funding, Nick calls out his disdain for chargeback models and instead presents an alternate view whereby shared resources are funded out of a fixed operating budget that comes from some form of flat tax. The teams funded by this pool are then allocated funds based upon the number of transactions they process. While Nick presents a very simple model based upon the number of transactions, I’m sure other funding models could be used, since a simple request count doesn’t take into account the complexity of the services, but his model is accurate in that it properly incents the service development teams. The teams that use the services are going to pay the tax no matter what, so they’re also now incented to make use of the services being provided, unlike a chargeback model whereby they pay less if they don’t use the shared services.
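
To make the mechanics of Nick’s model concrete, here’s a minimal sketch of the allocation step as I understand it, with purely hypothetical numbers: a fixed pool funded by the flat tax gets distributed to service teams in proportion to the transactions they process:

```python
# Hypothetical numbers illustrating transaction-based funding: a fixed pool,
# funded by a flat tax on consuming teams, is split among service teams in
# proportion to the transactions each processes.

shared_pool = 1_000_000  # annual budget for shared service teams

transactions = {
    "customer-service-team": 4_500_000,
    "order-service-team": 3_000_000,
    "inventory-service-team": 2_500_000,
}

total = sum(transactions.values())
for team, count in transactions.items():
    share = count / total
    print(f"{team}: ${shared_pool * share:,.0f} ({share:.0%} of transactions)")
```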

I think both of these posts do an excellent job of laying out the possibilities. Unfortunately, most organizations aren’t yet at the state where they have to worry about these factors. It seems that the vast majority of organizations are simply trying to build services out of existing efforts. Unless those efforts are sufficiently broad in scope (and surprisingly, many organizations I’ve seen do have at least one major initiative that typically qualifies), it’s unlikely that services will be created that will have broader applicability. Now I consider myself an optimistic person, but I also recognize that this is not an easy challenge. Getting to what I envision and what Jeff and Nick describe represents a fundamental change in the way software development takes place. Doing this will mean leveraging staff that already operates out of a shared funding model, such as enterprise architecture, since their efforts merely need to be assigned by the EA manager. If the EA group establishes the appropriate strategic models, smaller projects with limited scope can now demonstrate how they will contribute to broader strategic goals, thus warranting funding of shared services, versus doing things the way they’ve always been done. All of this comes back to something that I frequently bring up in discussions around SOA adoption. If the only thing that’s changed within IT is that the same old projects that we’ve been doing for years are now creating services within them, do we really expect to see any kind of dramatic change within IT, or will it just be chalked up as another overhyped trend? I, as an EA, certainly plan on doing everything I can to make sure that’s not the case. We have to be careful not to bite off more than the organization can chew at any given time, but we also need to make sure that we are the impetus for change.

New Greg the Architect video

Tibco was kind enough to let me know that a new Greg the Architect video, Off the Grid, is now available. I just watched it and it is a riot. You can watch it here at the Greg the Architect site, or, if the plugin I just installed works correctly, you should see the YouTube player below this.

Book Review

I had the opportunity to do a review of a book, and then discuss it in a podcast with the author and Dana Gardner. The book is entitled, “Succeeding with SOA: Realizing Business Value Through Total Architecture” and is written by Dr. Paul Brown of TIBCO Software.

You can view a transcript here or listen to the podcast here; I’ve also added it as an enclosure on this entry.

I enjoyed this book quite a bit, and have to point out that it’s not your typical technology-focused SOA book. It presents many of the cultural and organizational aspects behind SOA, and does a pretty good job of it. It tries to offer guidance that works within the typical project-based structures of many IT organizations. While I personally would like to see some of these project-based cultures broken down, this book offers practical advice that can be used today and eventually lead to the cultural changes necessary. Overall, I recommend the book. I found myself thinking, “Boy, if I were writing a book on SOA, these are the things that I’d want to cover.” Give the podcast a listen, and check out the book if you’re interested.

Full disclosure: Outside of receiving a copy of the book to review, I did not receive any payment or other compensation for doing the review or participating in the podcast.

Is Metadata the center of the SOA technology universe?

As reported by many bloggers/analysts/columnists, including this article by Tony Baer, BEA announced Project Genesis. Tony, in his article, stated that “there is considerable speculation that it will build on the [newly announced AquaLogic Enterprise] registry/repository.” I have a couple of comments on the announcement that I wanted to discuss here.

First, we have yet another API in the metadata registry/repository space. Back in February, IBM created a whole bunch of brouhaha when they stated that they felt a new standard was needed for registry/repository communication, concluding that UDDI wouldn’t cut it. Now, we have the Metadata Interoperability Framework. Add this to UDDI, the WSRR protocol, and the HP/Mercury/Systinet Governance Interoperability Framework, and I’m sure this won’t be the end. You could certainly throw in LDAP for good measure, as well. Interestingly, no one that I’ve seen has criticized BEA for this move, but then again, they didn’t publicly come out and state that UDDI won’t cut it, either.

Second, given all of these APIs for metadata communication, it certainly does raise the question of whether metadata is really at the center of the SOA technology universe. If all of the major platform vendors are now having to come up with custom protocols for communication with their registry/repository, there’s clearly a lot going on there. If you think about it, a metadata repository is a bridge between the design/development-time world and the run-time world. Likewise, it also plays a key role between the run-time world and the world of operational analytics. Most people would also agree that most SOA efforts strive to separate non-functional concerns from functional concerns, with a policy-based approach that allows changes to be configured rather than coded.

We’ve certainly come a long way from the early days of the registry/repository, where the focus was solely on design-time discovery of services. Organizations that are focused only on design-time discovery still wrestle with whether they even need a registry/repository prior to reaching some critical mass of services. Vendors like BEA, IBM, and WebMethods are all now pushing the registry/repository for much more than just this. I’ll admit, I have some concerns that we’re getting too far out ahead of the needs of many enterprises. I’m a big proponent of a policy-driven infrastructure for SOA, but I’ll also admit that if you don’t have a sound understanding of your services and their consumers first, it’s not going to reap the benefits that are possible. Only time will tell whether we’re over-engineering based on what-if scenarios or whether this level of sophistication driven by metadata is applicable to the masses.

Assume Enterprise!

One of my pet peeves when it comes to discussing services is when an organization gets into debates over whether a particular service is an “enterprise service” or not. These discussions always drive me nuts. My first question is usually: what difference does it make? Will you suddenly change the way you construct something? It shouldn’t. More often than not, this conversation comes up when a project team wants to take a shortcut and avoid the full analysis that it will take to determine the expected number of consumers, appropriate scoping, etc. Instead, they want to focus exclusively on the project at hand and do only as much as necessary to satisfy that project’s needs. My advice is to always assume that a service is going to be used by the entire enterprise, and if time tells us that it’s only used by one consumer, that’s okay. Unfortunately, it seems that most organizations like to make the opposite assumption: assume that a service will only be used by the particular consumer in mind at that moment unless proven otherwise. This is far easier to swallow in the typical project-based culture of IT today, because odds are the service development team and the service consumer team are the same group, all working on the same project.

The natural argument against assuming that all services are “enterprise” services is that all of our services will be horribly over-engineered, with a bunch of stuff thrown in because someone said, “What if?” The problem with over-engineering a service (or anything else) doesn’t stem from assuming that a service will have enterprise value; it stems from someone coming up with “what if” scenarios in place of analysis techniques that deeply explore the “right” capabilities that a service needs to provide. Analysis isn’t easy, and there’s no magic bullet that will ensure the right questions are asked to uncover this information, but I think many efforts today are not done to the best of our ability. As a result, people make design decisions based on a best guess, which can lead to either over- or under-engineering.

I believe that if you are adopting SOA at an enterprise level, it will result in a fundamental change in the way IT operates and solutions are constructed. Requiring someone to prove that a service is an “enterprise” service before treating it as a service, with appropriate processes and hygiene to manage the service lifecycle, does nothing to promote this culture change, and in fact is an inhibitor to it. Will assuming that all services are enterprise services result in higher short-term costs? Probably. Building something for use by a broader audience is more expensive; plenty of studies have shown that. On the other hand, assuming that all services are enterprise services will position you far better to achieve the long-term cost reductions advocated by SOA.

External Events in Action

I received a press release via email from Xignite entitled “Partnership Delivers Financial Professionals Responsiveness, Collaboration Via Timely Earnings Data.” In this release, Xignite announced their partnership with Wall Street Horizon, a provider of earnings event and calendar information to the investment industry. Xignite will redistribute Horizon’s earnings and events calendar content as part of its street-event-driven series of on-demand financial web services.

While I normally don’t try to be a recycler of press releases from vendors, as I’d much rather comment on things more directly associated with my work as a practicing architect, I’d be very happy to see more and more of these types of releases. Why? In the past, I’ve talked about the importance of events, such as in this post. One of the challenges, however, is that I don’t really feel that there are good sources of events, especially ones that come in from outside of the enterprise (although there are times that I think that outside sources are more likely than internal sources…). Here’s a press release that shows that external sources are appearing and, through partnerships, trying to increase their audience. It would be great if some of the industry consortiums for specific verticals would develop some standards in the event space.

Podcast on RIA and more

Another Briefings Direct SOA Insights podcast has been posted by Dana Gardner in which I’m a panelist. In this edition, Dana, Joe McKendrick, independent blogger Barb Darrow, and I discussed the role of RIA and rich media with SOA and the impact of associated technologies, such as Flash, AJAX, and Silverlight, on the space. You can find a full transcript here or listen to it here. You can also subscribe via iTunes.

Back in the High Life

Okay, well maybe not the “High Life”, but I’ve had that Steve Winwood song in my head. On Monday, I am returning back to corporate life after nearly a year with MomentumSI. In a nutshell, a year as a consultant has shown me that the corporate world is where I am most comfortable, and best suited for my career goals. MomentumSI treated me very well, and I’m very impressed with their team and their offerings. I learned a lot from the excellent team that they have, and do plan on keeping in touch with them, offering insight from the corporate practitioner’s perspective as they continue their success. I certainly thank Jeff, Alex, Tom, and the rest of the MomentumSI team for the opportunity.

I’m not going to reveal where I’m going, other than to say that it’s a Fortune 500 company in the St. Louis metro area where I reside, and I’m not returning to A.G. Edwards/Wachovia (AGE isn’t a Fortune 500 company, anyway). I’ll be an enterprise architect, involved with SOA and other cool architecture topics. While I’m sure people will figure out where I’m working, this blog represents my own personal thoughts and opinions, and not those of my employer or anyone else (and there’s a disclaimer on the right-hand side of the blog that states exactly that). I’m very happy that I’m going somewhere that doesn’t mind that I’m a blogger, and I fully intend to adhere to their policies regarding it. So, it’s back to the world of big IT and corporate politics for me, and I’m looking forward to it. While my colleague James McGovern has lamented the lack of corporate EA bloggers in the past, he can now add me back to the list!

Thank you Steve Jobs!

As reported yesterday, all of the early iPhone adopters who aren’t already receiving some form of a rebate (like the Apple employees who got a free iPhone) will receive a $100 store credit for use at the Apple Store. I did not expect this, and I wasn’t one who was crying sour grapes, but I’m very happy to be able to put it toward my eventual purchase of Leopard in October, or iWork ’08 sometime between now and Christmas. Hopefully there won’t be a bunch of people complaining that it should be $200 rather than $100. Apple was under no obligation to do anything, and as Steve said in his open letter, this is what happens when you make technology purchases. Technology either gets better or cheaper; the important thing is to be happy with your purchase the day you make it and ensure that you feel it is money well spent at that time.

The pains of being an early adopter

Steve Jobs and Apple cut the price of the 8GB iPhone to $399 from the $599 that I paid for it. This is a very unusual move by Apple, as they traditionally have not changed their price points, but instead offered more limited capabilities at a lower price point. I think it is a smart move, however, as it puts the price point at a much closer level to the phones that are considered its competitors. Unfortunately, I bought my iPhone on day 1, but I’m not going to complain. Sure, I’d love to have that $200 back, but the ultimate question we all must ask is whether or not the money spent has been worth it. For me, it’s a resounding yes.

As for the other announcements, the key question is whether people will choose to keep their old cellphone and get the iPod touch. Personally, if I were buying an iPod, I’d certainly go for the iPod touch, regardless of whether or not I wanted the Wi-Fi web browsing. The quality of the video is a no-brainer, and at least for my use, 16GB would be fine. On the topic of Wi-Fi, however, I have to admit that the only time I use Wi-Fi on my phone is at home, and on the rare occasion that I’m in a restaurant with free Wi-Fi. Probably 95% of my usage is over the EDGE network, so the Wi-Fi capability isn’t as important to me. But, given that there’s probably a large contingent of iPod owners in the 17-24 range that are leveraging the Wi-Fi networks of their university or college, I’m sure this will be a big win.

Is it about the technology or not?

Courtesy of Nick Gall, this post from Andrew McAfee was brought to my attention. Andrew discusses a phrase which many of us have either heard or used, especially in discussions about SOA: “It’s not about the technology.” He posits that there are two meanings behind this statement:

  1. “The correct-but-bland meaning is ‘It’s not about the technology alone.’ In other words a piece of technology will not spontaneously or independently start delivering value, generating benefits, and doing precisely what its deployers want it to do.”
  2. “The other meaning … is ‘The details of this technology can be ignored for the purposes of this discussion.’ If true, this is great news for every generalist, because it means that they don’t need to take time to familiarize themselves with any aspect of the technology in question. They can just treat it as a black box that will convert specified inputs into specified outputs if installed correctly.”

In his post, Nick Gall states that discussions that are operating around the second meaning are “‘aspirational’ — the entire focus is on architectural goals without the slightest consideration of whether such goals are realistically achievable given current technology trends. However, if you try to shift the conversation from aspirations to how to achieve them, then you will inevitably hear the mantra ‘SOA is not about technology.'”

So is SOA about the technology or not? Nick mentions the Yahoo SOA group, of which I’m a member. The list is known for many debates on WS-* versus REST, and even some Jini discussions. I don’t normally jump into these technology debates, not because the technology doesn’t matter, but because I view these as implementation decisions that must be made based upon your desired capabilities and the relative priorities of those capabilities. Anne Thomas Manes makes a similar point in her response to these blogs.

As an example, back in 2006 the debate around SOA technology was centered squarely on the ESB. I gave a presentation on the subject of SOA infrastructure at Burton Group’s Catalyst conference that summer which discussed the overlapping product domains for “in the middle” infrastructure, which included ESBs. I specifically crafted my message to get people to think about the capabilities and operational model first, determine what your priorities are, and then go about picking your technology. If your desired capabilities are focused on run-time operations (as opposed to a development activity like orchestration), and if your developers are heavily involved with the run-time operations of your systems, technologies that are very developer-focused, such as most ESBs, may be your best option. If your developers are removed from run-time operations, you may want a more operations-focused tool, such as a WSM or XML appliance product.

This is just one example, but I think it illustrates the message. Clearly, making statements that flat out ignore the technology is fraught with risk. Likewise, going deep on the technology without a clear understanding of the organization’s needs and culture is equally risky. You need to have balance. If your enterprise architects fall into Nick’s “aspirational” category, they need to get off their high horse and work with the engineers that are involved with the technology to understand what things are possible today and what things aren’t. They need to be involved with the inevitable trade-offs that arise with technology decisions. If you don’t have enterprise architects, and have engineers with deep technical knowledge trying to push technology solutions into the enterprise, they need to be challenged to justify those solutions, beginning with a discussion of the capabilities provided, not of the technology providing them. Only after agreement on the capabilities can we (and should we) enter a discussion on why a particular technology is the right one.

Composite Applications

Brandon Satrom posted some of his thoughts on the need for a composite application framework, or CAF, on his blog and specifically called me out as someone from whom he’d like to hear a response. I’ll certainly oblige, as inter-blog conversations are one of the reasons I do this.

Brandon’s posted two excerpts from the document he’s working on, here and here. The first document tries to frame up the need for composition, while the second document goes far deeper into the discussion around what a composite application is in the first place.

I’m not going to focus on the need for composition, for one very simple reason. If we look at the definition presented in the second post, as well as the one articulated by Mike Walker in his followup post, composite applications are ones which leverage functionality from other applications or services. If this is the case, shouldn’t every application we build be a composite application? There are vendors out there who market “Composite Application Builders,” which can largely be described as EAI tools focused on the presentation tier. They contain some form of adapter for third-party applications and legacy systems that allows functionality to be accessed from a presentation tier, rather than acting as a general-purpose service enablement tool. Certainly, there are enterprises that have a need for such a tool. My own opinion, however, is that this type of approach is a tactical band-aid. By jumping to the presentation tier, there’s a risk that these integrations are all done from a tactical perspective, rather than taking a step back and figuring out what services need to be exposed by your existing applications, completely separate from the construction of any particular user-facing application.

So, if you agree with me that all applications will be composite applications, then what we need is not a Composite Application Framework, but a Composition Framework. It’s a subtle difference, but it gets us away from the notion of tactical application integration and toward the strategic notion of composition simply being part of how we build new user-facing systems. When I think about this, I still wind up breaking it into two domains. The first is how to allow user-facing applications to easily consume services. In my opinion, there’s not much different here from the things you need to do to make services easily consumable in general, regardless of whether or not the consumer is user-facing. The assumption needs to be that a consumer is likely to be using more than one service, and that they’ll have a need to share some amount of data across those services. If the data is represented differently in those services, we create work for the consumer. The consumer must translate and transform the data from one representation to one or more additional representations. If this is a common pattern for all consumers, this logic will be repeated over and over. If our services all expose their information in a consistent manner, we can minimize the amount of translation and transformation logic in the consumer and implement it once in the provider. Great concept, but also a very difficult problem. That’s why I use the term consistent, rather than standard. A single messaging schema for all data is a standard, and by definition consistent, but I don’t think I’ll get too many arguments that coming up with that one standard is an extremely difficult, and some might say impossible, task.
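
To illustrate what I mean by consistency, here’s a small hypothetical sketch. Two services expose the same customer data in different shapes; translating to a consistent representation once, at the provider, spares every consumer from writing its own transformation logic:

```python
# Hypothetical sketch: without consistency, every consumer translates each
# service's representation itself; with a consistent representation, each
# provider translates once at its own boundary.

# What consumers would otherwise see: each service has its own shape.
crm_record = {"cust_name": "Jane Doe", "cust_id": "C-1001"}
billing_record = {"customerName": "Jane Doe", "accountNumber": "C-1001"}

def canonical_customer(name: str, customer_id: str) -> dict:
    """The consistent representation agreed on by providers (names made up)."""
    return {"customer": {"id": customer_id, "name": name}}

# Each provider maps its internal shape to the consistent one, once...
from_crm = canonical_customer(crm_record["cust_name"], crm_record["cust_id"])
from_billing = canonical_customer(billing_record["customerName"],
                                  billing_record["accountNumber"])

# ...so consumers can correlate data across services with no per-consumer
# translation and transformation logic.
assert from_crm == from_billing
```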

Beyond this, what other needs are there that are specific to user-facing consumers? Certainly, there are technology decisions that must be considered. What’s the framework you use for building user-facing systems? Are you leveraging portal technology? Is everything web-based? Are you using AJAX? Flash? Is everything desktop-based using .NET and Windows Presentation Foundation? All of these things have an impact on how your services that are targeted for use by the presentation tier must be exposed, and therefore must be factored into your composition framework. Beyond this, however, it really comes down to an understanding of how applications are going to be used. I discussed this a bit in my Integration at the Desktop posts (here and here). The key question is whether or not you want a framework that facilitates inter-application communication on the desktop, or whether you want to deal with things in a point-to-point manner as they arise. The only way to know is to understand your users, not through a one-time analysis, but through continuous communication, so you can know whether or not a need exists today, and whether or not a need is coming in the near future. Any framework we put in place is largely about building infrastructure. Building infrastructure is not easy. You want to build it in advance of need, but sometimes gauging that need is difficult. Case in point: Lambert St. Louis International Airport has a brand new runway that essentially sits unused. Between the time the project was funded and completed, TWA was purchased by American Airlines, half of the flights in and out were cut, Sept. 11th happened, etc. The needs changed. They have great infrastructure, but no one to use it. Building an extensive composition framework at the presentation tier must factor in the applications that your users currently leverage, the increased use of collaboration and workflow technology, the things that the users do on their own through Excel, web-based tools, and anything else they can find, how their job function is changing according to business needs and goals, and much more.

So, my recommendations in this space would be:

  1. Start with consistency of data representations. This has benefits for both service-to-service integration, as well as UI-to-service integration.
  2. Understand the technologies used to build user-facing applications, and ensure that your services are easily consumable by those technologies.
  3. Understand your users and continually assess the need for a generalized inter-application communication framework. Be sure you know how you’ll go from a standard way of supporting point-to-point communication to a broader communication framework if and when the need becomes concrete.

SOA and ROI

I finally decided to post regarding the Nucleus Research/KnowledgeStorm study that many of the SOA bloggers have been commenting about. In the InfoWorld article by Paul Krill, David O’Connell, senior analyst at Nucleus, is quoted as saying, “Only a minority of companies are getting a return on investment of SOA.”

Like many others, I don’t put a lot of faith in anything that tries to associate ROI and SOA. ROI should be addressed at the business initiative level, i.e. something with quantifiable business benefit. Opening up a new store location has quantifiable business benefits. The elimination of certain paperwork associated with an employee hiring process can have quantifiable business benefits. SOA isn’t a project, but rather it’s a way of approaching the technical solutions within a project. I can apply the principles of SOA to the automation of employee hiring processes. I can apply the principles of SOA to the technology pieces associated with opening a new store.

If we want to narrow the discussion to something closer to where SOA can have an impact, we can look at development costs. As has been stated by others, however, this really only deals with the area of reuse. How do you capture the ability of IT to work closely with the business and save time on analysis because of a better mutual understanding of how technology and business come together? It should be reflected in lower development costs and quicker time to market, but at that level, things get fuzzy. Ultimately, the more important point is whether or not business value is being produced. The development cost of the technology is just one piece of the puzzle in implementing a business solution. IT should always be striving to improve itself and continue to bring those costs down. Do you even know what those costs are? Are you collecting metrics about your development processes to know whether things are getting better or worse? Even if you want to attach ROI to SOA, if you don’t have the before and after numbers, all you’re getting is someone’s gut feeling.

This entry isn’t meant to say that we should simply ignore ROI and just do SOA because it’s the right thing to do. That sort of blind adoption can be just as damaging as not doing anything. The point is that discussions around ROI at the executive level should be about business benefits. You can’t just define and run an SOA project. You run business projects and apply SOA within them. The ROI for the executives is based upon the ROI of the business project, of which SOA is just one piece.

Cool iPhone Feature

On a whim, I just discovered that if you navigate to a Google Maps URI within Safari on the iPhone, it will launch the native Google Maps application rather than staying within Safari. It even works for directions. For example, click on this link from your iPhone, and you’ll get directions from the St. Louis Airport to Busch Stadium. This is a pretty slick way of providing integration between the native iPhone apps and the Web.
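
For the curious, the directions link I’m describing is just a normal Google Maps URL. Here’s a hypothetical sketch of building one; the saddr/daddr query parameters are the ones Google Maps used for start and destination addresses at the time:

```python
# Hypothetical sketch of a Google Maps directions URL like the one described;
# saddr/daddr were the start- and destination-address query parameters.
from urllib.parse import urlencode

params = {
    "saddr": "Lambert-St. Louis International Airport, St. Louis, MO",
    "daddr": "Busch Stadium, St. Louis, MO",
}
url = "http://maps.google.com/maps?" + urlencode(params)
print(url)  # opening this in Safari on the iPhone launches the Maps app
```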
