Archive for the ‘IT’ Category

Gartner Time Again

The east coast Gartner AADI and EA Summits (that’s Application Architecture, Development, and Integration and Enterprise Architecture) are nearly upon us, and thanks to the SOA Consortium and Gartner, I’ll be part of two end user panels again. In the AADI Summit, I’ll be speaking with Mike Kavis and Melvin Greer on “Measuring the Value of SOA.” I think I’ll bring an interesting perspective to this. That session is at 10:45 AM on Wednesday, near the end of the AADI but right before the beginning of the EA Summit. At the EA Summit, I’ll be part of a panel with John Williams, Maja Tibbling, and Marty Colburn in a session titled “SOA and EA: Lessons Learned from the Trenches.” That one is at 8:30 AM on Friday, so stop by the coffee shop on your way down to the conference center and pick up a double latte. I’ll be at both events (and blogging) for the duration, so stop by and introduce yourself.

Why Service Versioning is Important

Surekha Durvasula had a very good post entitled ‘Why you need a stated “service versioning policy.”’ In it, she presented 6 different scenarios where multiple versions of services may be required. If you don’t think you’ll need to deal with versioning, perhaps you should review these scenarios and determine how you’ll handle them when the need arises.
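As a purely illustrative sketch of what a stated policy has to make routine, here is one way to keep two versions of a service running side by side so existing consumers keep working while new consumers adopt the revised contract (the service name, URIs, and payloads are hypothetical, and version-in-the-URI is only one of several possible approaches):

    from wsgiref.simple_server import make_server
    import json

    def get_customer_v1(customer_id):
        # Original contract: a single "name" field.
        return {"id": customer_id, "name": "Jane Doe"}

    def get_customer_v2(customer_id):
        # Revised contract: name split into first/last; v1 consumers are unaffected.
        return {"id": customer_id, "firstName": "Jane", "lastName": "Doe"}

    # The versioning policy decides which versions stay live and for how long.
    VERSIONS = {"v1": get_customer_v1, "v2": get_customer_v2}

    def app(environ, start_response):
        # Expected path: /customer-service/{version}/customers/{id}
        parts = environ["PATH_INFO"].strip("/").split("/")
        if len(parts) == 4 and parts[0] == "customer-service" and parts[2] == "customers":
            handler = VERSIONS.get(parts[1])
            if handler is not None:
                body = json.dumps(handler(parts[3])).encode("utf-8")
                start_response("200 OK", [("Content-Type", "application/json")])
                return [body]
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"Unknown service or version"]

    if __name__ == "__main__":
        make_server("", 8000, app).serve_forever()

Whatever mechanism you choose, the policy is what tells teams how many versions stay live, for how long, and who pays to keep them running.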

Social Networking and the Enterprise

One of the things I recently started thinking about was the relevance that social networking sites like Facebook, MySpace, Plaxo, LinkedIn, etc. have to enterprises. While individuals certainly use these sites, is there a play for the enterprise? Ann All of IT Business Edge had a post about two weeks ago titled, “Facebook Not So Useful as a Business Tool,” quoting a study from Flowing Data that found “just a tiny percentage of Facebook’s 23,160 applications are business-oriented.” In the comments that followed, one reader named Peter stated that “businesses should take a serious look at integrating social media in their marketing strategy.”

The more I thought about this, the more I agreed with Peter. If your company has individuals as either direct or indirect customers, I’m sure that the marketing department has segmented them into different groups, each with its own strategy for how they will be marketed. I don’t know of any enterprise of significant size in the U.S. that doesn’t have an internet presence, and I’m willing to bet that nearly all of their marketing departments see their web sites as more than just a place to get electronic versions of paper documentation or marketing materials. In other words, the corporate web site has gone through three phases.

  1. The Information Web: In this phase, everything revolved around pushing information out to the visitor.
  2. The Transaction Web: In this phase, the communication is bi-directional, predominantly focused on information flowing from the enterprise and business (i.e. money) coming from the visitor.
  3. The Participatory Web: Here, the emphasis shifts from the individual to the community. It’s not just the enterprise pushing information out, it’s the full ecosystem of all of the site visitors and all of the enterprise’s partners.

The big challenge with this third phase comes down to community. When an enterprise tries to own the community, it will probably work very well for established customers, but it may have a hard time bringing in new members. In contrast, a site focused on enabling communities of all sorts, like Facebook or MySpace, is better positioned for community growth. If this is the case, why wouldn’t an enterprise try to involve these sites in its marketing strategy as a growth tool? The point would not be to own the community, but to attract new members to its community. This is no different than the physical world, where a company establishes a branch office or a retail location in a community. It has to compete with others, but at the same time, if it is perceived as valuable and meeting the needs of the community, it will survive and thrive. The time is ripe to think about how your company can build applications and content for these sites to attract new interest.

Enterprises need to think architecture, not integration

In a blog entry last week and his podcast for this week, David Linthicum lamented the fact that many technology vendors are too focused on integration and not enough on architecture. My opinion on this is that the problem lies first with enterprises, and not with technology vendors.

To explain this, I first need to split the technology product space into two large groups. First, there are products that are pure infrastructure. They are platforms on which someone else builds solutions. This is the familiar space of database platforms, application servers, network appliances, EAI platforms, ESBs, MOM servers, etc. For products in this space, I have absolutely no problem with the vendors providing products that are focused on making integration easier. Does this enable enterprises to build up layers of “glue” in the middle? Absolutely, but at the same time, the enterprise had to have a need (whether perceived or real) to make its integration efforts easier.

The second group of technology products consists of the actual business solution providers, whether it’s a big suite from SAP or Oracle, web-based solutions like Workday and Salesforce.com, or anything in between. These vendors absolutely should be focused on architecture first. At the same time, I don’t think many of these products are being marketed and sold on their integration benefits; they’re being sold on their business capabilities.

So, what’s the problem then? The problem comes when the enterprise IT staff involved with technology identification and selection is too focused on integration, rather than architecture. Almost always, when I hear an enterprise talk about integration, it’s a just-in-time effort. Someone is building some new system, and as part of the design of that system, they decide they need to talk to some other system. No thought of this need occurred in advance from either side of the integration effort. In putting together the solution, the focus is simply on the minimal amount of work to put the glue in the middle. As long as this trend continues, the infrastructure vendors are going to continue to market their products to this space. While it’s a noble quest to try to educate and market at the same time, it’s a risky strategy to present using a different mental model than the one your target audience holds.

The change that needs to occur is that integration needs to be a primary principle, thought about at the time a system is built and placed into production, not afterwards. Normal behavior is to build a solution for my stakeholders and my users, and not think about anything else. In past posts (here, here), I’ve talked about three simple questions that all projects should start thinking about. One of those questions is, “What services does your solution use / expose?” How many projects actually identify anything other than just what their front end consumes? Does anyone see this as a problem?

Let’s come back to the infrastructure vendors. They actually do need to think about architecture and services, but in a different space: management. I’ve railed on this in the past. How many vendors expose all of the capabilities in their user-facing management console through one or more service interfaces? If I want to embrace IT systems automation, how on earth am I going to do that with what these vendors give me? I’m not. I’m going to have to leverage management adapters in my automation environment. Does this sound familiar? It sure sounds like EAI to me.

The best way I see to address this is to think about integration in advance. Don’t think about it at the time someone comes and says, “I need to talk to your system.” Think about it at the time you build your solution and ask the question, “How will other systems need to interact with this?” Yes, this is a bit of predicting the future, and we’ll probably expose things that no one ever uses, but I think an enterprise will be in a better state if it tries to anticipate in advance, thinking about architecture, rather than continuing with today’s approach of integrating on demand.
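To make that last question concrete, here is a minimal sketch of the principle (all names are hypothetical, and this is not any particular vendor’s API): the operations that back the user-facing console are defined as a first-class interface from the start, shown here as a plain Python class, though in practice they would also be exposed over SOAP or HTTP, so an orchestration engine or script can drive them without adapters or screen-scraping in the middle.

    class JobSchedulerService:
        """Operations that back both the admin console and any automated caller."""

        def __init__(self):
            self._jobs = {}

        def submit_job(self, name, command):
            # Anything the console can do is also a callable operation.
            job_id = "job-%d" % (len(self._jobs) + 1)
            self._jobs[job_id] = {"name": name, "command": command, "state": "queued"}
            return job_id

        def cancel_job(self, job_id):
            self._jobs[job_id]["state"] = "cancelled"

        def job_status(self, job_id):
            return self._jobs[job_id]["state"]

    if __name__ == "__main__":
        # The console is one consumer of this interface; a script or an
        # orchestration engine is another, with no extra glue required.
        scheduler = JobSchedulerService()
        job_id = scheduler.submit_job("nightly-extract", "/opt/etl/run.sh")
        print(job_id, scheduler.job_status(job_id))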

Another day, one less vendor

The press releases came out today that SOA Software has bought LogicLibrary, with blogosphere comments from Miko Matsumura, Dana Gardner, and Jeff Schneider. I see this as a step by SOA Software toward the bigger SOA platform players. At this point, most of the players in the SOA platform space now have a registry/repository offering. IBM has WebSphere Registry Repository, Oracle/BEA has AquaLogic Registry Repository (consisting of the OEM’d Systinet and purchased Flashline products), Tibco resells Systinet, SoftwareAG has the former WebMethods/Infravio, Iona has Artix Registry/Repository, SAP has their Enterprise Service Repository, and Microsoft has their Oslo efforts. I think it’s safe to say that the vendors that are trying to be the acquirer rather than the acquired have all realized that a registry/repository is the center of the SOA technology universe. Now if only they could talk to each other easily, along with the CMDBs of the ITIL technology world.

In my “Future of ESBs” post, I talked about how selling an ESB on its own is a difficult proposition because of the relative value that a developer will place on it. The same thing certainly holds true for a registry/repository, and I think the market has shown that to be the case, now that all of the registry/repository providers have been swallowed up by larger fish. It would be interesting to know how many times these products are sold on their own versus being bundled in as a value-add with a larger purchase.

Lobbyists and Governance

I’ve had this topic on my list for some time now. I’ve used analogies to municipal/local/state/federal governance in past posts, and in a conversation, someone commented that they thought I was going to continue the analogy to include lobbyists. I made a mental note, because I knew there were definitely some parallels that could make for good blog fodder.

So, in a typical government, what do lobbyists do? In a nutshell, they do whatever they can to influence the policy makers to establish policies that benefit the lobbyists or whoever they represent. In general, I think most individual voters probably have a negative view of lobbyists, except those whose beliefs happen to align with their own. So, are they a good thing or a bad thing?

Let’s come back to the whole purpose of governance. My definition of governance is that it is the combination of people, policies, and processes that an entity utilizes to achieve a desired behavior. People set policies, and processes ensure they are followed. As a reminder, enforcement processes are only one subset of the processes that can be used; an organization could just as easily focus on education processes rather than enforcement and achieve the desired behavior. I stated earlier that lobbyists try to influence the policy makers (people) to establish policies in the interest of the lobbyists. Where this becomes a problem is when the people involved in governance lose sight of the objective of governance. Lobbyists are frequently associated with or simply referred to as “special interests.” By that term alone, there’s an obvious risk. Policies should be set to achieve the desired behavior of the organization, not the desired behavior of any special interest.

This is actually a frequent problem in the typical corporate enterprise. The first potential scenario is when the desired behavior of the enterprise isn’t well defined. Therefore, the policy makers won’t base their policies on enterprise behavior, but rather on the desired behavior of the people in the organization who have their ear (the lobbyists). This can go down a really bad path, because it’s likely to lead to infighting within the governance structure, and most likely ineffective governance.

The second scenario is when the desired behavior of the organization is well known to the policy makers, but not to the rest of the organization. Once again, the rest of the organization will operate like a bunch of lobbyists, trying to sway policy in their direction so they can do what they think is best. The governance team will likely be perceived as being in an ivory tower and out of touch. The real problem in this scenario is that the constituents in the enterprise don’t know what the desired behavior is, and as a result, they’re guessing. Some will be right, many will be wrong, and all will be unhappy.

A third scenario, which can’t be forgotten, is the role of vendors and other third parties. Once again, their vested interest is not in your desired behavior, but theirs: buy our products, buy our services. You need to be in control of the desired behavior and choose vendors and services that are in alignment, rather than letting them try to change your policies to something more amenable to them.

The whole point of this is that the presence of lobbyists in the entity being governed has the potential for problems. If you see a lot of lobbying in your organization, the first place to go back to is your desired behavior. If that behavior is well understood by the organization, your need for active enforcement should be far less, because people understand and want to do the right thing. If the desired behavior isn’t known by the governors or the constituents, you’ve opened the door to outside influence and controversy. This doesn’t imply that a governor shouldn’t have advisors, but the first question that should always be asked is, “Is this action consistent with the behavior we want?”

The Future of ESBs

Yogish Pai had an interesting post titled, “A decision maker’s concern about ESB.” In it, he provided two quotes, one from a Chief Architect of a financial services company and another from a CTO of a transportation company, both of whom were raising some concerns about leveraging an ESB.

ESBs have been one of the more controversial technology products in quite some time. They’ve been attacked as either rebranded EAI technology or efforts by vendors to “sell SOA” when most of us pundits have stated that you can’t buy SOA. I’ve posted in the past (here, here, and here) on ESBs with more of a neutral approach, discussing capabilities that are needed and simply pointing out that ESBs are one way of providing those capabilities, and that’s still my stance. I’ve had the opportunity to work with companies that had purchased an ESB as well as companies that wouldn’t touch one with a ten foot pole. In both cases, the companies had found a suitable way to provide these capabilities, so you can’t say that one approach was better than the other.

What ultimately will decide the fate of the ESB will probably not be the specific technical capabilities associated with it, but the value that enterprises place on those capabilities. My past posts have stated my preference that the capabilities associated with the space really belong in the hands of operations rather than the hands of developers. As a result, you’d have to compare the cost/value of an ESB or other intermediary to the cost of other network intermediaries, such as switches, load balancers, and proxying appliances. Unfortunately, the ESB space is dominated not by traditional networking companies, but by middleware companies. Consequently, the products are being marketed to developers with feature after feature being thrown in, creating overlap with service hosting platforms, integration brokers, and orchestration engines. This dilutes the benefits of the core capabilities, and if anything, can make it more complicated to get those things done. In addition, these products may now clash with other products in the vendor’s portfolio, putting the sales staff in a difficult position.

The challenge that I see is that the value of a typical network load balancer from the view of a developer is pretty low. From their perspective, the features provided by the load balancer are minimal compared to what they need from the typical application server. As a result, I suspect that ESBs are very likely to become bundled capabilities rather than standalone products. It certainly means that there’s room for open source products, given that developers aren’t placing a lot of value on those capabilities, yet they are necessary. Open source products still need mindshare, however, so it will be interesting to see where this goes.

Followup on WOA SOA…

Since Dave was nice enough to give me another shout out in his podcast this week, I thought I’d follow up in my blog. I thought he did a much better job in discussing my post when he said that these new breeds of applications and services that are available on demand are (my words) another tool in the toolbox of the enterprise architect. There’s no doubt that there are potential cost benefits to these platforms, and a review of them should be something you consider as your architecture evolves. Just remember, however, that they don’t define your architecture any more than any product installed on site defines your architecture. They only define your architecture if you let them, and that’s a bad situation. Rather, define your architecture, and choose the solutions that best fit your needs, whether that means building in house, buying an off-the-shelf product to install on site, or going with a web-based provider. Don’t focus on functionality alone. Make sure the solution aligns with your management needs and your information needs, as best you know their future direction. You need to be in control, not under the control of your vendors.

Some recent podcasts

I wanted to call attention to four good podcasts that I listened to recently. The first is from IT Conversations and the Interviews with Innovators series hosted by Jon Udell. In this one, he speaks with Raymond Yee of UC Berkeley, discussing mashups. I especially liked the discussion about public events, such as getting feeds from the local YMCA. I always wind up entering all of my kids’ games from their various sports teams into iCal; it would be great if I could simply subscribe from somewhere on the internet. Jon himself called out this part of the podcast in his own blog.

The next two are both from Dana Gardner’s Briefings Direct series. The first was a panel discussion from his aptly-renamed Analyst’s Insight series (it used to be SOA Insights when I was able to participate, but even then, the topics were starting to go beyond SOA) that discussed the recent posts regarding SOA and WOA. It was an interesting listen, but I have to admit, for the first half of the conversation, I was reminded of my last post. Throughout the discussion, they kept implying that SOA was equivalent to adopting SOAP and WS-*, and then using that angle to compare it to “WOA,” which they implied was the least common denominator of HTTP, along with either POX or REST. Many people have picked up on one comment, which I believe was from Phil Wainewright, who said, “WOA is SOA that works.” Once again, I don’t think this was a fair characterization. First off, if we look at a company that is leveraging a SaaS provider like Salesforce.com, Salesforce.com is, at best, a service provider within their SOA. If the company is simply using the web-based front end, then Salesforce.com isn’t even a service provider in their SOA, it’s an application provider. Now, you can certainly argue that offerings from Amazon and Google are service providers, and that there are some decent examples of small companies successfully leveraging these services, but we’re still a far cry away from having an enterprise SOA that works, whichever technology you look at. So, I was a bit disappointed in this part of the discussion. The second half of the discussion got into the whole Microhoo arena, which wound up being much more interesting, in my opinion.

The second one from Dana was a sponsored podcast from HP, with Dana discussing their ISSM (Information Security Service Management) approach with Tari Schreider. The really interesting thing in this one was to hear about his concept of the 5 P’s, which was very familiar to me, because the first three were People, Policies, and Process (read this and this). The remaining two P’s were Products and Proof. I’ve stated that products are used to support the process, if needed, typically making it more efficient. Proof was a good addition, which is basically saying that you need a feedback loop to make sure everything is doing what you intended it to. I’ll have to keep this in mind in my future discussions.

The last one is again from IT Conversations, this time from the O’Reilly Open Source Conference Series. It is a “conversation” between Eben Moglen and Tim O’Reilly. If nothing else, it was entertaining, but I have to admit, I was left thinking, “What a jerk.” Now, Eben is clearly a very smart individual, but just as he said that Richard Stallman would have come across as too ideological, he did the exact same thing. When asked to give specific recommendations on what to do, Eben didn’t provide any decent answer; instead he said, “Here’s your answer: you’ve got another 10 years to figure it out.”

Piloting within IT

Something I’ve seen at multiple organizations is problems with the initial implementation of new technology. In the perfect world, every new technology would be implemented using a carefully controlled pilot that exercised the technology appropriately, allowed repeatable processes to be identified and implemented, and added business value. Unfortunately, it’s that last item that always seems to do us in. Any project that has business value tends to operate under the same approach as any project for the business, which usually means schedule first, everything else second. As a result, sacrifices are made, and the project doesn’t have the appropriate buffers to account for the organization’s lack of experience with the technology. Even if professional services are leveraged, there’s still a knowledge gap in relating the product capabilities to the business need.

One suggestion I’ve made is to look inside of IT for potential pilots. This can be a chicken-and-egg situation, because sometimes funding cannot be obtained unless the purchase is tied to a business initiative. IT is part of the business, however, and some funding should be reserved for operating efficiency improvements within IT, just as it should be for other non-revenue-producing areas, such as HR.

BPM technology is probably the best example to discuss this. In order to fully leverage BPM technology, you have to have a deep understanding of the business process. If you don’t understand the processes, there’s no tool that you can buy that will give you that knowledge. There are packaged and SaaS solutions available that will give you their process, but odds are that your own processes are different. Who is the keeper of knowledge about business processes? While IT may have some knowledge, odds are this knowledge resides within the business itself, creating the challenge of working across departments when trying to apply the new technology. These communication gaps can pose large risks to a BPM adoption effort.

Wouldn’t it make more sense to apply BPM technology to processes that IT is familiar with? I’m sure nearly every large organization purchases servers and installs them in its data center. I’m also quite positive that many organizations complain about how long this process takes. Why not do some process modeling, orchestration, and execution using BPM technologies in our own backyard? The communication barriers are far less, the risk is less, and value can still be demonstrated through the improved operational efficiencies.
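To make the suggestion concrete, here is a rough sketch of the kind of explicit process model the server example implies (the steps, owners, and names are illustrative only, not a recommendation of any particular BPM product); much of the value of a pilot like this comes simply from making the steps, owners, and hand-offs visible enough to measure:

    # An illustrative model of a server provisioning process. A BPM suite would
    # let you model, execute, and measure this; even a simple listing like this
    # exposes where the hand-offs (and therefore the delays) are.
    PROVISIONING_PROCESS = [
        ("submit_request",        "requesting project"),
        ("approve_budget",        "finance"),
        ("order_hardware",        "procurement"),
        ("rack_and_cable",        "data center operations"),
        ("install_os_and_tools",  "server engineering"),
        ("hand_over_to_project",  "requesting project"),
    ]

    def run_process(request_id):
        # In a real engine, each step would be a human task or a service call,
        # with elapsed time recorded per step so bottlenecks become visible.
        for step, owner in PROVISIONING_PROCESS:
            print("request %s: step '%s' assigned to %s" % (request_id, step, owner))

    if __name__ == "__main__":
        run_process("REQ-1042")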

My advice if you are piloting new technology? Look for an opportunity within IT first, if at all possible. Make your mistakes on that effort, fine tune your processes, and then take it to the business with confidence that the effort will go smoothly.

Infrastructure in the Cloud

James Urquhart sent me an email about one of his posts and invited me to join the conversation. After reading his post and Simon Wardley’s post, it was interesting enough that I thought I’d throw in my two cents.

The topic of discussion was Google’s new App Engine. Per Google’s site:

Google App Engine lets you run your web applications on Google’s infrastructure. App Engine applications are easy to build, easy to maintain, and easy to scale as your traffic and data storage needs grow. With App Engine, there are no servers to maintain: You just upload your application, and it’s ready to serve your users.
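For context, the “just upload your application” claim wasn’t much of an exaggeration. At the time, a complete App Engine application was roughly the canonical hello-world from the Python SDK documentation, something like the sketch below (reproduced from memory, so treat the details as approximate; it only runs inside the App Engine SDK environment):

    from google.appengine.ext import webapp
    from google.appengine.ext.webapp.util import run_wsgi_app

    class MainPage(webapp.RequestHandler):
        def get(self):
            # Respond to GET / with plain text.
            self.response.headers['Content-Type'] = 'text/plain'
            self.response.out.write('Hello, world!')

    application = webapp.WSGIApplication([('/', MainPage)], debug=True)

    def main():
        run_wsgi_app(application)

    if __name__ == '__main__':
        main()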

The theme of James’ post, and why I think he invited me into the conversation, is whether this matters to the large enterprise. I tend to agree with James. While I think this is cool technology, in its present form it’s probably of little value to the typical large enterprise. At the same time, I would definitely qualify this, and Amazon’s cloud of services, as disruptive technologies. I can’t help but find myself making mental comparisons to Clayton Christensen’s discussion of steel mills. Smaller steel mills came along and catered to the low end, low margin area of the market that the larger, integrated steel mills were happy to give up. Over time, however, those smaller mills expanded their offerings until the business model of the larger mills was completely disrupted. Will something similar occur in the infrastructure space?

There are certainly parallels in the potential markets. Big enterprises are not the target of Google or Amazon, just as the smaller steel mills focused on re-bar rather than on the more expensive and potentially lucrative market for structural beams or sheet metal. One key difference, however, is that it’s hard to figure out who the “big steel mill” is in this case. Clearly, both Google and any major enterprise currently buy servers. So, Google is not disrupting HP, IBM, Dell, etc. What we’re really talking about disrupting is the internal IT data center. In most cases, save for outsourcing the data center to EDS or IGS, there is no business to be disrupted. The right comparison of this to the steel mills would be a comparison of companies that leveraged the products from the smaller mini-mills disrupting the companies that leveraged the products from the larger, integrated mills. While efficient cost controls are certainly part of the equation, there’s much more that goes into the disruption equation.

In the end, it’s very clear to me that tools like Google App Engine are good for the industry as a whole. They cater nicely to the low end of the market, and Google’s size lets it sustain low margins or even a loss on making these services available. Over time, some of the companies that leverage them will become bigger companies, making additional requests of Google, which will in turn evolve the product, with each evolution making it more attractive to a broader set of customers, eventually including the big enterprise.

Aligning REST with Services

I’ve been meaning to call out a blog post from Anne Thomas Manes from back in March, along with a message in the Yahoo SOA group, as they were finally something that, in my opinion, added some useful information to the ever-present REST versus SOAP/WSDL debate. Normally, I stay out of this religious war when it kicks up in the blogs or the Yahoo SOA group, but that’s not to say I don’t care about it. The fact is, I’m a practicing enterprise architect in a Fortune 500 company, so I need to be providing guidance when a team comes to me asking about REST versus SOAP/WSDL.

In most of the conversations about REST and SOAP/WSDL, it’s usually a comparison of a single SOAP endpoint (a single URI) to a single REST endpoint (again, a single URI). Invariably, the conversation winds up being about the uniform interface (GET/PUT/POST/DELETE when using HTTP) versus the non-uniform interface (whatever operations are defined in WSDL) tunneled through the transport (POST when using HTTP) in SOAP. I’ve always felt that this was a bit of an apples-and-oranges debate, because the REST endpoint is exposing a resource while the SOAP endpoint is exposing a service. When thinking of services in a conceptual, technology-independent manner, the mapping to a resource just didn’t seem as straightforward.

The comment that Anne made that helped put things in the right perspective was this:

Service consumers interact with the service through the set of resources it exposes. In other words, the resource model is the interface to the service. Each resource exposes a uniform interface (e.g., GET, PUT, POST, and DELETE), but an individual resource is not the complete service.

This made it very clear. If you’re trying to go from a conceptual service model to a design based on REST, an individual service does not equate to a single REST endpoint. Rather, it equates to a collection of REST endpoints that together comprise the resource model associated with that service. In my opinion, the lack of an understanding around this concept is probably also why most of the “REST” services out there really aren’t REST, but rather are XML over HTTP without the WSDL and SOAP envelope. The people involved are still trying to do a single endpoint comparison and not thinking about the resource model as a whole.
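To illustrate the distinction, here is a minimal sketch (the “order management” service, its URIs, and its data are mine, not Anne’s) of one conceptual service exposed as a small resource model rather than a single endpoint; each resource offers only the uniform interface, but it takes the collection of them to constitute the service:

    from wsgiref.simple_server import make_server
    import json

    # Toy data store standing in for the service's underlying implementation.
    ORDERS = {"1001": {"status": "open", "items": [{"sku": "ABC-1", "qty": 2}]}}

    def reply(start_response, status, body):
        start_response(status, [("Content-Type", "application/json")])
        return [json.dumps(body).encode("utf-8")]

    def app(environ, start_response):
        # One conceptual "order management" service, exposed as several resources:
        #   /orders             the collection of orders
        #   /orders/{id}        an individual order
        #   /orders/{id}/items  the line items belonging to an order
        # Each resource would support the uniform interface; only GET is sketched here.
        method = environ["REQUEST_METHOD"]
        parts = environ["PATH_INFO"].strip("/").split("/")
        if method == "GET" and parts[0] == "orders":
            if len(parts) == 1:
                return reply(start_response, "200 OK", ORDERS)
            if len(parts) == 2 and parts[1] in ORDERS:
                return reply(start_response, "200 OK", ORDERS[parts[1]])
            if len(parts) == 3 and parts[1] in ORDERS and parts[2] == "items":
                return reply(start_response, "200 OK", ORDERS[parts[1]]["items"])
        return reply(start_response, "404 Not Found", {"error": "no such resource"})

    if __name__ == "__main__":
        make_server("", 8080, app).serve_forever()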

Now, I’ll admit that this insight still doesn’t solve the uniform versus non-uniform debate, but I do think that it brings us to the point where a valid comparison of approaches for a particular problem could be taken on within an organization. Thanks Anne!

Dealing with committees

If you work in a typical large IT enterprise, it’s very likely that there are one or more committees that frequently receive presentations from various people in the organization. These can be some of the most painful meetings for an organization, or they can be some of the most productive. Here are some of my thoughts on how to keep them productive.

First, if you are a member of one of these committees, you need to understand your purpose. If you are part of the approval pipeline, then it should be clear. Your job is to approve or deny, period. If you can’t make that decision, then your job is to tell the presenter what information they need to come back with so you can either approve or deny. Unfortunately, many committee members often forget this as they get caught up in the power that they wield. Rather than focusing on their job, they instead focus on pointing out all the things that the presenter did or did not do, regardless of whether those things have any impact on the decision.

Second, the committee should make things as clear as possible for the incoming presenters. I’ve had to endure my fair share of architecture and design reviews where the only guidance I received was, “You need to have an architecture/design review.” At that point, the presenter is left playing a guessing game about what needs to be presented, and it’s likely to be wrong. Nobody likes being stuck in meetings all day long, so give people the information they need to ensure that your time in that weekly approval meeting is well spent.

From the perspective of the presenter, you need to know what you want from the committee, even if it should be obvious. As I’ve stated before, the committee members may easily have lost sight of what their job is, so as the presenter, it’s your job to remind them. Tell them up front that you’re looking for approval, looking for resources, or looking for whatever else you need. Then, at the end of your presentation, explicitly ask them for it. Make sure you leave enough time to do so. It’s your job to watch the clock and make sure you are able to get your question answered, even if it means cutting off questions. Obviously, you should recognize when there is still legitimate debate, and ask appropriately. In the typical approval scenario, you should walk out of the meeting with one of three possibilities:

  1. You receive approval and can proceed (make sure you have all the information you need to take the next step, such as the names of people that will be involved)
  2. You are denied.
  3. The decision is deferred. In this scenario, you must walk out of the meeting knowing exactly what information you need to bring back to the committee to get a decision at your next appearance. Otherwise, you’re at the risk of creating an endless circle of meeting appearances with no progress.

I hope you find these tidbits useful. They may seem obvious, but personally, I find them useful to revisit when I’m in either situation (reviewer or presenter).

Is your vendor the center of the universe?

A recent post from James McGovern reminded me about some thoughts I had after a few different meetings with vendors.

Vendors have a challenge, and it all stems from a view that they can be the center of the universe. A customer buys their product and builds around it; the product thereby becomes the “center of the universe” for that customer, exhibiting a gravitational field that attempts to mandate that all other products abide by its laws of physics. In other words, every other product must integrate with it, but that’s the responsibility of those other products. For reasons I went into in my last post, that doesn’t work well. It’s a very inward-facing view rather than a consumer-oriented one.

The challenge is that even if a vendor didn’t want to come across as the center of the universe, for some customers it is required. For example, if a customer doesn’t have a handle on enterprise identity management, a vendor can shoot themselves in the foot if their product doesn’t provide some primitive identity management capabilities to account for customers that don’t have an enterprise solution. In the systems management space, you frequently hear the term “single pane of glass,” intended for the Tier 1 operations person. Once again, however, every monitoring system that deals with a specialized portion of the infrastructure will have its own console. It’s a difficult challenge to open up that console to other monitoring sources, and just as difficult to open up the data and events to an outside consumer. So what’s an enterprise to do?

To me, it all comes back to architecture. When evaluating these products, you have to evaluate them for architectural fit. Obviously, in order to do that, you need to have an architecture. The typical functional requirements don’t normally constitute an architecture. You can make this as complicated or as simple as you’d like. A passion of mine tends to be systems management capabilities, so I normally address this in an RFI/RFP with just one question:

Are all of the capabilities that are available in your user-facing management console also available as services callable by another system, orchestration engine, or script?

Now, there are obviously follow-ons to this question, but this does serve to open up the communication. Simply put, the best advice for corporate practitioners is to ensure that you are in charge of your architecture and setting the laws of physics for your universe, not the vendors you choose.
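For what it’s worth, this sketch shows the kind of thing the question is probing for: whether a plain script (the URL and payload here are hypothetical and don’t correspond to any real product’s API) can drive the same capability the console offers, say acknowledging an alert, without a vendor-specific adapter in the middle.

    import json
    import urllib.request

    # Hypothetical management endpoint; the point is only that the capability is
    # reachable as a service call rather than buried in a GUI.
    MGMT_BASE = "https://monitoring.example.com/api"

    def acknowledge_alert(alert_id, note):
        payload = json.dumps({"acknowledged": True, "note": note}).encode("utf-8")
        request = urllib.request.Request(
            MGMT_BASE + "/alerts/" + alert_id,
            data=payload,
            headers={"Content-Type": "application/json"},
            method="PUT",
        )
        with urllib.request.urlopen(request) as response:
            return json.loads(response.read())

    if __name__ == "__main__":
        print(acknowledge_alert("alert-4711", "Handled by automated remediation job"))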

More on SOA Organizational Challenges

I just listened to Anne Thomas Manes’ podcast, “What Does it Take to Succeed With SOA?” that was released on Burton Group’s Inflection Point channel. One of the things that Anne pointed out is that many organizations do not have the right culture, especially on the business side, to promote the sharing of services. A culture of sharing, collaboration, and trust is required to be successful. She also pointed out that the IT organization frequently mirrors the organization of the business, and if those business organizations don’t share, it makes things very difficult.

I started thinking about the organizational aspects of this. Many people in IT only have awareness of the highest levels of the business organization, and it may not be apparent that there’s a problem. But here are two common patterns that clearly point out the potential problems. First, there’s what I’ll call regionalization. Whether we’re dealing with global entities, or national companies with regional presence, it’s very likely that they have business units aligned along regions rather than along business capabilities. There may be very valid reasons for doing this, but it must also be recognized that they may all be performing the same business functions, only with the expected regional nuances. While it’s oversimplifying the problem, if a company sells widgets in 52 countries, there should be enough commonality to warrant some common services that all of them can leverage. Second, there’s product specialization. I have first-hand experience with an organization that had separate business units (and associated IT staff) for different products that the company offered. There were opportunities for shared services that were recognized within IT, but never made it to reality because of the cultural challenges within the organization. In this case, the cultural challenges were within IT, but it’s just as likely that the same challenges existed on the business side as well.

As Anne rightly points out in the podcast, somehow we need to present the business value associated with the sharing. In some ways, this shouldn’t be any different than the value that makes companies choose to centralize sales and marketing. One of the big challenges faced within IT, however, is that the associated costs of redundant technology can be a bit harder to quantify. Yes, there are software licensing fees and maintenance agreements, but some of these one-time costs are glossed over in deference to the project schedule. While I’ve never personally been involved with a centralization of sales and marketing, I’m guessing that a big part of the equation was an associated reduction in cost attributable to staff reduction, or in a more positive light, getting the cost to revenue ratio looking better through resource re-allocation. If we’re talking information technology, staff reduction doesn’t necessarily come into play (though it’s certainly a possibility), so that associated cost reduction must come from somewhere else, and at least in my experience, IT isn’t very good at quantifying it. So, across the board, our work is cut out for us, but that’s not to say it can’t be done. The one takeaway is to invest heavily in making your pitch. If you don’t have baseline metrics to show the value improvement, whether in increased revenue or decreased cost, take the time to get them together before trying to make your pitch. It needs to be in terms the business can understand and is used to dealing with.

Disclaimer
This blog represents my own personal views, and not those of my employer or any third party. Any use of the material in articles, whitepapers, blogs, etc. must be attributed to me alone, without any reference to my employer. Use of my employer’s name is NOT authorized.