Archive for the ‘IT’ Category

CapeClear and Workday

I saw the announcements yesterday that Workday had acquired CapeClear. This is an interesting acquisition. At first glance, I thought this was just normal evolution in the ERP space, except that Workday just happens to be a hosted solution. After all, Oracle and SAP both have integration products that enable integration with their backend applications, so why not Workday? The part of this that is a bit distressing, however, is that Workday is a new entrant into this space. A big reason that these integration products exist is that every application had its own proprietary integration approach. As the need to integrate became more and more important, the desire to standardize integration approaches increased, leading to technologies like Web Services. Workday, as a relatively new player in the space, should have been able to go with standardized integration approaches from the start, significantly reducing the need for integration middleware. There will always be a need for something in the middle, but it should start to look more like network devices than complex middleware. If this theory is true, then why would Workday need to acquire an integration middleware company like CapeClear? Perhaps the reasoning is that CapeClear had always tried to be more than just a mediation fabric. After all, I had railed on CapeClear in the past when David Clarke said, “We consider ESB the principal container for business logic. This is the next generation application server.” It’s likely that Workday actually used them in this fashion, rather than as the lightweight mediation fabric that I prefer. If that’s the case, then it’s entirely possible that Workday was in a position where it would be too expensive to migrate to a traditional Java application server, while CapeClear, one of the few SOA startups left standing, was facing rough economic times ahead. The actions that have taken place, therefore, may well have been the best course for both parties.
It remains to be seen how well Workday can support the existing CapeClear customers who are using it as a mediation fabric, since Workday’s bread and butter is the ERP system, not middleware.

Why do we make things so difficult?

I’ve recently been trying to help out with an issue which has required me to roll up my sleeves a bit, and unfortunately it has brought back all too familiar memories. Our computer systems (and often the documentation that goes along with them) simply make things way too difficult. In my opinion, the thing that I’m trying to do should be a relatively straightforward function, but as I’ve dug into it, I just seem to run into an endless set of configuration parameters that need to be specified. Now, while I’m pretty far removed from my days as a developer, I still consider myself tech savvy and a very quick learner. I can’t help but think what the average Joe must go through to try to make these things work. It’s almost as if these systems were written to ensure a marketplace for systems integrators and other consultants.

This all comes back to usability and human-computer interaction, areas that have always been a passion of mine. If your products aren’t usable, it’s simply going to create frustration and distrust. If your documentation or support channels are equally poor, the situation will only deteriorate. What’s even worse is when there isn’t a workaround, and the only option left to a user is to find an expert who can help. As a user, I don’t like to be painted into a corner where I have no options. We need to keep these things in mind when we build our own systems for our business partners. If we design systems that aren’t usable and require that the business come find someone in IT every time they try to perform certain operations, that’s a recipe for disaster. If you don’t have a usability team in your organization, I strongly urge you to find some experts and start building one.

Tools Support Governance, Not Define It

I was surprised at David Linthicum’s latest blog entry. Normally, he’s pretty good about emphasizing that you can’t buy an SOA, but in his “Defining SOA Governance” post, a lot of the conversation was very tool-centric. He stated the following.

Key components of design time SOA governance include:

  • A registry and/or repository for the tracking of service design, management, policy, security, and testing artifacts.
  • Design tools, including service modeling, dependency tracking, policy creation and management, and other tools that assist in the design of services.
  • Deployment tools, including service deployment, typically through binding with external development environments.
  • Links to testing tools and services, providing the developer/designer the ability to create a test plan and testing scenarios, and then leverage service testing technology.

On the runtime governance side of things, he did preface the tool capability discussion with this statement: “Thus, runtime governance is the process of enforcing and implementing those policies at service run time.”

I’ve said it before, and I’ll say it again. Governance is about people, policies, and process. Tooling really only comes into play when you start looking at the process portion of the equation. I don’t want to dismiss tooling, because it absolutely is an important part of the governance equation, but if you don’t have the people or the policies, tools won’t help.

The other thing that I want to call out is how “SOA Management” has morphed into “Runtime SOA Governance.” Back in March of 2006, I was on a panel at the InfoWorld SOA Executive Forum and teased the panelist from AmberPoint about hijacking the term, but I’ve relented a bit simply because these terms all have preconceived notions about them, and the use of the term governance probably gets people thinking in the right direction. For many people, SOA Management may imply the kind of passive monitoring that people associate with traditional systems management (not that it should be passive, but that’s the perception many have). Runtime SOA Governance, however, is rooted in the notion of the active enforcement of policies associated with the consumer-provider interaction. If a change in marketing terms helps people understand it, I’m all for it.

But back to the main subject… whether it’s runtime or design time, there’s still a need to understand the policies and people/parties involved. If you don’t understand the concepts associated with a service contract (and it’s not just the functional interface) and have people on both sides of the interaction who care about them, governance tools aren’t going to do you any good. If you don’t have people defining the enterprise policies associated with choosing and defining services, again, the tools aren’t going to be used effectively.

February Events

Here are the SOA, BPM, and EA events coming up in February. If you want your events to be included, please send me the information at soaevents at biske dot com. I also try to include events that I receive in my normal email accounts as a result of all of the marketing lists I’m already on. For the most up-to-date list as well as the details and registration links, please consult my events page. This is just the beginning-of-the-month summary that I post to keep it fresh in people’s minds.

  • 2/4 – 2/6: Gartner BPM Summit
  • 2/5: ZapThink Practical SOA: Energy and Utilities
  • 2/7 – 2/8: Forrester’s Enterprise Architecture Forum 2008
  • 2/11: Web Services on Wall Street
  • 2/13 – 2/15: ARIS ProcessWorld
  • 2/13: ZapThink Webinar: Leverage Document Centric SOA for Competitive Advantage
  • 2/19: Webinar: Integrated SOA Governance
  • 2/25 – 2/28: BPTG’s Business Process Transformation – BPM Practitioner Course
  • 2/25 – 2/27: Global Excellence Awards in BPM & BPM Technology Showcase
  • 2/26 – 2/29: ZapThink LZA Bootcamp

The Funding/Scope Relationship

A recent conversation brought up another real world governance example that I think is an excellent illustration of where we need to get to in our SOA (and IT) governance efforts. The town I live in borders the Mississippi River. In early 2007, the city council signed a master development agreement with an area developer to develop the “bottoms” area, essentially farm land that butts up against the river levees. Shortly after that, elections changed the makeup of that council, and now there is a majority that is against this development and has been doing everything they can to keep things from moving forward, so much so that the developer has a suit against the city. That’s not integral to this post, however. What has recently been the topic of discussion is the levee system. The Army Corps of Engineers is going to decertify the levees and require the local municipalities to pay for upgrades. So, the question at hand is, “Who’s going to pay for this?” The master development agreement calls for the developer to pay for the improvements, as there’s a vested interest: the levees would be protecting the businesses in the development. If that agreement falls apart amid the current issues, some alternative method of funding would need to be found. The city and/or county really only have taxes at their disposal, so either sales taxes or property taxes would need to be increased.

So what does this have to do with SOA and IT Governance? Well, look at it this way. The master development agreement is like an IT project in many companies these days. The primary focus is on delivering some new businesses to the area, but as part of that, there’s some infrastructure work that needs to happen. The levee upgrade is a clear infrastructure need, but it’s extremely difficult to find funding for it without bundling it in with some other more tangible effort that is closer to the end user, in this case the residents of the surrounding area.

Now, if this effort were the typical IT project, what would happen? The developer would build their shopping centers, distribution centers, warehouses, whatever, but they would put in only the infrastructure required for that. Imagine seeing a levee that surrounds their development, but as soon as you leave their property, you’re back to some crumbling infrastructure, which may actually now be worse, because the added strength of the new section puts more pressure on the aging infrastructure. Now, this won’t actually happen, because the Army Corps of Engineers has regulations that will ensure that the levee upgrades are built properly.

What we need in the IT world is the same type of regulation, so that when projects request funding, the scope is set such that the funding includes building services and infrastructure in the right way, rather than how it is today, where the scope is set solely by the immediate needs of the project. The problem is that we don’t have the regulations that can actually set the scope. We instead have regulations that begin with something like, “If you build a shared service…” which immediately gives the project managers or proposers a way out, because they’ll claim that their services aren’t going to be shared, when in fact they probably haven’t even looked into whether there is a shared need, simply because of the risk of expanded scope. I think some form of business domain model, as I’ve discussed in the past (here and here), that classifies domains of capabilities from the perspective of “share-ability” is a step in the right direction.

Gaining Visibility

While I always take vendor postings with a grain of salt, David Bressler of Progress Software had a very good post entitled, “We’re Running Out of Words II.” Cut through the vendor-specific stuff, and the message is that routing all of your requests through a centralized process management hub in order to gain visibility may not be a good thing. In his example, he tells the story of a company that took some existing processes and, in order to gain visibility into the execution of the process steps, externalized the process orchestration into a BPM tool. In doing so, the performance of the overall process took a big hit.

To me, scenarios like this are indicative of one major problem: we don’t think about management capabilities. Solution development is overwhelmingly focused on getting functional capabilities out the door. Obviously, it should be, but more often than not there is no instrumentation. How can we possibly claim that a solution delivers business value over time if there is no instrumentation to provide metrics? Independent of whether external process management is involved, whether gateway-based interception is used versus agent-based approaches, etc., we need to begin with the generation of metrics. If you’re not generating metrics, you’re going to have poor, or even worse, no visibility.
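To make the point concrete, generating metrics doesn’t have to be heavyweight. Here’s a minimal sketch (Python purely for illustration; the operation name and the in-memory sink are made up, standing in for whatever monitoring infrastructure is actually in place):

```python
import functools
import time

# In-memory sink standing in for a real monitoring system.
metrics = []

def instrumented(operation_name):
    """Record latency for each invocation of the wrapped function."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                metrics.append({"operation": operation_name,
                                "elapsed_ms": elapsed_ms})
        return wrapper
    return decorator

@instrumented("order.lookup")
def lookup_order(order_id):
    # Placeholder business logic.
    return {"id": order_id, "status": "shipped"}
```

Whether those records flow to a BPM tool, a gateway, or a data warehouse is a separate decision; the point is that the metrics exist at all.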

Unfortunately, all too often, we’re only focused on problem management and resolution, and the absence of metrics only comes to light if something goes wrong and we need to diagnose the situation. To this, I come back to my earlier statement. How can we have any confidence in saying that things are working properly and providing value without metrics?

Interestingly, once you have metrics, the relationships (and potential collisions) between the worlds of enterprise system management, business process management, web service management, business activity monitoring, and business intelligence start to come together, as I’ve discussed in the past. I’ve seen one company take metrics from a BPM tool and place them into their data warehouse for analysis and reporting from their business intelligence system. Ultimately, I’d love to see differentiation based upon what you do with the data, rather than on the mere ability to collect the data.

SOA and the Economy

Ok, so Brenda Michelson called me out today on a conference call that I hadn’t made any comments regarding the impact of the economy on SOA initiatives. Of course, that’s like dangling a carrot in front of my face, so I now feel obligated to at least say something.

As I’ve stated before, I consider myself to be very pragmatic. I try to avoid knee-jerk reactions to highs or lows. So, the recent buzz around how the economy would impact SOA efforts was a bit of a non-event for me. Companies have always had to deal with their own performance and adjust their budgets accordingly, and obviously, the general economy has a role in that performance. Depending on your industry, it could be large, or it could be insignificant. So, whether it’s an SOA project or any other IT project, the decision making process is still the same. The only things that have changed are the resources that are available. If you have the resources to do it, do it. If you don’t, it gets put off. Now, if we’re discussing how SOA initiatives should be prioritized relative to other IT projects, that’s a different debate, and one that is independent of the current funding at hand.

The other angle on this discussion is whether or not successful SOA adoption will help companies when their revenue streams are not what they’ve been in the past. If we come back to the marketing terms for SOA: reuse, agility, etc., I think these are all realized in the form of IT productivity and efficiency. In other words, I’m either getting more things done in the same time frame, or I’m reducing the time it takes to get any particular solution done in comparison to similar efforts in the past. I firmly believe that SOA should be doing this, and therefore, it’s a logical conclusion that companies that have successfully adopted SOA are in a better position to execute in an economic downturn than their competitors that haven’t. Of course, that assumes that information technology plays a key role in a business’ ability to execute and can be a competitive differentiator. There you go, Brenda!

SOA, EA, and BPM Conferences and Events

I’ve decided to conduct an experiment. In the past, I’ve been contacted about posting information regarding events on SOA, either via a comment to an existing post, or through an email request. I’ve passed on these, but at the same time, I do like the fact that I’ve built up a reader base and my goal has always been to increase the knowledge of others on SOA and related topics.

What I’ve decided to try is having one post per month dedicated to SOA events that will be occurring in the near future, in addition to this permanent page on the blog that people can review when they please. I’m not about to go culling the web for all events, but I will collect all events that I receive in email at soaevents at biske dot com, and post a summary of them including dates, topic, discount code, and link to the detail/registration page. Readers are free to post additional events in comments; however, I don’t think very many people subscribe to the comment feed, so the exposure would be less than having it in my actual post. I’m going to try to leverage a Google Calendar (iCal, HTML) for this, which will also be publicly available, even if someone wants to include it in their own blog.

While this is essentially free marketing, in reality, I’d make a few more cents by having more visitors to this site than I would if all of these event organizers threw their ads into the big Google Ad Pool with the hopes of them actually showing up on my site. If I get a decent number of events from multiple sources in the first two months, I’ll probably keep it up. If I only get events from one source, I’ll probably stop, as it will begin to look like I’m doing marketing just for them, and I don’t want anyone to have the perception that my content is influenced by others.

The doors are open. Send your SOA, EA, and BPM events to me at soaevents at biske dot com. Include:

  • Date/Time
  • Subject/Title
  • Description (keep it to one or two sentences)
  • URL for more detail/registration
  • Discount code, if applicable

If you want to simply send an iCal event, that would probably work as well, and it would make it very easy for me to move it into Google Calendar. My first post will be on Feb. 1, the next on March 1, and so on. I will post events for the current month and the next month, and may include April depending on how many events I receive.

What happened to the OS?

As I listened to some rumblings on a podcast about the soon-to-be-released Microsoft virtualization product, a thought came to mind: what happened to the operating system?

This isn’t the first time that this thought, or something related to it, has popped into my head. When I was first exposed to virtualization technology, I had the same question. Isn’t it the job of the operating system to manage the underlying physical computing resources effectively? Of course, I also realized how much I ignored the purpose of the operating system in my work as a Java developer and designer. I had become so embedded in the world of Java application servers that I had a difficult time getting my head back on straight when trying to guide some .NET server side development. I kept trying to find something to equate to the Java application server that sat on top of Windows Server, when in fact, Windows Server is the application server. After all, operating systems manage the allocation of resources to system processes, which are applications.

Now this isn’t a post to knock virtualization technology. There are many benefits that it can provide, such as the ability to move applications around independent of the underlying physical infrastructure (i.e. they’re not bound to one physical server). But, if we look at the primary benefit being touted, it’s better resource utilization. Before we jump to virtualization, shouldn’t we understand why the existing operating systems can’t do their job to begin with? If our current processes for allocating applications to operating systems are fundamentally flawed, is virtualization technology merely a band-aid on the problem?

Based on my somewhat limited understanding of VMs, you can typically choose between dedicating resources to a VM Guest, allowing the VM Guest to pull from a shared pool, or some combination of the two. Obviously, if we dedicate resources completely, we’re really not much better off than physical servers from a resource utilization standpoint (although I agree there are other benefits outside of resource management). To gain the potential for improved resource utilization, we need to allow the VM Host to reclaim resources that aren’t being used and give them to other processes. At this extreme, we run the risk of thrashing as the VM Guests battle for resources. The theory is that the VM Guests will need resources at different times, so the theoretical thrashing that could occur, won’t. So, we probably take some hybrid approach. Unfortunately, we still now have the risk of wasting resources in the dedicated portion.
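To illustrate the trade-off, here’s a toy model (not how any particular hypervisor actually schedules) of that hybrid approach: each guest gets its dedicated reservation, then competes for the leftover shared pool in proportion to its unmet demand:

```python
def allocate(guests, total_capacity):
    """Toy model: reservations are honored first, then the leftover
    shared pool is split in proportion to each guest's unmet demand."""
    reserved = sum(g["reservation"] for g in guests)
    shared_pool = total_capacity - reserved
    unmet = {g["name"]: max(0, g["demand"] - g["reservation"]) for g in guests}
    total_unmet = sum(unmet.values())
    allocation = {}
    for g in guests:
        extra = 0.0
        if total_unmet > 0:
            extra = shared_pool * unmet[g["name"]] / total_unmet
        allocation[g["name"]] = g["reservation"] + min(extra, unmet[g["name"]])
    return allocation

guests = [
    {"name": "web", "reservation": 2, "demand": 6},
    {"name": "db",  "reservation": 4, "demand": 4},
]
# With 8 units of capacity, 6 are reserved, leaving a shared pool of 2.
# "web" claims the whole pool but still falls short of its demand of 6.
print(allocate(guests, 8))
```

Even in this trivial model you can see both failure modes: reserve too much and the shared pool shrinks toward zero (wasted capacity), reserve too little and the guests fight over the pool (thrashing).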

The real source of the problem, in my opinion, is that we do a lousy job of understanding the resource demands of our solutions. We use conservative ballpark estimates, choose some standard configuration and number of app servers, do a capacity test, and send it out into the wild. Once it’s in the wild, we don’t collect metrics to see if the real usage (which is probably measured in site visits, not in resources consumed) matches the expected usage, and even if it comes in lower, we certainly won’t scale back, because we now have “room to grow.” If we don’t start doing this, we’re still going to have less than optimal resource utilization, whether we use VMs or not. I don’t believe that going to a 100% shared model is the answer either, unless the systems get much more intelligent and take past trends into account in deciding whether to take resources away from a given VM guest or not.
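That missing feedback loop could be as simple as comparing peak observed utilization against what was provisioned and flagging the candidates for scaling back. A sketch, with made-up field names and an arbitrary threshold:

```python
def scale_back_candidates(apps, headroom_threshold=0.5):
    """Flag apps whose peak observed utilization leaves more than
    headroom_threshold of their provisioned capacity unused."""
    candidates = []
    for app in apps:
        # Utilization samples are fractions of provisioned capacity.
        peak = max(app["observed_utilization"])
        if 1.0 - peak > headroom_threshold:
            candidates.append(app["name"])
    return candidates

apps = [
    {"name": "portal",  "observed_utilization": [0.20, 0.35, 0.30]},
    {"name": "billing", "observed_utilization": [0.70, 0.85, 0.60]},
]
# "portal" never uses more than 35% of what it was given.
print(scale_back_candidates(apps))
```

The hard part, of course, isn’t the arithmetic; it’s actually collecting the utilization samples in the first place.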

Again, this post isn’t a knock on virtualization. One area that I hope virtualization, or more specifically, the hypervisor, will address is the bloat of the OS. Part of the resources go to the operation of the OS itself, and one can argue that there are a lot of things in it that we don’t need. While we can try to configure the OS to turn these things off, effectively a black-listing approach (everything is on unless it appears on the black list), I prefer the white-list approach: start with the minimal set of capabilities needed (the hypervisor) and then turn other things on as you need them. I expect we’ll see more things like BEA’s WebLogic Virtual Edition that just cut the bloated OS out of the picture. But, as I’ve said, it only gets us so far if we don’t do a better job of understanding how our solutions consume resources from the beginning.

Music, Technology, and the Mind

James McGovern asks, “Are folks who are left-handed smarter?” Well, since I’m right-handed, my answer should be no. But then again, I don’t view “handedness” as an absolute, but rather a continuum. There are a lot of left-handers in my family, as well as a certain amount of ambidextrous behavior. I can do several things left-handed (although writing isn’t one of them).

Anyway, James’ post reminded me of an observation that a former boss made once. He was wondering whether there was a correlation between musical ability and technical ability. He noticed that nearly all of our strong technical guys had some talent in music, whether it was singing, playing an instrument, composing, etc. While we never tested his hypothesis, it’s still very interesting.

I’ve always been interested in psychology, social dynamics, etc. and one of the interesting things that I’ve thought about in my own life is how I wound up in the College of Engineering at the University of Illinois, something that most people would associate with a “left-brained” mentality, while my brother went on to become an artist, including a stint with Disney Feature Animation until they closed the Orlando studio, a very creative “right-brained” discipline. The common thread between us, at least in my opinion, is that we were both able to bring thinking associated with the other half of our brain into our daily work. In my brother’s case, one thing he’s been complimented on is his ability to create precise illustrations, etc. without the use of a reference. I can only surmise that his “left brain” analytical skills have something to do with this. In my case, while I am very analytical and logical, I’m also a very big picture thinker, a “right brain” activity. Interestingly, musical ability is associated more with “right brain” thinking.

So, if the hypothesis holds, the best people are the ones who are able to use both parts of their brains effectively. It is certainly a possibility that left-handers are able to do this more often because they have to live in a right-handed world, so they’ve always had to use the other half of the brain, whereas most right-handed people have not. Who knows, but it’s fun to ponder.

What is Reference Architecture?

A previous post discussed how no two organizations seem to define the role of the architect the same way. It just occurred to me that the same thing holds true for another common deliverable of the enterprise architect: the reference architecture. I’ve had the opportunity to work on a few reference architectures with a few different companies and I can certainly say that no two were the same, although there was certainly commonality between them.

There are at least two questions that have come up in multiple efforts. The first is whether a reference architecture should recommend specific technologies (e.g. “if you meet these conditions, you should use vendor product MagicFooBar”) or only describe the capabilities required, leaving implementation teams to choose a technology that fits (e.g. “the services tier must be exposed using XML-based interfaces”). The second question is one of depth. How much guidance should a reference architecture give? Should it go so far as to place constraints on how individual classes are named, or how the file hierarchy is set up in the source code repository? Or should it remain at a level somewhere above the design of individual classes? It may seem simple at first glance, but I can speak from experience that once you start providing some guidance, it’s very easy to get pulled into the depths.

What’s the right answer to these questions? I don’t think there is one. Why? Because reference architecture is about guidance, and the guidance needed in any one particular organization is going to be dependent on the skills of the staff, the organizational structure, the technologies involved, the problems being solved, etc. So what do you do? In my opinion, a good course of action is to focus on delivering guidance that is singular in purpose, at least within a single deliverable. What does this mean? It means that you should avoid trying to answer all of the questions within a single (and likely very large) document. Rather, each deliverable should start with one simple question and focus on answering that question clearly. For example, rather than having one big reference architecture that attempts to cover all possible business solutions, which could easily include web UIs, desktop UIs, mobile UIs, workflows, services, automated processes, application portals, content portals, collaboration portals, and much more, consider having a separate document for each one, forming essentially a hierarchy of documents. The earlier documents discuss things more broadly, intending to answer the question of what collection of technologies is needed for a solution, but not necessarily giving guidance on the appropriate way to use each technology. For that, once the decision to use a particular technology has been made, a separate document goes into added depth, and the process continues. The reference material will eventually work its way down to development and operational guidelines (or hook up with those that may already exist).

Of course, all of this is easier said than done. It’s not easy to determine the “right” hierarchy up front. Again, the litmus test to apply is whether the document has clarity and singularity in purpose. If you find yourself with a document that starts getting onerous to use, consider breaking it apart.

What are others’ thoughts on this? As always, this is simply my opinion (after all, this is my blog, so that’s what you’re going to get), but I am always interested in the thoughts of others and striving for a more collective wisdom. What has been successful for you when trying to provide reference guidance to teams? What hasn’t been successful? Use comments or trackback.

Update: I was just checking my feeds and there were two other blogs that discussed reference architectures: this one from George Ambler and this one from Simon Brown. Enjoy.

Happy Acquisition Day!

Wow, I was very surprised to come into work to find out that Oracle had agreed to acquire BEA and that Sun had agreed to acquire MySQL. Based on the reaction of many in the blogosphere, like Miko Matsumura, whose Facebook status said he was “amazed by sun buying MySQL and yawning at Oracle buying BEA,” not many people were surprised by the Oracle/BEA deal. I actually was a little bit surprised, simply because it felt like BEA would stick it out as an independent, and it didn’t appear that Oracle was taking its “we will get what we want no matter how long it takes” approach as it did with PeopleSoft. That’s about all the comment I’ll make on this, as I try to avoid too much speculation in the vendor space on this blog, but this was too big of an event not to mention. While I chose not to do 2008 predictions this year, all of those people who picked the no-brainer of more vendor acquisitions in the SOA space can check this one off their list only 16 days into the year.

The other acquisition was surprising in that not many people probably saw it coming. While not in the SOA space, it certainly shows that open source is becoming more and more ingrained in Sun. This is one that’s going to take some time to digest in the blogosphere to see what people think about it. I think being associated with a bigger name can only help MySQL. As for the benefits to Sun, we’ll see. I’ll be interested to see what others have to say about it.

The Roles of the Architect

One thing that I have noticed over the past few years of my career is that no two companies define the role of the architect the same way. Personally, I view this as neither good nor bad, and this isn’t going to be a post on the various architect certification efforts that exist out there. While it may make it more challenging to find qualified candidates when the definition of the role changes from company to company, once you’re in a company, clarity of the specific responsibilities at that company is certainly more important than whether or not you’re doing the same work as Joe or Jane Architect down the street. As another aside, I also think that those job responsibilities should serve as a guide, but not a barrier. All organizations require some flexibility in what people do if they expect to get things done versus devolving into a finger-pointing episode of people saying, “That’s not my responsibility.” Which brings us to the first type of role.

The Gluer

In this role, the person with the lucky fortune to have the architect title is expected to handle all things technical. A typical software development project involves far more technical tasks than just writing code. Servers may need to be built, software installed, routers configured, etc. Put simply, the person playing this role is expected to be the glue that holds everything together from a technical perspective. You’re probably asking, “Isn’t this the job of the technical lead?” Well, the problem with the technical lead is that in most organizations I’ve seen, it was purely a role, and not a job title. Therefore, it was difficult to actually know who the technical leads were in an organization, get coverage across all projects, etc. Titles like senior developer or lead developer don’t cut it, because the emphasis is still on development, and not the other tasks involved with getting something into production. Unfortunately, giving them the title of architect is problematic. While it does allow an organization to establish some clear technical leadership, it’s likely that the individuals with this title will be so consumed with tactical project decisions that very little of their time will actually be spent on architecture.

The Scheduler

While I never would have guessed this one, I heard it mentioned at two different organizations that architects are supposed to act as project managers for technical activities, normally working in conjunction with the “real” project manager. I do think that all leaders should have some ability to perform basic project management activities, that is, to break a goal down into a set of constituent tasks on a timeline. There are certainly ties back to the gluer role, as this essentially takes the technical leadership a step further: in addition to identifying all of the technical activities, the person now has to manage all of them, rather than delegating this back to the project manager. Of all the roles, this one is the least desirable to me, because I don’t consider project management a strong point of mine. I’m admittedly a big-picture thinker, while most good project managers I know are extremely detail-oriented. The best working scenario for me personally is not to be the project manager, but to work side-by-side with one.

You’ll notice that both of these roles are very project-specific. If all of the architects in your organization are on project assignments, that’s a problem. Project, or solution, architecture is important, but you also need architects outside of projects, hence the next two roles.

The Decision Maker

This architect is normally not assigned to projects, but is still involved with the decision-making process within them. This person is likely viewed as the top of the technical hierarchy within some organizational or technical domain. If it’s associated with software development, I typically see it following organizational boundaries (e.g. an architect for all solutions developed by one development organization), while for other activities, it follows technical domains, such as a Security Technology Architect, a Networking and Communications Technology Architect, etc. Projects go to these architects when decisions need to be made or approved. The challenge I’ve seen with this role is the flood gate, especially when the title is first established. Many organizations don’t have a technical hierarchy, and as a result, it can be unclear who has the technical decision-making responsibilities. When someone is granted the title, the flood gates open, and every technical decision, even those that should be no-brainers, winds up coming to those deemed “architects.” The architect then winds up having all their time consumed by decision making, with no time left over for establishing a strategy and direction.

The Strategist/Policy Maker

This role sits at the opposite end of the spectrum from the decision maker. Architects acting in this capacity are the legislative branch of the government, focused on establishing reference architectures, policies, roadmaps, etc. I thought about breaking this into two roles, because there are plenty of architects who don’t do strategy, but in general, the perception from the enterprise is similar. There’s a significant risk of becoming an ivory tower in this role. Just as the decision maker gets sucked back into project activities, the strategist can become disconnected from the reality of projects.

So what’s right? Personally, I think an organization needs gluers, decision makers, and strategists. We already have project managers, and as I previously stated, I don’t think we need to break out “technical” project management as a separate discipline. Should the remaining roles all have the “architect” title? In my mind, there’s really only debate about one role, the gluer. Clearly, not all of the activities associated with technical leadership have something to do with architecture. At the same time, however, if it’s not clear whose responsibility it is, these technical concerns will bubble their way up to the “decision makers.” If it’s necessary to bless that role with a title like “Solution Architect” to avoid this scenario, then do it.

What other roles and responsibilities have others seen with architecture? What’s missing from the list?

IT as a Service Provider

The IT as a service provider discussion was brought back up by Joe McKendrick of ZDNet in this blog. In it, Joe made reference to a past blog of mine in which I stated my opinion that a customer/supplier relationship between IT and their end users in the business was a bad thing, and I still believe that. Joe’s post brought to light some small nuances on that opinion that need clarification.

In my original post, I stated that IT moving solely to a supplier model for the business is an invitation to be outsourced. If you’re simply an order taker, there’s no reason that someone else can’t take those orders. The value-add that an internal IT department provides is not technology expertise alone, but technology expertise combined with knowledge of the company. Any SaaS provider or outsourcing agency must provide services to a mass market, or at least a market with more than one customer, so it can never match that depth of company-specific knowledge.

Getting back to Joe’s post, the inference that could be made is that IT shouldn’t be a service provider. That’s not the case, however. The IT-Business relationship should be a partnership, but you can’t be a partner if you’re not providing good service. Understanding the services that you bring to the table and performing them well is critical to the relationship. The difference is that those services do not define the boundaries of the relationship. Instead, they bring structure and foundation to it, on which a partnership can be built. If your foundation is weak, the relationship will crumble. Therefore, adopting principles of service management within IT is a good thing; however, don’t approach it from the standpoint of competing against outside service providers. The decision to use outside providers should be made by the business, which includes IT. IT should be the one driving the discussion to say, “some aspects of our technology are really becoming commoditized and we can achieve some significant cost benefits through an external provider,” rather than being told, “you’re a commodity and we’ve outsourced you.”

In this sense, IT is no different than other business support organizations. Take HR as a good example. One could certainly argue that HR could be outsourced, as it provides commodity services that all companies need. Every large company I’ve been at still has an HR department, though. What is more the norm is to have HR working as part of the business to make good business decisions on what aspects of HR to outsource, and what aspects should remain within the company because there is value-add. Only someone within the company can really understand the corporate culture, which is critical to attracting and retaining talented individuals.

Be a partner in your business, but ensure that your partnership is on a solid foundation.

Comments on The End of The Application

I received a couple comments on my previous post, and rather than respond in the comments themselves, I thought I’d use another blog post to address them since I don’t think many people subscribe to the comments feed. Before I get into them, I also wanted to call out this blog post from Anne Thomas Manes of the Burton Group. Out of a discussion on the Yahoo SOA Group about the relationship between 3-tier and SOA, she posted this entry which included this statement:

I also expect that the concept of “application” is likely to go away. Why is it that we as systems users have to be constrained by the limits of this artificial boundary called an application? Why do I have to shift my focus among multiple applications? Why do I have to copy data from one application to another? Why do I have to launch this stupid browser or that stupid application to get my work done? Why isn’t everything just accessible to me from my desktop (via widgets and gadgets) or from within my preferred operating context (e.g., email)?

It was this that prompted me to put together my original post, since I couldn’t find an entry in my blog that was specifically dedicated to this topic, although I had mentioned it as part of other entries. I had meant to include a link to Anne’s post in my first entry and forgot.

The first comment came from Brian “Bex” Huff who stated:

People understand the term “application,” but the word “solution” is a bit too nebulous.
Applications stand alone… what good is a widget to view an Excel doc, without the Excel application to create the doc in the first place?
I agree that IT should always *think* in terms of dynamic, evolving “solutions”… but the basic building blocks still include “applications”… as well as toolkits, frameworks, libraries, etc.

Bex actually made a statement that is indicative of the current culture: “Applications stand alone.” My opinion is that applications shouldn’t stand alone. Why shouldn’t we have the ability to present a UI component that can manipulate spreadsheets anywhere? Yes, there will always be cases where spreadsheet manipulation is the only thing we want to do, but there are also plenty of cases where embedded spreadsheet manipulation would be better for users. What will enable this is thinking in terms of capabilities, rather than in terms of applications. My opinion is that an application is the result of packaging, rather than a unit of composition.

The second comment came from Rob Eamon. He wrote:

There will always be a collection of components/services that have been designed to work together to provide some set of functionality. And there will always be multiple sets of these. And there will always be a need for these sets to interact, possibly through mediation.
Disaggregating application components and making them independently accessible in many contexts seems very appealing.
But take the issues that the typical application development/support team faces and blow that up to the enterprise level. The fragile nature of such an approach will inherently stop it from becoming a reality.
IMO, the application is still a useful and reasonable building block and allows us to break down the enterprise solution space into manageable chunks.
Some of the application components might be useful as independent entities in a larger setting, but I don’t think approaching everything that way is a wise course. IMO, it would inhibit flexibility rather than promote it (counter-intuitive as that may be).

I wish Rob had gone into more detail on the “issues” that the typical application development/support team faces, because I can only guess what they may be. My first guess is that application teams currently have to deal with ever-changing requirements and needs. If you increase the number of parts, you now have smaller components with new relationships associated with their use, and if we don’t manage them well, chaos will ensue. It should be noted that I’ve never said that this type of shift would be easy; if anything, I’d say it’s going to be incredibly difficult. I’ve reflected on this in the past, specifically in this post, and wonder if it will take some company approaching IT like this from the beginning, without any baggage, to create the impetus to change.

Rob went on to state that in his opinion, the application is still a useful and reasonable building block. Here’s where I disagree, for the same reasons I gave in response to Bex’s comments. An application is typically a “packaging” exercise, and those packages aren’t what we want for composition. The only part of an application that still has significant potential for being a “stand-alone” entity is the UI. I’d be happy to see an IT organization that makes an organizational/funding separation between development of UI code and development of the services used by the presentation tier, much as Jeff Schneider suggested in this post from early last year.

Where I’ll agree with Rob is that this is not a change for the faint of heart. If a new CIO came in and reorganized along these lines, the chaos that might ensue could certainly result in the perception that IT is even less agile than before. So, this isn’t a change that will occur in a year. This is a gradual, evolutionary change that will take time, and it will only happen if we’re committed to making it happen. I think a key to that is to get away from the application mindset.

Thanks to Rob & Bex for the great comments, and I’d love to hear from others whether you agree or disagree with my opinions.

Disclaimer
This blog represents my own personal views, and not those of my employer or any third party. Any use of the material in articles, whitepapers, blogs, etc. must be attributed to me alone, without any reference to my employer. Use of my employer’s name is NOT authorized.