Gartner AADI: SOA Governance in Action

I’m sitting in a case study session on SOA Governance. Barney Sene, VP and CTO of Ingram Micro, is currently speaking and just had an interesting slide up. Because they are a global company, one of the things they always do is examine any proposed service (documented through a Service Requirements Specification, or SRS) for specialization that may be required to meet global requirements, versus the context in which that SRS may have been created. I now work for a global organization and clearly understand the need for this. I have firsthand knowledge of a business service that will have differences in how it must operate based on the geographic regions involved. Barney’s slide demonstrated how the SRS provided a significant percentage of the final definition, with other factors like globalization accounting for the remaining 20%.

From a governance perspective, what I liked about this was that it presented an approach that didn’t mandate, but rather encouraged appropriate behavior. If the company has stated “We are a global organization” rather than acting as a collection of regional organizations, then every project should be asking the question, “Are there any regional nuances that must be factored into the architecture and design of what I’m producing?” This example also got me thinking about what other factors should be explicitly encouraged in this same way, simply because people aren’t used to thinking about them. Globalization makes perfect sense. Security also makes sense. We have to realize that many designers and developers are primarily concerned with the functionality at hand, and need some encouragement to think about factors “outside of the box” that may influence both the non-functional areas and the functional areas. Because this is a shift in thinking, it must be encouraged first, enforced second. Dangle the carrot, and only bring out the stick when necessary. What other items belong on the list that can take a service from an 80% solution to a 100% solution?
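
To make this concrete, here is a minimal sketch of what an encouragement-oriented review might look like: a checklist of cross-cutting questions applied to each SRS. The factor names and structure are my own illustration, not Ingram Micro’s actual process.

```python
# A minimal sketch of an SRS review checklist for cross-cutting concerns.
# The factors and wording here are illustrative, not Ingram Micro's list.

CROSS_CUTTING_FACTORS = {
    "globalization": "Are there regional nuances (currency, language, regulation)?",
    "security": "Who may invoke this service, and how is access controlled?",
    "compliance": "Do any regulatory regimes constrain the data involved?",
    "availability": "What are the uptime expectations across regions and time zones?",
}

def review_srs(srs_answers):
    """Return the factors the SRS has not yet addressed."""
    return [(factor, question)
            for factor, question in CROSS_CUTTING_FACTORS.items()
            if not srs_answers.get(factor)]

# Example: an SRS that has only considered security so far.
for factor, question in review_srs({"security": "Role-based access via the directory"}):
    print(f"Unaddressed: {factor} - {question}")
```

The point of the structure is the same as the one on Barney’s slide: the checklist prompts the question rather than dictating the answer.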

Gartner AADI: Patrick Lencioni Keynote

On Wednesday, Patrick Lencioni, author of The Five Dysfunctions of a Team and other management fables, gave the morning keynote. It was an excellent talk. Having read both Five Dysfunctions and Silos, Politics, and Turf Wars, I recommend picking up all of his books if you have an interest in making your organization work better (and you should). I picked up his latest book, The Three Signs of a Miserable Job, along with Death by Meeting, after his session and had a nice conversation with him about some experiences.

My EA Classification

James asked some of us EA bloggers to try to classify what we do on a daily basis. My breakdown is this, although as I grow in my new job, I would expect this to shift a bit:

  • 30% Domain Expert
  • 30% Analyst/Strategist
  • 30% Technology Expert
  • 10% Problem Resolution/Mediation

Note that I used the term “problem resolution/mediation” rather than “troubleshooting.” That was done intentionally to differentiate between troubleshooting operational systems (i.e., production support) and mediating disagreements between teams, which is far more common for my role. I also added “strategist” after “analyst,” meaning that I’m doing analysis for the purpose of setting strategic direction. It’s interesting to think about how this will change over time, and about the blurring across these categories. For example, is being an expert in a domain a prerequisite for being a strategist? It certainly helps, but there are people who simply have a good sense for trends and can easily see where the adoption challenges will typically be, regardless of the domain. I’ll make a mental note to revisit this classification a year from now.

Why don’t more IT managers read blogs?

It has been a couple of years since I attended a Gartner Summit, and being here again reminds me of past experiences. Many of the subjects are presented at a very high level (which is appropriate for the audience at a Gartner conference), and many are ones that I’ve spent a lot of time researching on my own. I’ve been trying to attend more sessions this time on subjects that I don’t know much about, so the experience has been better. What I’m wondering, however, is why more of the people attending (and there are easily more than a thousand of them) don’t read my blog and those of my colleagues in the industry to pick up this knowledge (in addition to attending conferences like Gartner Summits) and, even more importantly, ask questions. Yes, reading my blog doesn’t allow you to travel to some interesting location, but I like to think that the content I provide is appropriate and meaningful.

I hear many questions getting asked over and over at these conferences, and it’s apparent that there’s a very large group out there that simply doesn’t look for the answers to their questions beyond these conferences. Rather than look for the information, they look for an information source. Guess what, folks: this is the information age, and there’s a wealth of it out there. I built up my knowledge not only with research from Gartner, Forrester, Burton Group, ZapThink, and others, but also by reading whitepapers, articles, blogs, and anything else I could get my hands on. You can do it as well. Gartner, my blog, conferences, webinars, etc. are all sources of information, and the most important thing is to find information that hits home for you and seems applicable to your environment. If it’s one case study instead of one hundred, it doesn’t matter, as long as it is relevant to your scenario, not someone else’s. Gartner is an excellent source of information, and I like to think that this blog is as well. Use both, and then some, and let’s keep things moving forward so that at next year’s Gartner conferences we’re talking about new questions rather than rehashing the same ones.

Gartner AADI: Information Architecture and BPM

I attended a session from Michael Blechar titled “The Yin & Yang of Processes and Data: Which Will Be King of the Next Generation Applications?” A big title, and as a result, Michael covered a lot of topics. While there wasn’t any new information for me, I would say that this was one of the better big-picture overviews presented at the conference. Some of the topics he hit on:

  • The challenge of multiple metadata systems, and how some of the vendors are approaching it, in particular IBM. He specifically touched on CMDBs (Configuration Management Databases), Service Registry/Repository, Software Development Assets, Database Metadata Management, and more. IBM’s approach is to provide a collection of metadata services that reside above all of these systems, providing a federation layer (a rough sketch of this idea follows the list). Hmmm… this sounds vaguely familiar; I think I called it Master Metadata Management and spoke more on it here.
  • The challenges in determining when to keep logic close to the data versus pushing it up into a service layer.
  • The importance of information modeling, particularly within SOA.
  • The importance of service ownership/stewardship.
  • The importance of enterprise architecture operating as a team, and not having silos of business architecture, technology architecture, and information architecture that don’t talk to each other.
Overall, this was probably a good session for many people, and hopefully it helped them see a bit more of the forest for the trees.
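
To illustrate the federation idea from the first bullet, here is a rough sketch of a thin metadata facade that queries several underlying systems and merges the results. The classes are hypothetical stand-ins, not IBM’s actual product APIs.

```python
# A rough sketch of a metadata federation layer: one query point over
# several metadata systems (CMDB, service registry, etc.). All names
# here are illustrative.

class MetadataSource:
    def __init__(self, name, records):
        self.name = name
        self._records = records  # {asset_id: {attribute: value}}

    def lookup(self, asset_id):
        return self._records.get(asset_id, {})

class MetadataFederation:
    """Single lookup point layered above multiple metadata systems."""
    def __init__(self, sources):
        self.sources = sources

    def lookup(self, asset_id):
        merged = {}
        for source in self.sources:
            for attr, value in source.lookup(asset_id).items():
                # Qualify each attribute with the system it came from.
                merged[f"{source.name}.{attr}"] = value
        return merged

cmdb = MetadataSource("cmdb", {"order-svc": {"host": "prod-app-01"}})
registry = MetadataSource("registry", {"order-svc": {"endpoint": "http://example.com/order"}})
print(MetadataFederation([cmdb, registry]).lookup("order-svc"))
```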

Gartner AADI: Funding SOA

The Power Breakfast is over and done. If you saw me there, hopefully you got something out of the conversation, although it was unfortunate that the session wound up being four independent case studies without a lot of time for audience questions, rather than a discussion among the panelists and the audience. Despite this fact, there were some good points that came out, and I wanted to highlight the ones that struck home with me here:

  • First, as the preliminary results of the SOA Consortium funding survey have indicated, there’s no uniform initial approach to SOA. Some leverage an SOA program, some expect service development out of existing projects, etc. One of the panelists, Leo Schuster from National Bank, indicated that they had an SOA Program. In my experience, I’ve never encountered a funded program whose sole goal was to produce SOA. I’ve always worked with business projects/programs that wanted to leverage services as part of their efforts. Personally, I prefer the latter, because as Victor Harrison from CSC pointed out, programs end. Will your SOA efforts ever stop? They shouldn’t. So, while you may have solved the initial funding problem, you haven’t solved the ongoing funding problem.
  • Second, metrics are critical. Every single one of us mentioned metrics. Without them, how can we say that we’ve accomplished anything? The oft-mentioned notion of agility is frequently associated with the time to deliver a particular solution, so to show that we’re more agile, we need metrics of how things have been and how things are going forward. Beyond efficiency metrics, there are also metrics associated with service consumption: number of consumers, usage by consumers, min/avg/max response time, etc. (a minimal sketch of these follows the list). In my own experience, merely making these metrics available, now at a finer level of granularity than the entry points into a web application, was very beneficial. This visibility didn’t exist before, so anything was better than nothing. If you’re already collecting this, you can start to do baseline comparisons to show improvement.
  • Third, service ownership is key. It was great to hear someone else say that the project-based culture of IT can be an impediment to SOA success. While your SOA shouldn’t have a lifecycle, individual services do. If you simply have someone who is responsible for service lifecycle management, you’ve got the key piece of the puzzle for many of the discussions around service granularity, funding, etc. If you don’t have a service owner independent of the consumers, then each consumer will try to push things in the direction that has the most benefit for their project, yet no one will own the service. This in-fighting can be very detrimental to SOA.
  • Finally, let’s get Service Portfolio Management (SPM) right from the get-go. There are many sessions here at the summit concerned with application portfolio management. We’re all behind the eight ball there, as all of these applications have been built or bought over 5, 10, 15, 25, … years without anyone managing them as a whole. The last thing we want to do is repeat that process with services. We’re at the early stages of SOA and can do it right this time. Anyone saying that they don’t need a registry/repository yet because they haven’t reached “critical mass” is potentially making a big mistake. They only need to look at their struggles with application portfolio management to see the path they are heading down.
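
On the metrics bullet above, here is a minimal sketch of what basic consumption metrics might look like: per-service, per-consumer call counts and min/avg/max response times. It is illustrative only; in practice these numbers would be gathered by an intermediary or the service container, not in application code.

```python
# A minimal sketch of service consumption metrics: call counts and
# min/avg/max response times per (service, consumer) pair.

import time
from collections import defaultdict

class ServiceMetrics:
    def __init__(self):
        self.calls = defaultdict(list)  # (service, consumer) -> [elapsed, ...]

    def record(self, service, consumer, elapsed):
        self.calls[(service, consumer)].append(elapsed)

    def summary(self):
        for (service, consumer), times in sorted(self.calls.items()):
            print(f"{service} <- {consumer}: {len(times)} calls, "
                  f"min={min(times):.3f}s avg={sum(times) / len(times):.3f}s "
                  f"max={max(times):.3f}s")

metrics = ServiceMetrics()
for _ in range(3):
    start = time.perf_counter()
    time.sleep(0.01)  # stand-in for an actual service invocation
    metrics.record("CustomerLookup", "OrderEntryApp", time.perf_counter() - start)
metrics.summary()
```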

To those who got up early for breakfast and attended, I hope you found some valuable nuggets in the presentation, whether from me or from the other speakers. I’d be happy to have follow-up conversations with anyone who feels their questions are still unanswered, or who wants more depth on some of the things they heard.

Gartner AADI: Maturity Assessment

I attended a session on assessing maturity in IT given by Susan Landry. She outlined Gartner’s maturity model, which covers eight different dimensions. The maturity model has familiar levels if you’ve looked at my work, CMMI, or COBIT: level 1 is ad hoc, level 2 is repeatable, level 3 is defined, level 4 is quantitatively managed, and level 5 is optimizing. The eight assessment areas are:

  1. Application Portfolio Management (APM)
  2. Project Portfolio Management (PPM)
  3. Staffing, Skills, and Sourcing
  4. Financial Analysis and Budgets
  5. Vendor Management
  6. Management of Architecture
  7. Software Process
  8. Operations and Support

The most interesting part of the discussion, however, was a single slide that discussed the interdependencies between these areas. For each pair of areas, the relationship was classified as either highly or moderately interdependent. Having done a multi-dimensional maturity model before, I know that a big challenge is determining whether or not it makes sense to score high in one dimension and low in another. In my SOA maturity model, I typically found that when scores were two or more levels apart, it was probably going to cause some imbalance in the organization. If the scores were even or a single level apart, it was probably workable. What I didn’t do, however, was explore those inter-relationships and see if that theory uniformly applied. While Gartner didn’t provide a lot of depth on the inter-relationships, they did at least make a statement regarding them, which can provide a lot of assistance in determining how to interpret the scoring and what actions to take.
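
As a concrete illustration of that two-level heuristic, here is a small sketch that flags pairs of assessment areas whose scores are two or more levels apart, noting whether the pair is highly or moderately interdependent. The scores and the interdependency ratings below are made-up examples, not Gartner’s actual data.

```python
# Flag maturity-score imbalances between assessment areas. All data here
# is invented for illustration.

scores = {
    "APM": 2, "PPM": 3, "Staffing": 2, "Financials": 4,
    "Vendor Mgmt": 3, "Architecture": 2, "Software Process": 3, "Operations": 4,
}

# Pairs treated as highly interdependent (illustrative, not Gartner's slide).
highly_interdependent = {
    frozenset(("APM", "PPM")),
    frozenset(("Architecture", "Software Process")),
}

areas = list(scores)
for i, a in enumerate(areas):
    for b in areas[i + 1:]:
        gap = abs(scores[a] - scores[b])
        if gap >= 2:
            level = "high" if frozenset((a, b)) in highly_interdependent else "moderate"
            print(f"Potential imbalance: {a}={scores[a]} vs {b}={scores[b]} "
                  f"({level} interdependence)")
```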

Gartner AADI: Agility and SOA

In this session, Frank Kenney and Val Sribar are discussing agility and application development in the context of a fictitious fable about a web unit in a business organization that operates in an agile manner and how it works with the traditional application development organization. While I find the fable a bit far-fetched, since I’ve yet to see a business unit pushing agile processes on IT (it’s usually the opposite), there is one key point that they made. They called out that not all business units may be ready for an agile approach, simply because they don’t change that rapidly. Hopefully, this will be a common theme throughout the conference. Any approach that claims to be one-size-fits-all is probably going to run into problems. Whether it is agile, REST, Web Services, SOA, EDA, etc., there are always scenarios where it works well and scenarios where it doesn’t. A key to success is knowing not only how to apply a particular technique, but when to apply it.

Further into the presentation, they went through some different areas and discussed how each contributes to agility. On the subject of business process actions, they called out that there should be a single business person clearly in charge of a particular process. I think this applies not only to processes, but to services as well. There needs to be a service manager for each service in the organization. An individual may manage multiple services, but without a service manager, there will be problems.

On the subject of reuse, Frank Kenney pointed out that incentives, metrics, testing, methodology, and inventory management (registry/repository) are all necessary to make it a reality. I think this is a good list, although I would have liked to see marketing included. A registry/repository and methodology focus on the consumption side; if a service or reusable asset is placed out there in a way that makes it difficult to find, the burden is on the provider to improve that, which involves marketing.
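
Here is a sketch of what that might mean for the inventory management piece: a registry entry that carries enough descriptive metadata for a would-be consumer to actually find the asset. The field names are my own illustration, not any particular registry product’s schema.

```python
# A sketch of a registry entry with findability ("marketing") metadata.
# Field names are illustrative only.

from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    name: str
    owner: str               # the service manager accountable for it
    description: str         # a plain-language pitch, not just a WSDL
    keywords: list = field(default_factory=list)
    endpoint: str = ""

    def matches(self, term):
        term = term.lower()
        return (term in self.name.lower()
                or term in self.description.lower()
                or any(term in k.lower() for k in self.keywords))

catalog = [
    RegistryEntry("CustomerLookup", "Jane Doe",
                  "Finds a customer record by name, phone, or account number.",
                  ["customer", "search", "CRM"],
                  "http://example.com/services/CustomerLookup"),
]

print([entry.name for entry in catalog if entry.matches("customer")])
```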

Near the end of the presentation, Val and Frank presented an “Application Group of the Future.” Val called out breaking out groups for User Experience, Business Processes, Business Information, an Integration Competency Center, Business Relationships, Software Quality, and Information Infrastructure. These groups would all operate in an agile manner. In addition, the traditional development groups would still exist, and all of these groups would leverage common environments to support their efforts. There was one thing about this picture that I disagreed with, which was the placement of SOA. They sprinkled the entire organization with SOA experience, which makes sense, but they didn’t call out the separation between organizations focused on service consumption and organizations focused on service production. If anything, service production (unless it happened to be an automated business process) was buried in the traditional development group, and I don’t think that will cut it. On the bright side, they called out something very interesting: the need for dynamic budgeting. Early in the presentation, Val commented that web and mobile applications are challenging the way we book business. The need for dynamic budgeting is an analogous example of how a new, agile IT challenges the way we traditionally do budgeting. Annual budgets won’t cut it; we’ll need to make budgeting more dynamic and responsive in real time.

Gartner AADI Keynote

I’m in the keynote for the Gartner Application Architecture, Development, and Integration Summit. So far, the biggest thing the panelists have mentioned is an overlay on the traditional Gartner hype cycle that defines things as in the “Zone of Competitive Leadership (ZCL),” the “Zone of the Mainstream (ZM),” and the “Zone of Decline (ZD).” If not apparent, things in the ZCL are higher risk but higher reward. Things in the ZM are necessary for companies to adopt simply to keep up with their competitors. Things in the ZD are for the most risk-averse. Yefim Natis put client-server architectures in the Zone of Decline, as an example.

Yefim also called out that this coming year, “Interactive” SOA (which is Gartner’s “traditional” SOA) will move into the Zone of the Mainstream. I’ll have to go back and look up what their definition of interactive SOA is, but unless it’s just the standard integration-based approach to building the same old solutions, I’d be hesitant to say that SOA is mainstream. Using services where we were previously using EJBs or other distributed components clearly is mainstream at this point, but until we see more projects that are strictly about building services, and a separation between the construction of consumers and the construction of services, I think we’re still in the ZCL. Yefim also talked about other types of SOA, including Event-Driven SOA, WOA, and Context-based SOA, which he’ll cover in a later session.

There was then some brief discussion about the SOA platform, and now we’ve moved on to a discussion from Gene Phifer about Web 2.0, Cloud Computing, Computing as a Service, Social Networking, etc. It’s nice to hear them giving a decent amount of attention to this. Nothing new in the quick overview, but keynote-level attention is a good thing.

Finally, Matthew Hotle is wrapping up with a discussion about governance. They’re emphasizing a very lightweight approach, with just enough decision support to ensure that the organization can still be agile. There’s a session on their Maturity Assessment for Application Development later in the day that I plan on attending. From what they showed in the keynote, it’s a five-level maturity approach that sounds very similar to what I’ve discussed in the past (here, here).

More to come from the conference; I hope to be blogging throughout.

In Las Vegas

Just a reminder, I’ll be part of two panel discussions in Las Vegas next week, the first at the Gartner Application Architecture, Development, and Integration Summit on Tuesday morning, the second at the Gartner Enterprise Architecture Summit on Wednesday afternoon. If you’re attending and want to chat or simply introduce yourself, stop by or drop me a line at todd at biske dot com.

Dish DVR Upgrade

Last night, my wife asked me how to get to the timers on our DVR from Dish Network. This was somewhat surprising, as we’ve had our DVR for over 6 years now, and I knew that my wife knew where the appropriate menus were. It turns out that Dish Network upgraded the software on their DVRs sometime between Monday night and Tuesday night. While the upgrade added some great features that bring the DVR much closer to the technology that TiVo provides, the way it happened was a good case of poor customer service.

Focusing in on the bad news: the fact that this software upgrade was occurring was unknown to us. Dish Network has my email address, and it’s associated with my account, so there’s no reason why I couldn’t have received an email telling me that this upgrade was going to occur. I have no idea if there were any messages about the upgrade on channel 101 or any of the other channels that provide help about the Dish Network hardware, but that would have been a pretty poor way of communicating it, because I can’t remember the last time I tuned to those channels. The bigger problem, however, is that the upgrade wiped out all of the timers on one of our two DVRs (the second DVR still had its timers). This meant that the show we normally record on Tuesday nights didn’t get recorded. The fact that the little red recording light wasn’t on when it should have been was what clued us in. Had we not happened to notice it, we probably would have lost Wednesday night’s programs as well. So, the impact was relatively minor, but still inconvenient.

On the bright side, Dish has finally upgraded their software so that it is more resilient to programming schedule changes. It used to be that recording was strictly time- and channel-based. While it would capture the name of the show currently scheduled to be on, if that show got moved, you’d miss it. This also meant resetting all timers at the start of each new season for shows that changed their time slot. Now, they properly leverage the program guide, so you can select shows by name or by keyword, choose whether you want only new episodes or all episodes, and it will automatically create recording events regardless of when the show is on in any particular week. About the only problem that still isn’t resolved is what to do with shows that follow sporting events. I’m a big fan of The Amazing Race, which is scheduled to air at 7 PM Central Time. This past Sunday, it didn’t start until almost 8 PM due to the NFL game that was on CBS that afternoon. Because the program guide never updates to reflect these delays, the timer doesn’t shift accordingly. My solution to the problem is to record for 2 hours rather than 1. I suspect this will still be necessary, but I’ll find out the next Sunday that CBS has a late game.

Now if Dish would only allow me to pull the shows off the DVR via USB or other connection for playback on my iPhone or iPod, I’d be really happy. Until then, there’s always the Neuros MPEG4 Recorder.

Disrupted by SOA

In his links post, James McGovern had some comments on my Long Tail of Applications post. He stated:

Todd Biske wants to eliminate the term “application” as it implies a monolith. I would like to point out to Todd that there is another usage of the word that still remains important which primarily indicates a funding model. It is possible and viable to build a great SOA while still letting the finance folks think in terms of applications. Removing the term from architects is a great thing but very disruptive for other parts of the enterprise.

I’m glad James brought up the funding model aspect of this. As he rightly calls out, while the removal of the term “application” is probably not terribly disruptive for architects, it’s very disruptive for the rest of the organization. How many IT departments have organizations with “application development” in the title? How many project charters contain the words “…deliver an application…” in their text? If you agree with the premise that, in general, IT is not delivering at the level it could be, then something needs to change. We need a disruption. If that disruption ripples all the way up to the way projects are defined and funded, then so be it. This type of change doesn’t happen overnight, and it is not going to be without many bumps in the road.

Does everyone need this type of disruption? Perhaps not, but I think that it’s a very difficult thing to determine. If you’ve read The Innovator’s Dilemma or The Innovator’s Solution, you know what I’m talking about. While I can envision what an organization that has truly embraced SOA and BPM technologies might look like, it’s far more challenging to determine the path to that state, which can also leave you wondering whether it’s even necessary. Ultimately, it only becomes necessary when some unknown competitor embraces it and starts beating the pants off of you. Finding that first example that justifies trying a new way of doing things is hard enough; finding the second and the third to justify sustaining it is even tougher. I’m up for the challenge, though!

Test Driven Model Development

I had a great conversation with a colleague (thanks Andy) recently regarding BPM technology and the software development lifecycle. We were lamenting the fact that whenever a new development paradigm comes along, we run the risk of taking one (or more likely many) steps backward in our SDLC technologies.

Take, for example, test-driven development (TDD). (Aside: I enjoyed the latest ARCast on TDD from Ron Jacobs.) How do you unit test when doing model-driven development, as is the case with most BPM tools? What are the appropriate “units” to test? For that matter, how does a continuous integration model work? Can the models be stored in a source code management system like Java or C# code? Can I run automated builds? Does the concept of a “build” even apply anymore? I have a hard time believing that the use of these new model-driven tools somehow negates the need for testing and continuous integration. It would be great to hear more from some of the vendors in this space on how the best practices we’ve learned writing large systems in Java and C# apply when programming through pictures. I don’t know whether any of the vendors have given this any thought, but I have seen organizations run into some of these questions as they adopted BPM technologies. Anyone have some answers?
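
As one possible starting point for the unit-testing question, here is a hedged sketch that treats the exported process model itself as the artifact under test: parse the model file and assert structural expectations about it. The file name, the tag matching, and the expected task names are all hypothetical, since every BPM vendor’s export format differs.

```python
# A sketch: unit-testing a process model export. Everything specific here
# (file name, tag convention, task names) is hypothetical.

import unittest
import xml.etree.ElementTree as ET

def task_names(model_path):
    """Collect the name attribute of every task-like element, ignoring
    namespaces, since export formats vary by vendor."""
    root = ET.parse(model_path).getroot()
    return [el.get("name") for el in root.iter() if el.tag.lower().endswith("task")]

class OrderProcessModelTest(unittest.TestCase):
    MODEL = "order_process.bpmn"  # hypothetical export from the BPM tool

    def test_expected_tasks_present(self):
        names = task_names(self.MODEL)
        self.assertIn("Validate Order", names)
        self.assertIn("Check Credit", names)

    def test_every_task_is_named(self):
        # A model-level invariant: unnamed tasks are invisible to
        # monitoring and reporting tools.
        self.assertTrue(all(task_names(self.MODEL)))

if __name__ == "__main__":
    unittest.main()
```

Tests like these could live in source control next to the exported model and run in a continuous integration build, which would at least partially answer the build question as well.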

Update: InfoQ posted some similar thoughts under the heading, “Why do Java developers hate BPM?”

The importance of continual learning

James McGovern responded to my Certified Architect post with one of his own. He made a great point at the end of his entry which was something I wanted to discuss in my original post but didn’t find a way to work in. He stated:

I guess the point that I am attempting to make is that certifications are neither good nor bad, and it is important to look at each within the context of the role you expect this individual to play. It is my belief that certifications don’t prove hands on skills at all, but having multiple at least says that there is evidence as an Enterprise Architect that you have the ability to learn as well as the desire…

Personally, I think the ability to learn, and learn quickly, is a critical skill for an architect. While some may think that architects spend all of their time up in the clouds working on strategies and reference architectures, many of the architects I know are the go-to people when projects are struggling with decisions and direction. An architect needs to be able to quickly understand what it is that the project is trying to do, determine what the most critical factors are, and provide appropriate guidance. At other times, it may not be the tactical project, but rather a senior executive asking the question, “Should we be paying attention to any of this Web 2.0 and Social Networking hype?” The architect must quickly develop enough knowledge to make good decisions. Probably all architects will get stung at some point by some lower-level detail, but the good ones take it in stride because they’re always learning.

When I interview people, I always ask a question about how the individual stays current, and it’s amazing how many people struggle with that question. If you’re not taking the time to learn, you’ll get left behind. I hope the readers of this blog learn things, and that’s a big reason why I do it. It’s even better when I get comments, links, or trackbacks, because I’m able to learn as well.

Certified Architect?

I received a press release from The Open Group a couple of weeks ago regarding their IT Architecture Certification (ITAC) program. This is a subject that I haven’t discussed on my blog until now.

Personally, I’ve never been a huge fan of certifications. Back when I was assisting in the interviewing process for Java developers, I saw quite a few developers who were best described as “book smart.” That is, they had memorized sufficient amounts of the certification guides from Sun, but taken outside of that certification context, their ability to contribute value was questionable. It seemed to me that these certifications were really only of value to consulting/contracting firms, because they gave them some kind of a stamp to indicate that the people they were providing had some level of qualification. Of course, the organizations doing the certifying gained value as well, as people paid for the training, testing, etc.

When people discuss a need for certification, it ultimately comes back to a need to make the process of finding suitable candidates easier. In my experience in corporate IT, there’s always been at least one group that does pre-screening of candidates before they ever show up on the desk of the team performing the interviews. It’s likely that this includes HR, and also likely that it includes a number of contracting/placement firms. After all, the value proposition of these placement firms is that they provide “qualified” candidates, versus just opening the flood gates to resumes from the Internet. The real problem in all of this is that certifications are disconnected from how people work on a day-to-day basis.

Take the P.E. (Professional Engineer) designation as an example. In the context of a civil engineer, there are certain things that only a P.E. is allowed to do regarding designs; otherwise, the company producing that design puts itself at significant legal risk should that design fail. No respectable civil engineering firm is going to release designs that haven’t been signed off by a P.E. The same thing does not exist in the world of IT. Not only is there no standardized “sign-off” process, but there also isn’t any kind of liability (other than potentially losing your job) if you deliver an IT solution that fails.

If there is no standardization in the way that we perform the work, how can any of the potential value in certifications be realized? Where I see organizations and individuals struggle is where there’s a mismatch between the culture of the organization and the desired culture of the individual. I’ve seen people that are viewed as industry experts struggle mightily because the way that they like to operate simply doesn’t match with the way that the company operates. I’ve also seen teams that may not have had the strongest technical people be far more successful because they simply work better as a team.

At the architect level, my experience has shown that the successful architects tend to be the ones who excel at leadership, influence, and communication. They have to be strong technically, but technical knowledge alone does not make a successful architect. From a technical perspective, it’s probably even more critical that the architect can learn quickly and focus in on the critical aspects of the solution, rather than simply having significant technical depth and experience. How do we assess this? While technical knowledge can be tested, interpersonal skills are very subjective. On top of that, not only may two people judge interpersonal skills differently, but the aspects of interpersonal skills that an organization finds important will also vary. This makes it very difficult to come up with a certification process that will really add the value that the proponents claim is possible.

I won’t go so far as to say that certifications have no value, but in terms of IT solution development, they’re simply another tool. A blanket rule that sifts out any resume that doesn’t include certification Foo is probably going to result in you missing out on many good candidates. Because of the lack of standardization in job descriptions and processes, certifications are simply another data point to be factored into the equation. James McGovern has commented on how he likes it when potential candidates have blogs. The lack of a blog shouldn’t rule out a good candidate any more than the lack of a certification does. The existence of a blog, certifications, speaking experience, etc. all must be examined together.
