Assessments and evaluations
James McGovern recently posted on the subject of self-evaluation. I’m guessing that it must be annual review time at his place of employment. He makes some generalizations that I agree with, stating that it is “human nature to either trivialize and/or underestimate capabilities in other folks that they do not possess.” I believe the opposite is true as well: people tend to over-estimate and over-emphasize capabilities that are similar to their own.
While James chose to focus on how ineffective the typical evaluation and assessment process is for employees, I’d like to take it in a different direction. I agree that most people aren’t very good at assessing others, and are probably even worse at assessing themselves (it can go either way, overly negative or overly positive), but that doesn’t mean the practice should be abandoned, particularly the self-assessment.
One of the things that I’ve seen stated in articles and books, as well as experienced first hand in discussions, is the relationship of a solid self-understanding to being successful. You need to know your strengths, you need to know your weaknesses, and you need to be objective about them. Successful leaders don’t go out and find people with the exact same strengths as their own; they find people who complement their strengths. For example, I consider myself a big picture thinker, probably a common trait among architects. At the same time, I recognize that I’m not a very detail-oriented person. While I can certainly be an active participant in a project-planning workshop, putting up my fair share of post-it notes, seeing the details does not come naturally to me. For others, the exact opposite is true. They are able to see all the i’s that need to be dotted and all the t’s that need to be crossed, but they may not be able to tell you the paragraph that’s being written. So, when I’ve been assigned to a project in a leadership position, the first thing I always request is that the stakeholders find someone who is very detail oriented to work with me. While I’m sure that person will drive me nuts at times, the project won’t be successful without them. Do I continue to learn and improve? Sure. A key to success is to understand your strengths AND your weaknesses, and surround yourself with people who can complement you.
The other important thing that needs to be called out, which James did in his blog, is the importance of objectivity when performing an assessment. Well, what is objectivity? The truth is, there’s almost always going to be some amount of subjectivity associated with any assessment. Where does this subjectivity come from? It usually stems from the reference model that is used for comparison. If this reference model is documented, and both parties agree to it beforehand, you’ve at least eliminated one variable from the equation. The less that is formally documented, the more difficult the assessment becomes.
For example, if I were to come in and perform an SOA assessment on an organization, I need to have a reference model that I compare my findings against. There’s a problem, however, if the things that I deem important aren’t the things that you deem important. The process must begin with a mutual understanding of the things that will be assessed, and the criteria that they will be evaluated against. In my previous job, I was discussing a potential assessment with some vendors (consultants and product vendors), and some of them wanted to cram their model down my throat, even though it would have focused on a number of things that I knew we hadn’t done anything with yet. That would have been a waste of my time and theirs.
So how does this work for self-assessments? Again, you need to have a point of comparison when you evaluate yourself. If your reference model is out of whack, your assessment will be too. I don’t want my daughters or my son growing up and comparing themselves to what the media may portray as “important” for young women or young men. That’s a problem with the reference model. This reference model has to be independent of what your boss wants or what your company wants. Why? Without that independence, how will you know when it’s time to make a change? Your company has its own needs and desires. You have your own needs and desires. The best possible situation is where both sets of needs are met. That won’t always be the case, however, and that’s why people (should) change jobs or even companies. If the things you find important don’t match what the company finds important, that doesn’t mean you are a lousy employee or that your company is lousy. It just means that the needs don’t match and it’s time to make a change. That could mean a move within the company, or it may mean leaving. You can only know this if you’ve got a solid understanding of what you want to do, when you want to do it, and whether you’re capable of doing it. To do this, you have to begin with your own assessment.
My final piece of advice on this is pretty simple. Anytime you do an assessment, you should strive for some amount of balance. There should always be things that you want to call out as well done, and there should always be things that you want to improve. If everything looks rosy, or everything needs improvement, then you’ve got an unrealistic view of yourself.
In the clouds…
Joe McKendrick had a post on January 17th entitled, “SOA reaches out to the cloud – will business follow?” He discusses Dion Hinchcliffe’s prediction that in 2007, SOA will open up more to the Internet cloud. Dion had cited a McKinsey and Company survey that stated that 48% of CIOs surveyed are planning to open their SOAs “to the cloud” in 2007. Joe inquires as to what this means to the SOA business case.
My own thought is that it’s not about the SOA business case. SOA doesn’t justify opening services up to the cloud. It’s the other way around. If the business needs to communicate with external parties in a system-to-system interaction, guess what, you need SOA. I don’t think there’s anything new here, and clearly, many of the early SOA efforts that have been presented at conferences reflect this.
A friend of mine used to work for Rockwell Collins and had presented on their SOA efforts, which were largely around providing Web Services to corporate partners who previously required a person to log into an extranet portal to interact. The business hadn’t changed. Their business partners wanted to take some of the human element out of the business processes involved in interacting with them, and to do so, they needed services. One could even argue whether this was really SOA or not. The system that they have today clearly is a component of an enterprise SOA, but obviously, there were not services behind their existing portal that were simply opened up. They needed to rip apart the existing system and expose services.
So, my own opinion is that I’m not all that excited about this prediction. Companies need to do business. Oftentimes, that business requires interaction between two or more companies. Can SOA help in that interaction? Absolutely. On the flip side, however, is SOA really opening up new interactions between businesses that previously didn’t exist, or is it simply allowing those interactions to be more efficient? There may be some smaller opportunities that have gone unnoticed. A friend of mine, Fred Domke of Business Integration Technology, brought up the subject of office supplies over lunch. Every large enterprise needs office supplies, but how many of them have optimized that supply chain, leveraging SOA and BPM technologies? It’s probably not something that’s high up on any CIO’s list, but I’d bet that there are some potential savings there.
I have previously posted on the topic of outsourcing and said that SOA doesn’t necessarily mean any more or any less outsourcing, but it should mean a higher rate of success. Organizations should have a better handle on the boundaries that make up their systems, and as a result, have a better handle on where the key integration points are. At the same time, this effort could open up new opportunities, but I think those will be driven by the business strategy, not by the technology.
The Service Oriented Home
I had a meeting with a vendor earlier today and at one point, he said in jest, “I’m sure you’ve got some web services around your house.” I replied that I didn’t, which I don’t, but I certainly thought that there are plenty of opportunities for it. I was just going to leave it at that, but then I got home and the first entry in my news reader waiting to be read was a story from The Unofficial Apple Weblog on the Indigo Home Automation and Control Server. I decided then it must be a sign.
First off, there’s absolutely no reason SOA can’t be applied in the consumer world. I’ve previously blogged about SOA for schools, but this is the first time I’ve thought about SOA for the home. I don’t have any of the home automation technology (e.g. X10) in my house, but I’m at least aware of it and what it can do. While X10 is definitely a de facto standard, why shouldn’t all the devices in our home be web services based? Now, it probably isn’t cost effective to embed a SOAP stack in every lightbulb. Besides, X10 works just fine for that. There are plenty of integration and process problems in the house, however. Why do you think that convergence (i.e. integration) has been a theme of CES for the last 5 years? If all of the devices in our house had a standardized interface and spoke a common language, there could be great potential. I’d love to have a programmable oven that I could push cooking instructions to from the recipe I’ve got online. What about pausing the TV or at least turning down the volume when the phone rings? How about a programmable thermostat that actually taps into the various weather services available to really optimize power consumption? The same could hold true for an automated sprinkler system.
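To make the thermostat idea a bit more concrete, here’s a minimal sketch of the kind of logic a weather-aware thermostat service might run. Everything here is invented for illustration: no real home-automation API or weather service is implied, and a real version would replace the stub with an actual web service call.

```python
# Hypothetical sketch of a weather-aware thermostat "service".
# All names and rules are invented; the forecast call is a stub.

def fetch_forecast_high():
    """Stand-in for a call to an external weather forecast service."""
    return 85  # degrees Fahrenheit

def choose_setpoint(forecast_high, occupied):
    """Pick a cooling setpoint given the forecast and occupancy."""
    if not occupied:
        return 82          # save power when nobody is home
    if forecast_high >= 90:
        return 74          # pre-cool ahead of a hot afternoon
    return 76              # normal comfort setting

setpoint = choose_setpoint(fetch_forecast_high(), occupied=True)
print(setpoint)
```

The interesting part isn’t the rules themselves, it’s that the thermostat becomes a consumer of someone else’s service, which is exactly the integration opportunity the paragraph above is describing.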
There are a ton of research dollars being poured into the “digital home” right now, but how many of the vendors doing so are actually thinking about it as a service oriented home, with open standards and open technology? Unfortunately, the focus right now is on media, which means DRM, which means closed and proprietary. The AppleTV is a nice device, but the piece that’s missing, in my opinion, is the ability to stream to any monitor in my house. I’m not about to pay $299 for every single TV. Give me a $299 server and some $50 access points to hook up to the various screens in the house, and now it’s beginning to look promising. We have two DVRs in our house, and it just sucks when I want to watch a show in our living room, only to remember that we had recorded it downstairs because something else was on upstairs. So, my advice to all of the vendors out there dealing in the digital home is to start thinking about the service oriented home, and giving the consumer open services to be able to do things the way we’d like.
Design for Change?
An expression that you may have heard bandied about in SOA discussions is “design for change.” A recent exchange on a Yahoo group made me decide that I really don’t like it.
If you think about it, how can you possibly design for a change unless you know what that change is going to be? In that case, you’re not really supporting change, you’re supporting a known quantity. The only example that I can come up with that actually makes sense is systems that deal with regulatory enforcement. For example, Intuit knows that the Tax Code changes every year. If they had some web services associated with this in their TurboTax products, it is possible that the interfaces of these could be designed to support change. The reason they’re able to do this is because they have a good idea of how things change. They may not know what aspect of the tax code will change, however, so there’s still plenty of work to be done.
What we really should be espousing is a three-pronged approach. First, architect for change. Okay, some of you are immediately going to cry foul and say, “What’s the difference?” To me, architecture is about establishing boundaries. It is the process of splitting a problem up into independently maintainable components. To architect for change, you don’t need to know what changes will occur, you need to know where the changes may occur. If I haven’t broken out my tax calculation service from the user facing system that requires it, adding new tax laws into the product is going to be more costly because I did not establish appropriate boundaries. SOA is about architecting for change, not designing for change.
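The tax calculation example can be sketched in code. This is a minimal illustration, not Intuit’s actual design: the class names and the flat tax rate are invented. The point is the boundary: the user-facing code depends only on the interface, so next year’s tax rules are a new implementation behind the same line.

```python
# A minimal sketch of "architect for change": tax calculation sits behind
# its own boundary, so yearly tax-law changes are confined to one component.
# All names and the flat rate are hypothetical.

class TaxCalculator:
    """The boundary: consumers depend on this interface, not on the rules."""
    def calculate(self, taxable_income):
        raise NotImplementedError

class TaxYear2007Calculator(TaxCalculator):
    """One year's rules; next year, swap in a new implementation."""
    def calculate(self, taxable_income):
        return round(taxable_income * 0.15, 2)  # hypothetical flat rate

def render_summary(calculator, income):
    """The user-facing side only knows the boundary, not the tax code."""
    return "Tax owed: $%.2f" % calculator.calculate(income)

print(render_summary(TaxYear2007Calculator(), 50000))
```

Notice that nothing in `render_summary` has to change when the rules do; that containment is what "establishing boundaries" buys you.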
Second, design to the best of your current knowledge. In other words, don’t try to predict the future. Design is about what goes within those boundaries, but it certainly does involve the boundary itself (i.e. the interface). If you try to come up with an interface that supports all currently known needs as well as trying to predict the future, you can run into all sorts of problems, ranging from analysis paralysis to an interface that works for all but is equally hated by all. One point of note on this: designing to the best of your current knowledge doesn’t mean you only accept input from service consumers currently under development. If you know that another system is going to use your service in the future, then you should incorporate their needs into your design. If you merely hope that another system is going to use your service, now you’re starting to walk on thin ice.
Third, and most importantly, plan for change. It has been said that the only constant is change. Your systems, your services, your business: all of them will change. What gets us into trouble is not the fact that things have changed, it’s that it is a mad scramble when they do. Schedules have to be synchronized, resources assigned, etc. Think about how you deal with your vendors. Which would you rather deal with: a vendor that releases a version every 6 months, on schedule, without fail, or a vendor that sometimes releases versions within a month of each other, but sometimes as long as 18 months apart? As a consumer of that vendor, how can you possibly hope to plan your upgrades? To become a service provider, you must figure out how to effectively manage change. A standard release schedule for every service would be a great start. For an enterprise, those changes may not need to occur as frequently as for a commercial product. Perhaps there are standard dates that must be used.
The problem is not a technical one. This isn’t a debate over SOAP/WS-* versus REST. If the underlying XML message changes, the consumer and provider need to be modified, presuming the change isn’t backward compatible with the existing schema. If the organization has to scramble every time this situation occurs, that causes problems.
Why Software Sucks, part 2: Instrumentation
Continuing my discussion of this Technometria podcast on IT Conversations, where Phil Windley and others chat with David Platt, author of a new book entitled “Why Software Sucks… and What You Can Do About It,” I want to call out a great item on the importance of instrumentation.
David gave the example of a web site for a library. He described how he will read a review of a book online and immediately go to the library’s web site to request it. He said that the UI is hard to use because there are two use cases: one where the user is in the physical library and one where the user is accessing the web site from home. When accessing it from home, a user really doesn’t care if the book is in the library, whereas a user in the library certainly does. As a result, a design suited for use in the library doesn’t work well for people accessing it from home. David asked the team what percentage of requests for the site came from within the library versus what percentage came from outside the library. They didn’t know. Not knowing this makes it very difficult to design the site properly. David points out that you need to design for the masses and not the edge cases, although the latter is so often what we do.
The importance of instrumentation has always been a soap box of mine. Back in May of 2006 there was a Business Week article that discussed the natural advantage of web-based companies like Amazon and Salesforce.com which all revolved around instrumentation (my comments here). I just recently had a posting on the power of the feedback loop. None of this can happen without instrumentation, and this doesn’t apply solely to user interfaces. Do you know which operations of your services receive the most use? Do you know when they receive the most use? Do you know where that usage is coming from? This type of information needs to be captured and fed back into the development process to create better services, and make better use of resources. If 99% of a service’s operations aren’t used, why were they built in the first place? Without instrumentation, how will you know this?
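A minimal sketch of what capturing this usage data can look like follows. This is an illustration, not a recommendation of a particular tool: a real system would ship these counts to a monitoring backend, and the operation and caller names are invented.

```python
# Sketch of service instrumentation: record how often each operation is
# invoked and by whom, so usage data can feed back into planning.
from collections import Counter
from functools import wraps

usage = Counter()

def instrumented(operation_name):
    """Decorator that counts invocations per (operation, caller) pair."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, caller="unknown", **kwargs):
            usage[(operation_name, caller)] += 1
            return func(*args, **kwargs)
        return wrapper
    return decorator

@instrumented("getAccountBalance")
def get_account_balance(account_id):
    return 100.0  # stand-in for the real lookup

get_account_balance("A1", caller="portal")
get_account_balance("A2", caller="portal")
get_account_balance("A3", caller="batch-job")
print(usage.most_common())  # which operation/caller pairs dominate?
```

Even something this crude answers the questions above: which operations get used, when, and by whom. Without it, the 99% of unused operations stay invisible.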
Here are two examples that I’ve seen first hand:
- A service that normally had about 10,000 requests a day, but every now and then, it would balloon up to 100,000 requests or more for two days. There was a consuming application associated with end-of-quarter reporting that hammered the service. While no problems were experienced, this could have been a disaster. This directly led to work to capture accurate usage profiles of new consumers before they went into production.
- Another service was seen to have spikes of usage first thing in the morning, over lunch, and at the end of the day. These were times when the users of these applications had time to sit down and use the application versus the other activities that went on over the course of the day. Adjustments had to be made to the infrastructure to support the spiked access pattern instead of a steady rate over the whole day.
These examples served to open the door to better instrumentation. One thing to look for as part of a continual improvement process is sequences of service interactions. I may see that two or more services are always called in sequence, by multiple applications. This may be an opportunity to create a composite service (or simply to rewrite the first service) so it handles the entire sequence of interactions, improving performance and making life easier for the consumer. You can only get this information through instrumentation, however.
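The analysis itself can be quite simple once the data exists. Here is a sketch of mining an interaction log for composite-service candidates; the log format, service names, and threshold are all invented for illustration.

```python
# Sketch of mining instrumentation data for composite-service candidates:
# count adjacent pairs of service calls per consumer, and flag pairs that
# repeatedly travel together. The log format is hypothetical.
from collections import Counter

def frequent_pairs(call_log, threshold=2):
    """call_log: list of (consumer, service) tuples in time order."""
    pairs = Counter()
    previous = {}
    for consumer, service in call_log:
        if consumer in previous:
            pairs[(previous[consumer], service)] += 1
        previous[consumer] = service
    return [pair for pair, count in pairs.items() if count >= threshold]

log = [
    ("appA", "getCustomer"), ("appA", "getOrders"),
    ("appB", "getCustomer"), ("appB", "getOrders"),
    ("appA", "getCustomer"), ("appA", "getOrders"),
]
print(frequent_pairs(log))  # [('getCustomer', 'getOrders')] is a candidate
```

A pair that shows up across multiple consumers, like the one above, is exactly the signal that a single composite operation might serve everyone better.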
Why Software Sucks, part 1
Phil Windley, in his latest Technometria podcast on IT Conversations, had a discussion (well, he listened a lot) with David Platt, author of a new book entitled “Why Software Sucks… and What You Can Do About It.”
While the podcast is quite lengthy, it’s very good and quite entertaining. As someone with a background in human computer interaction and usability, David’s comments certainly hit home for me. While it wasn’t anything that I hadn’t heard before, it’s still something that every developer should hear. Near the end of the podcast, he gave some guidelines for both users of systems and developers of systems. One thing he said should be done is to have someone involved in the design of the user interface who has absolutely no clue about the implementation of the system. He gave an example where this person would ask the question, “Why are those two fields next to each other?” Oftentimes, the response would be, “Well, those two columns are next to each other in the database.” Guess what? You’re letting the implementation drive the interface.
This is by no means an easy task. I had a conversation with a good friend of mine who is a consultant on user centered design and usability, where he was asked about some of the potential conflicts between a user centered design approach and an SOA approach. My comments were that there shouldn’t be. UCD should be concerned with the user interface, period. There will be backing services that support that user interface, but those interfaces are system interfaces, not something that gets exposed to the user. The conflict only arises when that separation between interface and implementation begins to blur. Everyone knows that people tend to take the path of least resistance, so it’s very easy to design a system where the user interface unduly influences backend implementation (and hence service design), and vice versa. The challenge is to realize that there is a separation of concerns. The UI team shouldn’t be telling the service developers how to do their job, and the service developers shouldn’t be telling the UI team how to do theirs. At the end of all of this, we somehow have to come up with a system that meets both concerns. It’s not an easy task, but it’s what makes our jobs fun!
Now, keep reading to my next post for another great item from this podcast. I didn’t want to include it here, in case people didn’t have interest in this part of it.
iPhone and AppleTV
While there will be no shortage of blogs on this product, you can add my name to the list of people who can’t wait to get their hands on one. What a coup for Cingular, as well. Dana Gardner has already stated that he’s prepared to pay the huge contract termination fees with Sprint and Verizon to get one. I’ll have to change providers as well, although I’ll only be 6 months shy of my contract length, so I’ll have to do some math to see what the best option is. Hopefully, Cingular won’t be charging a premium for services on top of what the other smart phones may cost.
I’m sure that this device will have its fair share of glitches in the initial release, but it’s really refreshing to see this level of leapfrogging occur in the technology space. It’s even funnier after Bill Gates’ interview with CNET that talks about how Apple is at a disadvantage due to their closed platform. While there are elements of truth in what Bill had to say, Apple is doing exactly what they need to do to be a successful, innovative company. While Microsoft may have a more stable revenue stream as the backbone of the Windows PC, they will be stuck in the world of commodity technology. While Robert Scoble calls out that Microsoft has competing technologies, it seems to me that each is either in the wrong form factor or poorly marketed. I don’t want a gaming system to be able to stream video to my television. I’d rather have a device that excels at that function. I wouldn’t classify the AppleTV as innovative, but I would say that it puts user experience above all else, which is what will differentiate it, as with the iPhone.
Managing Service Development
I’ve been participating in a thread on a Yahoo group about Service Analysis and Design, and broached the subject of project management. I recently authored a paper on this for a client, and thought it was time to share some thoughts here. Mark Griffin recently had a post titled SOA Mistakes where he discusses avoiding JBOWS: Just a Bunch Of Web Services. Why do companies wind up with JBOWS? I think a lot of it has to do with the project management culture of organizations.
What is a project manager’s primary concern? Well, it could be one of three things: delivering the solution as defined (scope), delivering the solution on time (schedule), or delivering the solution on budget (resources). Based on my own experience, the one thing that will drive a project manager nuts is when you add scope. Adding scope either lengthens the schedule or adds cost, both of which are probably what the PM’s performance is being judged against. Most PMs I know aren’t evaluated on their ability to deliver what was promised; they are evaluated on whether dates were met and costs were controlled.
Now, the next assumption is tied to the project inception process. Odds are that the project was proposed because of some pain point: a symptom. It’s like going to the doctor and saying “I have a sore throat.” IT goes off and builds some user facing application to address that symptom, because that’s the information they were given, establishing the maximum scope of the project. Using my doctor analogy, it would be equivalent to the doctor simply looking at your throat and giving you a box of lozenges. Now let’s suppose that the user facing application we built required a new service. That service would be built solely to the specs of the user facing application. If you asked the project manager for resources to seek out other requirements, they’re going to tell you that’s out of scope. So, the service then becomes a part of the throat lozenge. In reality, the reason behind the pain point may have been a broader business process problem. It could also be the case that the pain point had become an epidemic and was also occurring in other parts of the enterprise. Using the doctor analogy, I could also have had a runny nose, with itchy eyes, and hives breaking out: symptoms of an allergic reaction. In this case the lozenge is going to do very little to alleviate the problem, just as the IT solution presented probably won’t do very much in the long run either.
There are two things I’d like to throw out there as suggestions that may help reduce your risk of JBOWS:
- We need to be performing analysis outside of the context of a project, solely for the sake of getting a better understanding of the business and the IT solutions that support it. If we don’t have models that give a broader picture, IT (and the business) will always be in the position of treating symptoms rather than the disease. If analysis is only performed within projects, the scope of that analysis is already constrained. Projects create constraints, just as architecture creates constraints. The architectural constraints should help drive the project definition, however, not vice versa.
- Services should be broken out as independently managed efforts. I think the temptation to cut scope is simply too great when a consumer of a service and the service itself are developed within the same project. For sure, if a service is known up front to have more than one consumer, it must be managed independently of the development efforts of either consumer. Unless your organization is able to be flexible when adding scope, the risks are too great. If you want loosely coupled services, then the service development should be decoupled from the consumer development.
Avoiding JBOWS requires challenging the way we’ve done things in the past. If you’re still building projects the same old way that you have, odds are nothing will change. Services will be built within projects, and people will have to justify breaking a service out as a separate effort. We build to our constraints and assumptions, and if people assume it won’t be reused, they’ll probably build something that won’t be reused. Rather, you should take the opposite approach. Assume from the get go that all services will be enterprise services, and will be reused. If the organization is embracing SOA, people should have to justify why a service shouldn’t be independent, not vice versa.
I’d love to hear other people’s thoughts on this. I’ll be posting some more on the subject, specifically on managing the handoffs when consumers and services are being developed in parallel.
Let’s talk baseball
To my regular readers: for a change, this post has nothing to do with IT, SOA, or anything about technology. If you’re not a baseball fan, you can skip the rest.
I had my iPod on shuffle, as I usually do, and a song came up from 1998. It was done by a local St. Louis DJ, Craig Cornett, and called “McGwire’s Home Run.” Essentially, it’s a series of audio clips from various home runs hit by Mark McGwire in 1998, the year he hit 70 home runs to break Roger Maris’ single season record, which was subsequently broken by Barry Bonds three years later.
I live in the St. Louis area, and there’s been no shortage of articles and opinions on Mark McGwire this year, as he is eligible for the hall of fame. Based on what I read, it’s pretty clear that he won’t get elected this year. This all stems from the steroid problems within Major League Baseball.
I’ve been a baseball fan for as long as I can remember, from getting free tickets to Chicago White Sox games for getting good grades while growing up, all the way through the excitement of seeing my hometown Cardinals win the World Series this year. While winning the World Series this year was exciting (especially since my 4.5 year old son happened to wake up during the ninth inning and I let him watch it with me, although I think he was still asleep), the most excitement I can ever remember in a baseball game was the night McGwire hit #62. Baseball was still struggling to come back from the strike years, and the drama of the Sammy Sosa-Mark McGwire home run chase brought baseball back into the limelight. I listen to the sound clips of the late, great Jack Buck, his son Joe Buck, Mike Shannon, Joe Morgan, and Jon Miller and I still get chills. I can remember sitting with my wife watching the game. Just a few days earlier, we were fortunate enough to be at Busch Stadium and see him blast #60 off of the glass facade of the Stadium Club restaurant. The eruption of the crowd when #62 just barely made it over the wall still stirs emotions in me.
Now, hall of fame qualifications are not made with a single swing of the bat. Just as with any statistic, it’s not the statistic alone, but the combination of the statistic and the era in which it was achieved. There will be a day when many, many players will have hit 500 home runs, so I can’t say whether that achievement is enough to put Mark McGwire in the hall of fame. What I can say, however, is that Mark McGwire made baseball enjoyable for this fan. He’s always conducted himself with class, and I have more disdain for Congress for wasting my tax dollars on inquiries into a privately run professional sports league when they should have been getting something useful done instead of grandstanding. I feel bad for any person that was subpoenaed for that effort. They were placed in a situation that they didn’t deserve to be in, under a level of public scrutiny that was uncalled for. Yes, professional athletes are celebrities, and must live with their lives under a microscope, but this wasn’t the paparazzi, this was the U.S. Congress. We’ll never know whether Mark McGwire took steroids or not. Personally, I don’t think Mark McGwire was “juiced” when he hit 70 home runs. He had already been under so much scrutiny in the years prior to 1998, it would have been very unlikely. Did steroid use occur earlier in his career prior to the scrutiny he faced? Who knows. Frankly, I don’t care. Unlike Barry Bonds, he carried himself with class and humility in everything he did as a member of the St. Louis Cardinals. Whether or not Mark gets into the hall of fame, his 62nd home run of 1998 will always be in my personal hall of fame of baseball moments.
The power of the feedback loop
I watched the latest video blog from Amanda Congdon, formerly of Rocketboom, now with ABC News. In this video, Amanda is at a dairy farm in Vermont. What makes this dairy farm unique is that it is entirely cow powered. They harvest the by-products of the cows and extract the methane gas, which powers a turbine that generates enough electricity for the farm, with plenty left over. The remnants of the by-products are separated into liquid by-products and solid by-products. The liquid by-products go to a lagoon; the solid by-products wind up becoming bedding for the cows. I found it very cool.
So here’s the analogy to SOA. You may find it a bit of a stretch, but everyone’s entitled to their opinion. Personally, I like to look for parallels in the real world to how our IT systems should behave. In the past world of dairy farming, I’m sure there was a time when the farmer was concerned with one thing: producing milk. Are there costs associated with it? Sure. Are there by-products? Absolutely. But in the end, the farmers really just cared about producing and selling milk. Now the practice has progressed: rather than a linear system with cow feed at the beginning and milk at the end, it is a closed-loop environment where even the by-products are turned around and leveraged in the process. Where are we at with IT systems today? I’d argue that most enterprises are still in the linear mode of thinking. You could even argue that it goes beyond IT and into the business thinking, but being an IT guy, I’ll limit my assumptions there. IT produces solutions, and then forgets about them unless a user complains or some alarm goes off. If an organization takes on SOA, but still operates with this mentality, the only thing that has changed is that they are producing services instead of applications.
If an IT organization (and even the business) wants to mature and continue to wisely invest its IT dollars, the thinking has to stop being linear and start focusing on continual improvement. When a service goes into production, monitoring needs to go beyond whether or not the service is available. Metrics (the by-products) must be extracted from the process and incorporated back into the planning process to continually improve the performance and behavior of IT systems. While it may begin with operational metrics such as response time, there’s no reason it can’t grow to involve business metrics and business events. These business events and metrics are processed by our analytics engines (business intelligence), and the results are incorporated back into the IT systems themselves. Sometimes it may be a manual process, where future improvements are justified through the analysis of usage metrics; other times it may be more automated, where resources are automatically provisioned according to external factors that have been shown to increase demand. In any case, the IT operating model needs to be a loop, rather than a line. This isn’t anything new, as the concept of continual business improvement has been around for a long time. What’s new is that it needs to be in the mindset of all of IT, all the way down to the developer writing the next service. While I’m sure those cows in Vermont don’t know that their manure is being used to keep their living space nice and cozy, the IT worker does need to know that exposing metrics, whether IT-centric or business-centric, is a key to creating a feedback loop for the continual improvement of IT systems.
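The by-product idea can be sketched in a few lines of code. This is a minimal Python sketch, not tied to any particular monitoring product; the collector class and metric names are invented for illustration. The point is simply that a service can emit metrics as a side effect of doing its job, and a planning process can read them later:

```python
import time
from collections import defaultdict

# Hypothetical metrics collector; names and structure are illustrative,
# not any real monitoring product's API.
class MetricsCollector:
    def __init__(self):
        self.samples = defaultdict(list)

    def record(self, name, value):
        self.samples[name].append(value)

    def average(self, name):
        values = self.samples[name]
        return sum(values) / len(values) if values else 0.0

def instrumented(collector, metric_name):
    """Decorator that emits a response-time metric as a by-product of each call."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            collector.record(metric_name, time.perf_counter() - start)
            return result
        return inner
    return wrap

collector = MetricsCollector()

# A stand-in for a business service; the function and its data are made up.
@instrumented(collector, "get_quote.response_time")
def get_quote(symbol):
    return {"symbol": symbol, "price": 42.0}

get_quote("IBM")
get_quote("HPQ")
# The planning process can now read collector.average("get_quote.response_time")
# to justify future improvements -- a loop rather than set-and-forget.
```

The same decorator could just as easily record business metrics (which symbols are being requested, by whom) rather than response times; what matters is that the data flows back to planning.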
Services for Managing the Network
Jeff Browning, Director of Product Management for F5, wrote an article entitled “Take a SON Approach for Agile SOA” available for your reading enjoyment at FTPOnline. The article would probably have been better titled “Network Management via Services for Agile SOA” but it still makes some great points, ones that I’ve commented on in the past.
He describes the old world (or perhaps it is today’s world) as set-and-forget.
Load balancers—the network technology most relevant to application architects and developers—were basically installed, set up for round-robin load balancing, and never touched again.
This is certainly an accurate portrayal. He goes on to discuss how there is a tremendous amount of context embedded within SOAP requests that a traditional HTTP load balancer can’t leverage. He points out the importance of this information when servers get overloaded, using an example of a change in interest rate and how it may impact services traffic at a financial services firm.
He then switches to a discussion about the ability to configure network devices. He correctly calls out that most devices provide some user-facing console or command-line interface. While a CLI can be scripted, he states:
These scripts usually work for static environments and events, but the approach conflicts with the agility and flexibility that SOA enables.
He ends with an example where he states:
The network device can monitor requests, look for errors, and invoke configuration changes to alter device setup, adding more standby servers to the pool or even redirecting new requests to another data center running additional instances of the Web service. Additionally, prioritized requests based on client request ID or other factors could be sent to an entirely separate pool of servers hosting the service in a manner optimized for high-demand scenarios.
Based on business priority, the number of server resources, the priority, and dynamic changes to the configuration, automated change can be done seamlessly, with the service and network working together through more network intelligence and control.
Interestingly, this is the exact scenario I used when describing SOA to infrastructure engineers and why they should be concerned about it. I did an informal survey of about 50 of these engineers at one company, each of whom managed a particular device or system in the infrastructure, and of those 50, only one product was known to have a web services interface for managing its configuration, and that was F5. I think I actually met Jeff when he stopped by for a briefing at this company. F5 is certainly not the only vendor to do this. I know that IBM DataPower devices expose all of the management capabilities available through their console as Web Services as well. It’s nice to finally see someone touting the importance of this capability, however. Let’s hope other vendors jump on the bandwagon and provide Web Services for managing their products. That means IBM, HP, Microsoft, and Intel need to start making some progress on the converged management specification promised back in March of 2006.
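To make the idea concrete, here’s a minimal Python sketch of what scripting a device change through a management web service might look like. The namespace, operation, and parameter names below are entirely hypothetical; a real device such as an F5 or DataPower box defines its own operations in its WSDL. What matters is that the request is machine-generated, not typed into a console:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
MGMT_NS = "urn:example:device-management"  # placeholder namespace, not a real device's

def build_add_member_request(pool, host, port):
    """Build a SOAP request asking a device to add a server to a load-balancing pool."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    # Hypothetical operation name for illustration only.
    op = ET.SubElement(body, f"{{{MGMT_NS}}}AddPoolMember")
    ET.SubElement(op, f"{{{MGMT_NS}}}Pool").text = pool
    ET.SubElement(op, f"{{{MGMT_NS}}}Host").text = host
    ET.SubElement(op, f"{{{MGMT_NS}}}Port").text = str(port)
    return ET.tostring(envelope, encoding="unicode")

request = build_add_member_request("web_service_pool", "10.0.0.17", 8080)
# An orchestration or monitoring layer would POST this to the device's
# management endpoint, letting software add capacity when errors spike
# instead of waiting for a human at a console.
```

Once configuration changes are just messages, the monitoring scenario Jeff describes becomes a closed loop: detect the overload, build the request, send it.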
P.S. Jeff, if you read this, drop me a line. I think I met you during a presentation at a previous employer. I’d love to hear more on what F5 is up to these days.
Why not more vodcasts?
There’s no shortage of webcasts these days. What’s frustrating, however, is that accessing them is still too inconvenient for me. It never fails that the one webcast I’m interested in will be scheduled at the same time as a meeting I have. I could have one meeting in the entire week, and it will coincide with the webcast.
If you’ve followed my blog, you also know that I’m an Apple fan, which means I have my share of iPods, including a Video iPod. Why aren’t companies leveraging video podcasts, or vodcasts, for delivery to consumers? While many webcasts are available on demand after the original presentation, they still tend to be streaming video, meaning I need to be connected to watch them. It would seem that someone could find a simple way to capture the appropriate marketing data when a user subscribes to the RSS feed, and again when a particular vodcast is downloaded. Yes, iPods can only handle 640×400 right now, but even that should be adequate. You can certainly send larger video if you want it viewed in iTunes. Personally, I think trying to tailor slides so that they are visible on an iPod could be a good thing for presentations. It would avoid those text-heavy slides that the presenter simply reads verbatim. So how about it, vendors? Talk to your preferred webcast provider and start making things available in a downloadable H.264 format.
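For anyone curious about the mechanics, the download side is just the standard RSS enclosure mechanism that podcatchers like iTunes already understand. Here’s a small Python sketch that builds one feed item; the episode title, URL, and file size are made up for illustration:

```python
import xml.etree.ElementTree as ET

def vodcast_item(title, video_url, length_bytes):
    """Build an RSS <item> with an <enclosure> pointing at a downloadable video."""
    item = ET.Element("item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "enclosure", {
        "url": video_url,
        "length": str(length_bytes),
        "type": "video/mp4",  # H.264 video in an MPEG-4 container
    })
    return item

# Hypothetical episode; the URL does not point at a real webcast.
item = vodcast_item(
    "SOA Webcast, Episode 1",
    "http://example.com/webcasts/soa-episode-1.m4v",
    157286400,
)
xml = ET.tostring(item, encoding="unicode")
# A podcatcher subscribed to the feed downloads the enclosure automatically,
# so the viewer can watch offline on an iPod instead of streaming live.
```

That’s the whole trick: publish the webcast recording as an enclosure in a feed, and the scheduling conflict disappears.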
Briefings Direct Comments
Yet another good discussion from Dana Gardner’s group of independent analysts at Briefings Direct:SOA Edition. In this edition (podcast / transcript), the panelists (Dana Gardner, Steve Garone, Joe McKendrick, Jon Collins, and Tony Baer) discuss the year in review and their predictions for 2007.
The first comments that I liked were from Jon Collins of MacehiterWard-Dutton. In commenting on the consolidation in the vendor space that occurred in 2006, Jon points out that “the one thing that’s been lacking so far … is integration, which is to me the ultimate irony, because it’s exactly what SOA is about as a concept.” Well said. He goes on to state that the vendors need to become service-oriented in terms of how they put together their products. What I find great about that is that this trend toward the superplatform will make vendors think about their management interfaces. After all, this is where the integration must begin. If the integration points don’t exist, or are only available via a user-facing console, the integration cost will be higher and the effort will take longer. Prior to the superplatform, this expense was incurred by the end consumer. Now, let the vendors take on that cost. Let’s hope it’s done properly, however. All too often, vendors perform this integration but then fail to expose these services to the end consumer. Back when I was working for a large enterprise, I had a vendor come in and tell me how their management console was built using their portal product. I said, “That’s great! Are the individual components exposed as JSR-168 compliant portlets so I can build my own custom console with portlets from other products that operations needs?” Their answer: “Well, no…” Let’s get it right this time!
I was surprised at Tony Baer’s comments on the convergence of the Registry/Repository space. The only explanation was his comment that “at runtime, you don’t want a repository with a lot of overhead.” I’ll agree with this, but the reason for the convergence was not runtime, it was design time. Perhaps this was more intuitive to me because of the research I had done into component reuse in the 2000-2001 timeframe, and I understood the importance of metadata to support it. I think it’s only now that enterprises are starting to figure out an appropriate role for the registry/repository in the run-time environment.
In the predictions portion of the podcast, Joe McKendrick made the boldest statements, stating that “SOA as a term has crested.” He went on to discuss event-driven architecture, consistent with one of his recent blog posts, correctly calling out the ability for it to integrate with business intelligence solutions. Personally, I think an enterprise that is able to leverage SOA, EDA, and their BI systems is in the upper echelons of SOA maturity. Joe was also the most conservative regarding the rate of SOA adoption, expecting only a 20% increase from 2006.
Tying the vendor consolidation comments to Joe’s comments on the cresting of SOA, I think it is true that the superplatform vendors need to position their solutions not as enterprise SOA infrastructure, but as enterprise infrastructure. As Jon suggested, a service-oriented view helps break down the infrastructure needs into capabilities. These capabilities aren’t specific to SOA, however; they are enterprise needs. For SOA to be successful, it does need to get to the point where it is simply what we do, rather than being some special initiative within IT while the rest of IT continues to operate the same way it has in the past.
Predictions for 2007
Call it a wish list, or call it predictions, here are my thoughts on SOA in 2007. Largely, I think we’ll see lots of movement in the operational management space, as you’ll see in my comments.
- Vendors: Surprise, surprise, the consolidation will continue. There aren’t too many niche SOA vendors left, and by the end of 2007 there will be fewer.
- Operational Management: At least one of the major Enterprise Systems Management providers will actually come out with a decent marketing effort around SOA management, along with a product to back it up. As everyone knows, Systems Management is still the ugly stepchild that is far too often an afterthought, in my opinion. Systems Management technologies, however, are exactly what the doctor ordered to create a mature practice of Service Product Management (not Service Management in the ITIL sense, but a Product Manager for a Service). Without metrics, a product manager can’t hope to be successful. Without an appropriate metric collection and reporting infrastructure, the metrics won’t be available. Without this, there is no service lifecycle other than the service development lifecycle. The service goes into production and the development team forgets about it until a red light turns on. That’s not the way to practice Service Product Management.
- Registry/Repository: At least one player in the CMDB space will enter the Registry/Repository space, most likely through acquisition. There’s simply too much overlap for this not to occur.
- CMDB: At least one of the super-platform vendors will see the overlap between CMDB and Registry/Repository and begin to incorporate it into their offerings, either through partnership or through an acquisition. Interestingly, I see this as a natural outcome of efforts around the adaptive enterprise and closed-loop governance. While virtualization and other technologies are allowing data centers to be consolidated, it’s still too static a process with too much manual involvement. Get the metadata into a repository, collect metrics at run-time, run some analysis, and adapt as needed. Incorporate it earlier in the development lifecycle, comparing testing results against baselines from production systems, and we can shift into a predictive mode. This stuff is happening in research labs, but still hasn’t gained traction from a marketing perspective.
- Events: The importance of events has received some recent press, but unfortunately got mixed up in the awful marketing message of SOA 2.0 earlier in the year. I think we’ll see renewed interest in event description, as I see it as a critical tool for enabling agility. Services provide the functional aspect, but there needs to be a library of triggers for those functions. Those triggers are events. Along with this, the players in the Registry/Repository space will begin to market their ability to provide an event library just as they can provide a service library.
- Service Development and Hosting: Momentum’s CEO, Jeff Schneider, had predicted that the business process platform would become the accepted standard foundation for enterprise software in 2006. Well, that didn’t happen, but the trend is still in that direction. Personally, I think we’ve lost sight of the importance of service containers, partially because of the confusion created by the ESB. While it’s too early to proclaim the application server dead in 2007, I think the pendulum will begin to swing away from flexibility (i.e., general-purpose servers that host things written in Java or C#) and toward specialization. A specialized service container for orchestration can provide a development environment targeted at orchestration, with an execution engine targeted for that purpose. The magic question is: how many domains of engines will we need? Is it simply two, a general-purpose application server for Java/C# code and a model-driven process server for orchestration/integration, or are there more? If it is simply those two, I think we’ll definitely see less usage of the general-purpose application server and more usage of the model-driven process server.
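As an aside on the closed-loop governance idea in the CMDB prediction above, the “collect metrics, run some analysis, and adapt as needed” step can be sketched as a simple policy. This is a toy Python illustration; the baseline, tolerance, and function names are invented, not taken from any product:

```python
# Toy sketch of closed-loop capacity planning: compare run-time metrics
# against a baseline from pre-production testing and decide whether to adapt.
BASELINE_RESPONSE_TIME = 0.250  # seconds, from test results against the baseline
TOLERANCE = 1.5                 # allow 50% degradation before acting

def plan_capacity(current_instances, observed_response_times):
    """Return the instance count the next provisioning cycle should target."""
    avg = sum(observed_response_times) / len(observed_response_times)
    if avg > BASELINE_RESPONSE_TIME * TOLERANCE:
        return current_instances + 1  # degraded: provision another instance
    if avg < BASELINE_RESPONSE_TIME and current_instances > 1:
        return current_instances - 1  # headroom: release an instance
    return current_instances

# Observed metrics show degradation, so the loop asks for more capacity.
target = plan_capacity(2, [0.300, 0.420, 0.510])
```

A real adaptive-enterprise implementation would pull the baseline from the repository and push the provisioning change through the management interfaces discussed earlier; the loop itself is this simple.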
So what are your predictions for the SOA space in 2007? From a wish list perspective, my wish for you is simple: success in your SOA endeavours! I know a great company that can help make sure that happens, as well!
Kudos to the Apple Store
Just some public kudos for my local Apple Store at West County Mall in Des Peres, MO. Unfortunately, my MacBook Pro was one of the many with a fan problem. The fan on the left-hand side of the machine made a grinding noise whenever the CPU got hot, and it was steadily getting worse (mine wasn’t this bad, but the sounds were very familiar). Since I’m now a traveling consultant, I really didn’t want to be without my machine for too long, and with the holidays here, I had a week where it could go into the shop. I dropped it off on Saturday the 23rd, at about 1:30pm, knowing they’d be in their Christmas rush. The guy at the genius bar ran some tests, checked the machine in for repair, and told me they’d probably be able to turn it around by Friday the 29th or Saturday the 30th, due to the rush. That was okay with me, as I was simply hoping to get it back within a week.
Fast forward about 1 hour as I’m on my way home from the rest of my errands and my wife calls me and says, “your laptop is ready.” One hour turnaround instead of one week? At a store whose focus is retail, not repair? Two days before Christmas? I’ll take it. Thank you Apple for a very pleasant customer experience. Now let’s just hope the problem doesn’t return.