Archive for the ‘Enterprise Architecture’ Category
More on Service Lifecycle Management
I received two questions in my email regarding my previous post on Service Lifecycle Management, specifically:
- Who within an organization would be a service manager?
- To whom would the service manager market services?
These are both excellent questions, and really hit at the heart of the culture change. If you look at the typical IT organization today, there may not be anyone that actually plays the role of a service manager. At its core, the service manager is a relationship manager: managing the interactions with all of the service consumers. What makes this interesting is when you think about exposing services externally. Using the concept of relationship management, it is very unlikely that the service manager would be someone from IT; rather, it’s probably someone from a business unit that “owns” the relationship with partners. IT is certainly involved, and it’s likely that technical details of the service interaction are left to the IT staff of each company, but the overall relationship is owned by the business. So, if we only consider internal services, does the natural tendency to keep service management within IT make sense? This approach has certain risks associated with it, because now IT is left to figure out the right direction through potentially competing requirements from multiple consumers, all the while having the respective business units breathing down their necks saying, “Where’s our solution?” At the same time, it’s also very unlikely that the business is structured in such a way as to support internal service management. Many people would say that IT is often better positioned to see the cross-cutting concerns of many of these elements. So, there are really two answers to the question. The first answer is someone. Not having a service owner is even more problematic than choosing someone from either IT or the business who may have a very difficult task ahead of them. The second answer is that the right person is going to vary by organization. I would expect that organizations whose SOA efforts are very IT driven, which I suspect is the vast lot of them, would pick someone within IT to be the service manager.
I would expect that person to have an analyst and project management background, rather than a technical background. After all, this person needs to manage the consumer relationship and understand their requirements, but they also must plan the release schedule for service development. For organizations whose SOA efforts are driven jointly with the business, having a service manager within a business organization will probably make more sense, depending on the organizational structure. Also, don’t forget about the business of IT. There will be a class of services, typically in the infrastructure domains, such as authentication and authorization services, that will probably always be managed out of IT.
On question number two, I’m going to take a different approach to my answer. Clearly, I could just say, “Potential service consumers, of course” and provide no help at all. Why is that no help? Because we don’t know who represents those service consumers. Jumping on a common theme in this blog, most organizations are very project-driven, not service or product driven. When looking for potential service consumers, if everything is project driven, those consumers that don’t exist in the form of a project can’t be found! I don’t have a background in marketing, but I have to believe that there are probably some techniques from general product marketing that can be applied within the halls of the business to properly identify the appropriate segment for a service. The real point that needs to be made is that a service manager cannot take the Field of Dreams approach of simply building it, putting some information into the repository, and then hoping consumers find it. They have to hit the pavement and go talk to people. Talk to other IT managers whom you know use the same underlying data that your service does. Talk to your buddies at the lunch table. Build your network and get the word out. At a minimum, when a service is first identified, send a blast out to current project managers and their associated tech leads, as well as those that are in the project approval pipeline. This will at least generate some just-in-time consumers. While this may not yield the best service, it’s a start. Once some higher-level analysis efforts have taken place to segment the business into business domains, the right marketing targets may be more clearly understood.
Is Identity Your Enabler or Your Anchor?
I actually had to think harder than normal for a title for this entry because as I suspected, I had a previous post that had the title of “Importance of Identity.” That post merely talked about the need to get identity on your service messages and some of the challenges associated with defining what that identity should be. This post, however, discusses identity in a different light.
It occurred to me recently that we’re on a path where having an accurate representation of the organization will be absolutely critical to IT success. Organizations that can’t keep Active Directory or their favorite LDAP up to date with the organizational changes that are always occurring will find themselves saddled with a boat anchor. Organizations that are able to keep their identity stores accurate and up to date will find themselves with a significant advantage. An accurate identity store is critical to the successful adoption of BPM technology. While BPM may still be emerging, think about your operations staff and the need for accurate roles associated with the support of your applications and infrastructure. One reorg of operations and the whole thing could fall apart, with escalation paths no longer in existence, incorrect reporting paths, and more.
So, before you go gung-ho with BPM adoption, take a good look at your identity stores and make sure that you’ve got good processes in place to keep them up to date. Perhaps that should be the first place you look to leverage the BPM technology itself!
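One way to put such a process in place is a periodic sanity check that escalation paths still resolve to active identities. The sketch below is purely illustrative: a plain dictionary stands in for a real Active Directory or LDAP query, and all names are invented.

```python
# Hypothetical check: does every escalation path in a BPM process still
# resolve to an active entry in the identity store? A dict stands in
# for the real directory lookup here.

identity_store = {
    "alice": {"active": True},
    "bob": {"active": False},  # left in the last reorg
}

escalation_paths = {
    "sev1-outage": ["alice", "bob"],
    "sev2-degraded": ["alice"],
}

def broken_paths(paths, store):
    """Return escalation paths that reference missing or inactive identities."""
    bad = {}
    for name, chain in paths.items():
        stale = [uid for uid in chain
                 if uid not in store or not store[uid]["active"]]
        if stale:
            bad[name] = stale
    return bad
```

Run nightly against the directory, a check like this surfaces the stale escalation path before a severity-one incident does.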
Architect title
For all of my architect readers, give today’s Dilbert a glance. 🙂
March Events
Here are the SOA, BPM, and EA events coming up in March. If you want your events to be included, please send me the information at soaevents at biske dot com. I also try to include events that I receive in my normal email accounts as a result of all of the marketing lists I’m already on. For the most up to date list as well as the details and registration links, please consult my events page. This is just the beginning of the month summary that I post to keep it fresh in people’s minds.
- 3/3: ZapThink Practical SOA: Pharmaceutical and Health Care
- 3/4: Webinar: Implementing Information as a Service
- 3/6: Global 360/Corticon Seminar: Best Practices for Optimizing Business Processes
- 3/10 – 3/13: OMG / SOA Consortium Technical Meeting, Washington DC
- 3/10: Webinar: Telelogic Best Practices in EA and Business Process Analysis
- 3/11: BPM Round Table, Washington DC
- 3/12 – 3/14: ZapThink LZA Bootcamp, Sydney, Australia
- 3/13: Webinar: Information Integrity in SOA
- 3/16 – 3/20: DAMA International Symposium, San Diego, CA
- 3/18: ZapThink Practical SOA, Australia
- 3/18: Webinar: BDM with BPM and SOA
- 3/19: Webinar: Pega, 5 Principles for Success with Model-Driven Development
- 3/19: Webinar: AIIM Webinar: Records Retention
- 3/19: Webinar: What is Business Architecture and Why Has It Become So Important?
- 3/19: Webinar: Live Roundtable: SOA and Web 2.0
- 3/20: Webinar: Best Practices for Building BPM and SOA Centers of Excellence
- 3/24: Webinar: Telelogic Best Practices in EA and Business Process Analysis
- 3/25: ZapThink Practical SOA: Governance, Quality, and Management, New York, NY
- 3/26: Webinar: AIIM Webinar: Proactive eDiscovery
- 3/31 – 4/2: BPM Iberia – Lisbon
Perception Management
James McGovern frequently uses the term “perception management” in his blog, and there’s no doubt that it’s a function that most enterprise architects have to do. It’s an incredibly difficult task, however. Everyone is going to bring some amount of vested interests to the table, and when there’s conflict in those interests, that can create a challenge.
A recent effort I was involved in required me to facilitate a discussion around several options. I was putting together some background material for the discussion, and started grouping points into pros and cons. I quickly realized, however, that by doing so, I was potentially making subjective judgments on those points. What I may have considered a positive point, someone else may have considered a negative point. If there isn’t agreement on what’s good and what’s bad, you’re going to have a hard time. In the end, I left things as pros and cons, since I had a high degree of confidence that the people involved had this shared understanding, but I made a mental note to be cautious about using this approach when the vested interests of the participants are an unknown.
This whole space of perception management is very interesting to me. More often than not, the people with strong, unwavering opinions tend to attract more attention. Just look at the political process. It’s very difficult for a moderate to gain a lot of attention, while someone who is far to the left or far to the right can easily attract it. At the same time, when the elections are over, the candidates typically have to move back toward the middle to get anything done. Candidates who are in the middle get accused of flip-flopping. Now, put this in the context of a discussion facilitator. The best facilitator is probably one who has no interests of his or her own, but who is able to see the interests of all involved, pointing out areas of commonality and contention. In other words, they’re the flip-floppers.
I like acting as a facilitator, because I feel like I’ve had a knack for putting myself in someone else’s shoes. I think it’s evident in the fact that you don’t see me putting too many bold, controversial statements up on this blog, but rather talking about the interesting challenges that exist in getting things done. At the same time, I really like participating in the discussions because it drives me nuts when people won’t take a position and just muddle along with indecision. It’s hard to participate and facilitate at the same time.
My parting words on the subject come from my Dad. Back in those fun, formative high school years, as I struggled through all of the social dynamics of that age group, my Dad told me, “you can’t control what other people will do or think, you can only control your own thoughts or actions.” Now, while some may read this and think that this means you’re free to be an arrogant jerk and not give a hoot what anyone thinks about you, I took it a different way. First and foremost, you do have to be confident in your own thoughts and beliefs. This is important, because if you don’t have certain ideals and values on who you want to be, then you’re at risk of being someone that will sacrifice anything just to gain what it is you desire, and that’s not necessarily a good thing. Second, the only way to change people’s perception of you is by changing your own actions, not by doing the same thing the same way and hoping they see the light. I can’t expect everyone to read the topics in this blog and suddenly change their IT departments. Some may read it and not get it at all. Some may. For those that don’t, I may need to pursue other options for demonstrating the principles and thus change their perceptions. At the same time, there will always be those who are set in their ways because they have a fundamentally different set of values. Until they choose to change those values, your energy is best spent elsewhere.
Multi-tier Agreements from Nick Malik
Nick Malik posted a great followup comment to my last post on service contracts. For all of you who just follow my blog via the RSS feed, I thought I’d repost the comment here.
The fascinating thing about service contract standardization, a point that you hit on at the end of your post, is that it is not substantially different from the standardization of terms and conditions that occurs for legal agreements or sales agreements in an organization.
I am a SOA architect and a member of my Enterprise Architecture team, as you are, but I’m also intimately familiar with solutions that perform Contract Generation from Templates in the Legal and Sales agreements for a company. My employer sells over 80% of their products through the use of signed agreements. When you run $3B of revenue, per month, through agreements, standardization is not just useful. It is essential.
When you sign an agreement, you may sign more than one. They are called “multi-tier” agreements, in that an agreement requires that a prior one is signed, in a chain. There are also “associated agreements” that are brought together to form an “agreement package”. When you last bought a car, and you walked out with 10 different signed documents, you experienced the agreement package firsthand.
These two concepts can be leveraged for SOA governance in terms of agreements existing in a multi-tier environment, as well as services existing in an ecosystem of agreements that are part of an associated package.
For example, you could have one of four different supporting agreements that the deployment team must agree to as part of the package. All four could rely on the same “common terms and taxonomy” agreement that every development and deployment team signs (authored by Enterprise Architecture, of course). And you could have a pair of agreements that influence the service itself: one agreement that all consumers must sign that governs the behavioural aspects of the service for all consumers, and another agreement that can be customized that governs the information, load, and SLA issues for each provider-consumer pair.
If this kind of work is built using an automated agreement management system, then the metadata for an agreement package can easily be extracted and consumed by automated governance monitoring systems. We certainly feed our internal ERP system with metadata from our sales agreements.
Something to think about…
The Elusive Service Contract
In an email exchange with David Linthicum and Jason Bloomberg of ZapThink in response to Dave’s last podcast (big thanks to Dave for the shout-out and the nice comments about me in the episode), I made some references to the role of the service contract and decided that it was a great topic for a blog entry.
In the context of SOA Governance, my opinion is that the service contract is the “container” of policy that governs behavior at both design-time and run-time. According to Merriam-Webster, a contract is “a binding agreement between two or more persons or parties; especially : one legally enforceable.” Another definition from Merriam-Webster is “an order or arrangement for a hired assassin to kill someone” which could certainly have implications on SOA efforts, but I’m going to use the first definition. The key part of the definition is “two or more persons or parties.” In the SOA world, this means that in order to have a service contract, I need both a service consumer and a service provider. Unfortunately, the conversations around “contract-first development” that were dominant in the early days caused people to focus on one party, the service provider, when discussing contracts. If we get back to the notion of a contract as a binding agreement between two parties, and going a step further by saying that the agreement is specified through policies, the relationship between the service contract and design and run time governance should become much clearer.
First, while I picked on “contract-first development” earlier, the functional interface is absolutely part of the contract. Rather than being an agreement between designers and developers, however, it’s an agreement between a consumer and a provider on the structure of the messages. If I am a service provider and I have two consumers of the service, it’s entirely possible that I expose slightly different functional interfaces to those consumers. I may choose to hide certain operations or pieces of information from one consumer (which may certainly be the case where one consumer is internal and another consumer is external). These may have an impact at design-time, because there is a handoff from the functional interface policies in the service contract to the specifications given to a development team or an integration team. Beyond this, however, there are non-functional policies that must be in the contract. How will the service be secured? What’s the load that the consumer will place on the service? What’s the expected response time from the provider? What are the notification policies in the event of a service failure? What are the implications when a consumer exceeds its expected load? Clearly, many of these policies will be enforced through run-time infrastructure. Some policies aren’t enforced on each request, but have implications on what goes on in a request, such as usage reporting policies. My service contract should state what reports will be provided to a particular consumer. This now implies that the run-time infrastructure must be able to collect metrics on service usage, by consumer. Those policies may ripple into a business process that orchestrates the automated construction and distribution of those usage reports. Hopefully, it’s also clear that a service contract exists between a single consumer and a single provider.
While each party may bring a template to the table, much as a lawyer may have a template for a legal document like a will, the specific policies will vary by consumer. One consumer may only send 10,000 requests a day; another consumer may send 10,000 requests an hour. Policies around expected load may then be enforced by your routing infrastructure for traffic prioritization, so that any significant deviation from these expected loads doesn’t starve out the other consumers.
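To make the per-consumer idea concrete, here is a minimal sketch, with invented consumer names and limits, of how contracted load policies might be looked up and checked by routing infrastructure:

```python
# Illustrative only: one contract per consumer-provider pair, each with
# its own load policy. Names and numbers are made up.
from dataclasses import dataclass

@dataclass
class ContractPolicy:
    consumer: str
    max_requests_per_hour: int

# Two consumers of the same service, each with its own contract.
policies = {
    "internal-portal": ContractPolicy("internal-portal", 10_000),
    "partner-feed": ContractPolicy("partner-feed", 240),
}

def within_contract(consumer: str, requests_this_hour: int) -> bool:
    """Return True if the consumer is inside its contracted load."""
    policy = policies.get(consumer)
    if policy is None:
        return False  # no contract, no service
    return requests_this_hour <= policy.max_requests_per_hour
```

A gateway could deprioritize (rather than reject) traffic when `within_contract` returns False, which is the traffic-prioritization behavior described above.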
The last comment I’d like to make is that there are definitely policies that exist outside of the service contract that influence design-time and run-time, so don’t think that the service contract is the container of all policies. I ran into this while I was consulting when I was thinking that the service contract could be used as a handoff document between the development team and the deployment team in Operations. What became evident was that policies that govern service deployment in the enterprise were independent of any particular consumer. So, while an ESB or XML appliance may enforce the service contract policies around security, they also take care of load balancing requests across the multiple service endpoints that may exist. Since those endpoints process requests for any consumer, the policies that tell a deployment team how to configure the load balancing infrastructure aren’t tied to any particular service contract. This had now become a situation where the service contract was trying to do too much. In addition to being the policies that govern the consumer-provider relationship, it was also trying to be the container for turnover instructions between development and deployment, and a single document couldn’t do both well.
Where I think we need to get to is where we’ve got some abstractions between these things. We need to separate policy management (the definition and storage of policies) from policy enforcement/utilization. Policy enforcement requires that I group policies for a specific purpose, and some of those policies may be applicable in multiple domains. Getting to this separation of management from enforcement, however, will likely require standardization in how we define policies, and those standards simply don’t exist today. Policies wind up being tightly coupled to the enforcement points, making it difficult to consume them for other purposes. Of course, the organizational culture needed to support this mentality is far behind the technology capabilities, so these efforts will be slow in coming, but as the dependencies increase in our solutions over time, we’ll see more and more progress in this space. To sum it up, my short-term guidance is to always think of the service contract in terms of a single consumer and a single provider, and as a collection of policies that govern the interaction. If you start with that approach, you’ll be well positioned as we move forward.
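As a rough illustration of that separation, the sketch below keeps policy definitions in a single store and lets each enforcement point pull only the group it needs. The policy schema and every name here are invented, since, as noted above, no such standard exists.

```python
# Hypothetical separation of policy management from policy enforcement.
# One store holds all policy definitions; each enforcement point groups
# the policies it needs by domain.

policy_store = [
    {"id": "sec-01", "domain": "security", "rule": "require-mutual-tls"},
    {"id": "sla-07", "domain": "sla", "rule": "response-under-500ms"},
    {"id": "ops-02", "domain": "deployment", "rule": "two-endpoints-minimum"},
]

def policies_for(*domains):
    """An enforcement point pulls only the domains it cares about."""
    return [p for p in policy_store if p["domain"] in domains]

# The runtime gateway enforces security and SLA policies; the deployment
# team's checklist consumes the deployment policies from the same store.
gateway_policies = policies_for("security", "sla")
deployment_policies = policies_for("deployment")
```

The point of the sketch is that the deployment policies never appear in any service contract, yet they live in the same managed store, which is the abstraction the post argues for.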
So much workflow, so little time
If your organization has begun to leverage workflow technologies as built into the typical BPM suite, you’ve probably run into one of the key challenges: when to use the BPM’s workflow engine versus the workflow capabilities of any of the other products you have in your enterprise. There’s no shortage of products that have built in workflow capabilities. Think about it. Most document management products have workflow capabilities (albeit typically associated with document approval). Your portal may have workflow capabilities. Your SDLC tools may include workflow for bug tracking. Your operations group may leverage a ticketing/work management system with workflow capabilities. So what’s the enterprise to do?
One of the things that I’ve advised in the past is to consider whether or not the workflow involved requires any customization. Take the SDLC tooling as an example. While there’s typically some ability to change some of the data that flows through the process, the workflow itself probably doesn’t vary much from organization to organization. As a different example, imagine the workflow that supports the procurement and installation of new servers. While most organizations probably support this through a service request system, the odds are that the sequencing of tasks will vary widely from organization to organization. So, the need for customization is one factor. Unfortunately, it’s not the only one. You also have to consider the data element. Any workflow is going to have some amount of contextual information that’s carried throughout. So, while there may be a big need for process customization, that may be offset by the contextual information and schemas that are provided as part of a tailored third-party product.
All in all, the decision on when to use a general-purpose workflow engine and when to use a tailored product is no different than the decisions an organization makes on when to build solutions in house versus when to buy packaged solutions. Look at the degree of customization you need in the workflow versus the pre-defined processes. Look at the work that will be involved in setting up your own custom data stores and schemas versus the pre-defined databases. Ultimately, I think all large organizations will have a mixture of both. Unfortunately, that means some short-term pain. All workflow systems typically come with task management. Multiple tools means multiple task managers. Multiple task managers means multiple places to do your work. This isn’t exactly efficient, but until we have standard ways to publish tasks to a universal task list and other standards associated with the use of workflow engines, we should strive to make good decisions on a process-by-process basis.
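Until those standards arrive, building a universal task list means writing one adapter per workflow product. A hypothetical sketch, with every product field name invented:

```python
# Illustrative adapters that normalize tasks from two imaginary workflow
# products into one shared shape for a universal task list.

def from_bpm(task):
    """Adapter for a hypothetical BPM suite's task format."""
    return {"source": "bpm", "title": task["name"], "due": task["dueDate"]}

def from_ticketing(task):
    """Adapter for a hypothetical operations ticketing system."""
    return {"source": "ticketing", "title": task["summary"], "due": task["due_on"]}

def unified_task_list(bpm_tasks, ticket_tasks):
    """One place to do your work, instead of one per tool."""
    return [from_bpm(t) for t in bpm_tasks] + \
           [from_ticketing(t) for t in ticket_tasks]
```

Each new workflow product in the enterprise adds another adapter, which is exactly the short-term pain described above.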
February Events
Here are the SOA, BPM, and EA events coming up in February. If you want your events to be included, please send me the information at soaevents at biske dot com. I also try to include events that I receive in my normal email accounts as a result of all of the marketing lists I’m already on. For the most up to date list as well as the details and registration links, please consult my events page. This is just the beginning of the month summary that I post to keep it fresh in people’s minds.
- 2/4 – 2/6: Gartner BPM Summit
- 2/5: ZapThink Practical SOA: Energy and Utilities
- 2/7 – 2/8: Forrester’s Enterprise Architecture Forum 2008
- 2/11: Web Services on Wall Street
- 2/13 – 2/15: ARIS ProcessWorld
- 2/13: ZapThink Webinar: Leverage Document Centric SOA for Competitive Advantage
- 2/19: Webinar: Integrated SOA Governance
- 2/25 – 2/28: BPTG’s Business Process Transformation – BPM Practitioner Course
- 2/25 – 2/27: Global Excellence Awards in BPM & BPM Technology Showcase
- 2/26 – 2/29: ZapThink LZA Bootcamp
Gartner EA Summit Podcast
Thanks to Gartner and the SOA Consortium, the panel discussion I was part of in December at the Gartner Enterprise Architecture Summit is now available as a podcast. I’ve referenced it as an enclosure in this entry, so if you subscribe to my normal blog feed in iTunes, you should get it. If you have difficulty, you can access the MP3 file directly here. For all the details on the session, I encourage you to read Dr. Richard Soley’s post over at the SOA Consortium blog.
Gaining Visibility
While I always take vendor postings with a grain of salt, David Bressler of Progress Software had a very good post entitled, “We’re Running Out of Words II.” Cut through the vendor-specific stuff, and the message is that routing all of your requests through a centralized process management hub in order to gain visibility may not be a good thing. In his example, he tells the story of a company that took some existing processes and, in order to gain visibility into the execution of the process steps, externalized the process orchestration into a BPM tool. In doing so, the performance of the overall process took a big hit.
To me, scenarios like this are indicative of one major problem: we don’t think about management capabilities. Solution development is overwhelmingly focused on getting functional capabilities out the door. Obviously, it should be, but more often than not there is no instrumentation. How can we possibly claim that a solution delivers business value over time if there is no instrumentation to provide metrics? Independent of whether external process management is involved, whether gateway-based interception is used versus agent-based approaches, etc., we need to begin with the generation of metrics. If you’re not generating metrics, you’re going to have poor, or even worse, no visibility.
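As a sketch of what “beginning with the generation of metrics” could look like, the hypothetical Python below instruments a service operation so usage counts and latency are captured per consumer from day one. The service and consumer names are invented.

```python
# Illustrative instrumentation: a decorator that records request count
# and cumulative latency per (service, consumer) pair.
import time
from collections import defaultdict

metrics = defaultdict(lambda: {"count": 0, "total_ms": 0.0})

def instrumented(service_name):
    def wrap(fn):
        def inner(consumer, *args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(consumer, *args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                key = (service_name, consumer)  # metrics by consumer
                metrics[key]["count"] += 1
                metrics[key]["total_ms"] += elapsed_ms
        return inner
    return wrap

@instrumented("get-customer")
def get_customer(consumer, customer_id):
    # Stand-in for the real service logic.
    return {"id": customer_id}
```

Whether the numbers are gathered by an agent like this or by a gateway in front of the service matters less than the fact that they are gathered at all.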
Unfortunately, all too often, we’re only focused on problem management and resolution, and the absence of metrics only comes to light if something goes wrong and we need to diagnose the situation. To this, I come back to my earlier statement. How can we have any confidence in saying that things are working properly and providing value without metrics?
Interestingly, once you have metrics, the relationships (and potentially collisions) between the worlds of enterprise system management, business process management, web service management, business activity monitoring, and business intelligence start to come together, as I’ve discussed in the past. I’ve seen one company take metrics from a BPM tool and place them into their data warehouse for analysis and reporting from their business intelligence system. Ultimately, I’d love to see a differentiation based upon what you do with the data, rather than on the mere ability to collect the data.
SOA and the Economy
Ok, so Brenda Michelson called me out today on a conference call that I hadn’t made any comments regarding the impact of the economy on SOA initiatives. Of course, that’s like dangling a carrot in front of my face, so I now feel obligated to at least say something.
As I’ve stated before, I consider myself to be very pragmatic. I try to avoid knee-jerk reactions to highs or lows. So, the recent buzz around how the economy would impact SOA efforts was a bit of a non-event for me. Companies have always had to deal with their own performance, and adjust their budgets accordingly, and obviously, the general economy has a role in that performance. Depending on your industry, it could be large, or it could be insignificant. So, whether it’s an SOA project or any other IT project, the decision making process is still the same. The only things that have changed are the resources that are available. If you have the resources to do it, do it. If you don’t, it gets put off. Now, if we’re discussing how SOA initiatives should be prioritized relative to other IT projects, that’s a different debate, and one that is independent of the current funding at hand.
The other angle on this discussion is whether or not successful SOA adoption will help companies when their revenue streams are not what they’ve been in the past. If we come back to the marketing terms for SOA: reuse, agility, etc., I think these are all realized in the form of IT productivity and efficiency. In other words, I’m either getting more things done in the same standard time frame, or I’m reducing the time it takes to get any particular solution done in comparison to similar efforts in the past. I firmly believe that SOA should be doing this, and therefore, it’s a logical conclusion that companies that have successfully adopted SOA are in a better position to execute in an economic downturn than their competitors that haven’t. Of course, that assumes that information technology plays a key role in a business’ ability to execute and can be a competitive differentiator. There you go, Brenda!
More on Events
Ok, the events page is finally functional. I gave up on WordPress plugins, and am now leveraging an embedded (iframe) Google Calendar. A big thank you to Sandy Kemsley, as she already had a BPM calendar which I’m now leveraging and adding in SOA and EA events. I hope this helps people to find out the latest on SOA, EA, and BPM events.
SOA, EA, and BPM Conferences and Events
I’ve decided to conduct an experiment. In the past, I’ve been contacted about posting information regarding events on SOA, either via a comment to an existing post, or through an email request. I’ve passed on these, but at the same time, I do like the fact that I’ve built up a reader base and my goal has always been to increase the knowledge of others on SOA and related topics.
What I’ve decided to try is having one post per month dedicated to SOA events that will be occurring in the near future, in addition to this permanent page on the blog that people can review when they please. I’m not about to go culling the web for all events, but I will collect all events that I receive in email at soaevents at biske dot com, and post a summary of them including dates, topic, discount code, and link to the detail/registration page. Readers would be free to post additional events in comments; however, I don’t think very many people subscribe to the comment feed, so the exposure would be less than having it in my actual post. I’m going to try to leverage a Google Calendar (iCal, HTML) for this, which will also be publicly available, even if someone wants to include it in their own blog.
While this is essentially free marketing, in reality, I’d make a few more cents by having more visitors to this site than I would if all of these event organizers threw their ads into the big Google Ad Pool with the hopes of them actually showing up on my site. If I get a decent number of events from multiple sources in the first two months, I’ll probably keep it up. If I only get events from one source, I’ll probably stop, as it will begin to look like I’m doing marketing just for them, and I don’t want anyone to have the perception that my content is influenced by others.
The doors are open. Send your SOA, EA, and BPM events to me at soaevents at biske dot com. Include:
- Date/Time
- Subject/Title
- Description (keep it to one or two sentences)
- URL for more detail/registration
- Discount code, if applicable
If you want to simply send an iCal event, that would probably work as well, and it would make it very easy for me to move it into Google Calendar. My first post will be on Feb. 1, the next on March 1, and so on. I will post events for the current month and the next month, and may include April depending on how many events I receive.
What happened to the OS?
As I listened to some rumblings on a podcast about the soon-to-be-released Microsoft virtualization product, a thought came to my mind: what happened to the operating system?
This isn’t the first time that this thought has popped into my head, or something related to it. When I was first exposed to virtualization technology, I had the same question. Isn’t it the job of the operating system to manage the underlying physical computing resources effectively? Of course, I also realized how much I ignored the purpose of the operating system in my work as a Java developer and designer. I had become so embedded in the world of Java application servers that I had a difficult time getting my head back on straight when trying to guide some .NET server-side development. I kept trying to find something to equate to the Java application server that sat on top of Windows Server, when in fact, Windows Server is the application server. After all, operating systems manage the allocation of resources to system processes, which are applications.
Now this isn’t a post to knock virtualization technology. There are many benefits that it can provide, such as the ability to move applications around independent of the underlying physical infrastructure (i.e. an application isn’t bound to one physical server). But, if we look at the primary benefit being touted, it’s better resource utilization. Before we jump to virtualization, shouldn’t we understand why the existing operating systems can’t do their job to begin with? If our current processes for allocating applications to operating systems are fundamentally flawed, is virtualization technology merely a band-aid on the problem?
Based on my somewhat limited understanding of VMs, you can typically choose between dedicating resources to a VM guest, allowing the VM guest to pull from a shared pool, or some combination of the two. Obviously, if we dedicate resources completely, we’re really not much better off than physical servers from a resource utilization standpoint (although I agree there are other benefits outside of resource management). To gain the potential for improved resource utilization, we need to allow the VM host to reclaim resources that aren’t being used and give them to other processes. At this extreme, we run the risk of thrashing as the VM guests battle for resources. The theory is that the VM guests will need resources at different times, so the theoretical thrashing that could occur, won’t. So, we probably take some hybrid approach. Unfortunately, we still have the risk of wasting resources in the dedicated portion.
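To make the hybrid trade-off concrete, here is a toy model (not any real hypervisor’s API; the numbers and the first-come, first-served policy are my own illustrative assumptions) of guests that each get a dedicated reservation and can borrow any overage from a shared pool:

```python
# Toy model of hybrid VM resource allocation: each guest has a
# dedicated reservation, and demand beyond it is satisfied from a
# shared pool on a first-come, first-served basis.

def allocate(guests, shared_pool):
    """guests: list of (reservation, demand) tuples, in CPU units.
    Returns (allocations, wasted), where `wasted` counts
    reserved-but-idle capacity that no other guest can reclaim."""
    allocations = []
    wasted = 0
    for reservation, demand in guests:
        if demand <= reservation:
            # Demand fits within the reservation; the remainder is
            # stranded -- it cannot be given to another guest.
            allocations.append(demand)
            wasted += reservation - demand
        else:
            # Borrow the overage from the shared pool, if any remains.
            overage = min(demand - reservation, shared_pool)
            shared_pool -= overage
            allocations.append(reservation + overage)
    return allocations, wasted

# Three guests, each reserving 4 units, with a shared pool of 4.
allocations, wasted = allocate([(4, 2), (4, 6), (4, 7)], shared_pool=4)
print(allocations)  # [2, 6, 6] -- the last guest gets squeezed
print(wasted)       # 2 -- idle capacity stranded in the first reservation
```

Even in this tiny example, the first guest’s unused reservation is wasted while the third guest goes short, which is exactly the risk of the dedicated portion.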
The real source of the problem, in my opinion, is that we do a lousy job of understanding the resource demands of our solutions. We use conservative ballpark estimates, choose some standard configuration and number of app servers, do a capacity test, and send it out into the wild. Once it’s in the wild, we don’t collect metrics to see if the real usage (which is probably measured in site visits, not in resources consumed) matches the expected usage, and even if it comes in lower, we certainly won’t scale back because we now have “room to grow.” If we don’t start doing this, we’re still going to have less than optimal resource utilization, whether we use VMs or not. I don’t believe that going to a 100% shared model is the answer either, unless the systems get much more intelligent and take past trends into account in deciding whether to take resources away from a given VM guest or not.
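The feedback loop argued for above could be sketched roughly like this. The 95th-percentile threshold and 50% headroom cutoff are illustrative assumptions on my part, not recommendations, and the function name is hypothetical:

```python
# Hedged sketch: compare provisioned capacity against observed
# utilization samples and suggest a smaller allocation when real
# demand never approaches the estimate.

def right_size(provisioned, samples, percentile=95, headroom=0.5):
    """provisioned: capacity in CPU units; samples: observed usage.
    Returns a suggested capacity if the peak (at the given
    percentile) leaves more than `headroom` of the provision idle;
    returns None if the estimate is close enough to reality."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(len(ordered) * percentile / 100))
    peak = ordered[idx]
    if peak < provisioned * (1 - headroom):
        # The estimate was conservative; suggest peak plus slack.
        return round(peak * (1 + headroom), 2)
    return None

# Provisioned 8 units, but observed usage never exceeds 3.
print(right_size(8, [1.5, 2.0, 2.5, 3.0, 2.2, 1.8, 2.9, 2.4]))  # 4.5
```

The point isn’t the particular thresholds; it’s that without collecting the samples at all, neither a human nor a hypervisor has any basis for reclaiming the “room to grow.”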
Again, this post isn’t a knock on virtualization. One area that I hope virtualization, or more specifically, the hypervisor, will address is the bloat of the OS. Part of the resources go to the operation of the OS itself, and one can argue that there are a lot of things in it that we don’t need. While we can try to configure the OS to turn these things off, effectively a black-listing approach (everything is on unless it appears on the black list), I prefer the white-list approach: start with the minimal set of capabilities needed (the hypervisor) and then turn other things on only if you need them. I expect we’ll see more things like BEA’s WebLogic Virtual Edition that just cut the bloated OS out of the picture. But, as I’ve said, it only gets us so far if we don’t do a better job of understanding how our solutions consume resources from the beginning.