SOA Governance Podcast

I recorded a podcast on various SOA Governance topics with Bob Rhubart, Cathy Lippert, and Sharon Fay of Oracle as part of Oracle’s Arch2Arch Podcast series. You can listen to part one via this link, or you can find it at Oracle’s ArchBeat site here.

Governing Anonymous Service Consumers

On Friday, the SOA Chief (Tim Vibbert), Brenda Michelson, and I had a conversation on Twitter regarding SOA governance and anonymous service consumers. Specifically, how do you provide run-time governance for a service that is accessed anonymously?

If you’ve read this blog or my book, you’ll know that my take on run-time SOA governance is the enforcement and/or monitoring of compliance with the policies contained within the service contract. Therein lies the biggest problem: if the service consumer is anonymous, is there a contract? There’s certainly the functional interface, which is part of the contract, but there isn’t any agreement on the allowed request rates, hours of usage, etc. So what do we do?

The first thing to recognize is that while there may not be a formal contract that all consumers have agreed to, there should always be an implied contract. When two parties come to the table to establish an agreement, it’s likely that both sides come with a contract proposal, and the final contract is a negotiation between the two. The same thing must be considered here. If someone starts using a service, they have some implicit level of service that they expect to receive. Likewise, the service provider knows both the capacity they can currently handle and how they think a typical consumer will use the service. Unfortunately, these implied contracts can frequently be wrong. The advice here is that even if you are trying to lower the barrier to entry by having anonymous access, you still need to think about service contracts and design to meet some base level of availability.

The second thing to do, which may seem obvious, is to avoid anonymous access in the first place. It’s very hard to enforce anything when you don’t know where requests are coming from. Your authorization policy can simply be that you must be an authenticated user to use the service. Even in an internal setting, having some form of identity on the message, even if there are no authentication or authorization policies, becomes critical when you’re trying to understand how the systems are interacting, perform capacity planning, and especially in a troubleshooting scenario. Even services with low barriers to entry, like the Twitter API, often require identity.
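
As a minimal illustration of that last point, here is a sketch of an identity-first policy check. The header name (`X-Consumer-Id`) and the return convention are assumptions for illustration, not any particular gateway product’s API; the point is simply that every message carries an identity, even before any authorization policy exists.

```python
# Sketch: reject anonymous requests so every message can be attributed
# to a consumer during capacity planning and troubleshooting.
# "X-Consumer-Id" is a hypothetical header name, chosen for illustration.

def check_identity(headers: dict) -> tuple[bool, str]:
    """Return (allowed, reason) for an incoming request's headers."""
    consumer = headers.get("X-Consumer-Id")
    if not consumer:
        return False, "anonymous request rejected: no consumer identity"
    return True, f"request attributed to consumer '{consumer}'"
```

Even with no authentication behind it, a check like this gives you an attribution trail in your logs, which is the prerequisite for everything else discussed here.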

The next thing you should do is leverage a platform with elasticity. That is, the available capacity should grow and shrink with the demand. If it’s anonymous, and new consumers can start using it simply by getting the URLs from someone else, you have no control over the rate at which usage will scale. If the implied level of availability is that the service is always available, you’ll need on-demand resources.

Finally, you still need to protect your systems. No request is completely anonymous, and there are things you can do to ensure the availability of your service against rogue consumers. Requests will have source IP addresses on them, so you can look for bad behavior at that level. You can still do schema validation, look for SQL injection, etc. In other words, you still need to do DoS protection. You should also be looking at the usage metrics on a frequent basis to understand the demand curve and make decisions accordingly.

The Role of the Service Manager

Tony Baer joined the SOA Consortium on one of its working group conference calls this week to discuss his research on connections between ITIL and SOA. Both he and Beth Gold-Bernstein have blogged about the call, Beth focusing on the broader topic of SOA and ITIL, and Tony talking about the topic of service ownership, as these topics were the meat of the conversation between Beth, Tony, and myself.

I’ve spent the past few years thinking about all things SOA, and recently, I completed the ITIL v3 Foundations certification and have been doing a lot of work in the ITIL/ITSM space. When you move away from the technology side of the discussion and actually talk about the people and process side, you’ll find that there are significant similarities between ITIL/ITSM adoption and SOA adoption. Tony had a diagram in his presentation that illustrated this, which Beth reproduced on her blog. Having looked at this from both the SOA world of the application developer and the ITIL/ITSM world of IT operations, there’s a lot that we can learn from ITIL in our SOA adoption efforts. Foremost, ITIL defines a role of Service Manager. Anyone who’s listened to my panel discussions and heard my answer to the question, “What’s the one piece of advice you have for companies adopting SOA?” knows that I always answer, “Make sure all your services have owners.” I’ve decided I like the term “Service Manager” better than “Service Owner” at this point, but if you refer to past posts of mine, you can think of these two terms synonymously.

So what does a service manager do? Let’s handle the easy one. Clearly, service management begins with the initial release of the service. The service manager is accountable for defining this release and putting the project in motion to get it out the door. This involves working with the initial service consumer(s) to go over requirements, get the interface defined, build, test, deploy, etc. There’s probably a project manager, developers, etc. helping in the effort, but in a RACI model, it’s the service manager who has accountability. The work doesn’t end there, however. Once the service is in production, the service manager must receive reports on the service’s utilization, availability, etc., and continually make sure it meets the needs of its consumer(s). In other words, they must ensure that “service” is being provided.

They must also be defining the next release of the service. How does this happen? Well, part of it comes from analysis of current usage, part of it comes from external events, such as a merger, acquisition, or new regulations, and part of it comes from seeking out new customers. Some consumers may come along on their own with new requests. Reading between the lines, however, it is very unlikely that a service manager manages only one service. It is more likely that they manage multiple services within a common domain. Even if it is one service, it’s likely that the service has multiple operations. The service manager is the one responsible for the portfolio of services and their operations, and trying to find the right balance between meeting consumer needs and keeping a maintainable code base. If there’s redundancy, the service manager is the one accountable for managing it and getting rid of it where it makes sense. This doesn’t negate the need for enterprise service portfolio management, because sometimes the redundancy may be spread across multiple service managers.

So what’s the list? Here’s a start. Add other responsibilities via comments.

  • Release Management (a.k.a. Service Lifecycle Management)
  • Production Monitoring
  • Customer (Consumer) Management
  • Service Management
  • Marketing
  • Domain Research: Trends associated with the service domain
  • Domain-Specific Service Portfolio Management

Think hard about this, as it’s a big shift from many IT organizations today. How many organizations have their roles strictly structured around project lifecycle activities, rather than service lifecycle activities? How many organizations perform these activities even at an application level? It’s a definite change to the culture of many organizations.

SOI versus SOA

Anne Thomas Manes’ “SOA is dead” post back at the beginning of the year sparked quite a debate, which is still going strong. On the Yahoo SOA group, the question was asked about exactly what Anne meant by SOI, or Service-Oriented Integration. Here’s my response:

SOI, service-oriented integration, is probably best stated as WSOI: Web Services-Oriented Integration. It’s simply the act of taking the same integration points that arise in a project and using web services or some other XML-over-HTTP approach to integrate the systems. Could this constitute a service-oriented application architecture? Absolutely, but in my mind, there are at best incremental benefits to this approach versus some other integration technology.

Because the scope is a single application, it’s unlikely that any ownership domains beyond the application itself were identified, so there won’t be anyone responsible for looking for and removing other redundant service implementations. Because the scope of the services involved didn’t change, only the technologies used, it’s unlikely that the services will have any greater potential for reuse than they would with another integration technology, except that XML/HTTP will be more interoperable than, say, Java RMI, if that’s even a concern. To me, SOA must be applied at something larger than a single application to get anything better than these incremental gains. Services should be defined along ownership domains that create accountability for driving the redundancy out of the enterprise where appropriate.

This is why an application rationalization effort or application/service portfolio management is a critical piece of being successful. If it’s just a “gut feel” that there is a lot of waste in the IT systems, arbitrary use of a different integration technology won’t make that go away. Only working to identify the areas of redundancy/waste, defining appropriate ownership domains, and then driving out the redundancy through the use of services will make a significant difference.

Kindle 1 vs. Kindle 2

Those of you that follow me on Twitter know that my Kindle 1 recently suffered an untimely demise. I had the option of purchasing a refurbished Kindle 1, or getting the new Kindle 2, and I opted for the latter. I thought I’d highlight some of the differences that I’ve noticed for those of you that are considering upgrading and giving your Kindle 1 to another family member or friend.

Ergonomics. Like many Kindle 1 owners, I frequently would pick the device up and hit the next page button, or have it in its case and open it up to find that I had pressed the menu button a few times. That same feature, however, was a plus when I was actually using it: you can hit those buttons just about anywhere and they will respond. In addition to those buttons, the power switch and wireless switch on the back of the device were simply inconvenient. Outside of the buttons, the device had a bit of a flimsy feel to it. While I never had any problems with it, durable is not the word that would come to mind. At the same time, the actual shape of the device and its weight were very book-like, which was appealing.

The Kindle 2 is very different. It is much thinner and feels much sturdier. At the same time, there’s a lot more “whitespace” around the screen, which is essentially wasted space. I would have preferred to add thickness rather than width. There are no problems with accidentally hitting the next page buttons, and the power switch was moved to the top of the device, making it accessible when the device is in its case. The wireless switch was removed entirely and must now be controlled through a menu (I preferred having the physical switch). On the downside, the buttons aren’t as easy to press as on Kindle 1. I was accustomed to hitting the outside edge of the button, which works very well when on an elliptical trainer in the gym, and that won’t work with Kindle 2. You have to press the face of the button. Second, the changes in shape do make the device less book-like, especially when it’s not in its case. With the case on (the Amazon one, which must now be purchased separately), it was less of an issue. Finally, while it is an extra purchase, the latching mechanism for hooking it into the new case is much better. I have not had any issues with it falling out of the case.

Usability/Performance. I really didn’t have any issues with the performance of my Kindle 1. Yes, there’s the flash associated with page turns, but that’s an artifact of any e-reader that uses the eInk technology. Some people felt that there would be too much page flipping, but it didn’t bother me at all. The Kindle 2 performance is noticeably faster, but as I often tell people when discussing performance, the Kindle 1 was already good enough, so this wasn’t a big deal. The second improvement on the Kindle 2 is better grayscale support. If you’re using the Kindle to read technical documents, which I do, then I think this is something that you might find important. The Kindle 1 could only do 4 shades of gray; the Kindle 2 can do 16, and this does make a difference. For reading fiction, this is less of an issue. Finally, the Kindle 1 had a mirrored scrollbar that ran parallel to the vertical axis of the screen. You used a scroll-wheel to position it, and clicked it to select. The Kindle 2 replaced the scroll-wheel with a joystick, and did away with the mirrored scrollbar. I assume it’s because the performance of the screen improved, so they felt the scrollbar wasn’t needed. Personally, I liked the scrollbar better. Again, it’s not a huge deal, though.

Overall, the Kindle 2 verified my initial thoughts from the original announcement. It’s definitely an incremental improvement, but I don’t think the feature set associated with it is compelling enough for someone to ditch/sell their Kindle 1. There are still some things to work out, such as getting the ergonomics around those page buttons a bit better so they’re still very convenient, but not easily clicked by mistake. If you’re considering a Kindle 2 as your first e-reader, I absolutely recommend it. I love the reading experience on it, I love being able to manage my documents via Amazon, I like that it syncs up where you are within a book if you also have the iPhone Kindle app, and the convenience of the wireless modem for purchasing new content whenever and wherever (if you’re in the US) is great.

Is Twitter the Cloud Bus?

[Photo: my Poken]

Courtesy of Michael Coté, I received a Poken in the mail as one of the lucky listeners to his IT Management and RIA Weekly podcasts. I had to explain to my oldest daughter (and my wife), what a Poken is, and how it’s utterly useless until I run into someone else in St. Louis who happens to have one or go to some conference where someone might have one. Oh well. My oldest daughter was also disappointed that I didn’t get the panda one when she saw it on the website. So, if you happen to own a Poken, and plan on being in St. Louis anytime soon, or if you’re going to be attending a conference that I will be at (sorry, nothing planned in the near future), send me a tweet and we can actually test out this Poken thing.

Speaking of the RIA Weekly podcast, thanks to Ryan Stewart and Coté for the shout-out in episode #46 about my post on RIAs and Portals that was inspired by a past RIA Weekly podcast. More important than the shout-out, however, was the discussion they had with Jeff Haynie of Appcelerator. The three of them got into a conversation about the role of SOA on the desktop, which was very interesting. It was nice to hear someone thinking about things like inter-application communication on the desktop, since integration has been so focused on the server side for many years. What really got me thinking was Coté’s comment that you can’t build an RIA these days without including a Twitter client inside of it. At first, I was thinking about the need for a standard way for inter-application communication in the RIA world. Way back when, Microsoft and Apple were duking it out over competing ways of getting desktop apps to communicate with each other (remember OpenDoc and OLE?). Now that the pendulum is swinging back toward the world of rich UIs, it won’t surprise me at all if the conversation around inter-application communication for desktop apps comes up again. What’s needed? Just a simple message bus to create a communication pathway.

In reality, it’s actually several message buses. An application can leverage an internal bus for communication with its own components, a desktop/VM-based bus for communication with other apps on the same host, another bus for communication within a local networking domain, and then possibly a bus in the clouds for communication across domains. Combining this with Coté’s comment made me think, “Why not Twitter?” As Coté suggested, many applications are embedding Twitter clients in them. The direct messaging capability allows point-to-point communication, and the public tweets can act as a general pub-sub event bus. In fact, this is already occurring today. Today, Andrew McAfee tweeted about productivity tools on the iPhone (and elsewhere), and a suggestion was made about Remember The Milk, a web-based GTD program with an iPhone client and a very open integration model, which includes the ability to listen for tweets on Twitter that allow you to add new tasks. There’s a lightweight protocol to follow within the tweet, but for basic stuff, it’s as simple as “d rtm buy tickets in 2 days”. Therefore, if someone is using RTM for task management, some other system can send a tweet to RTM to assign a task to a Twitter user. The friend/follower structure of Twitter provides a rudimentary security model, but all in all, it seems to work with a very low barrier to entry. That’s just cool. Based on this example, I think it’s entirely possible that we’ll start seeing cloud-based applications that rely on Twitter as the messaging bus for communication.
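
The point-to-point pattern can be sketched in a few lines. The client below is a deliberately fake, in-memory stand-in for a real Twitter client (to avoid inventing API details); the `d <account> <task>` convention mirrors the Remember The Milk example above.

```python
# Sketch: Twitter direct messages as a point-to-point "cloud bus."
# InMemoryTwitter is a hypothetical stand-in for a real Twitter client;
# only the send/fetch shape of the interaction is being illustrated.

class InMemoryTwitter:
    def __init__(self):
        self.inboxes = {}  # account -> list of received direct messages

    def send_direct_message(self, account: str, text: str):
        self.inboxes.setdefault(account, []).append(text)

    def fetch_direct_messages(self, account: str):
        return self.inboxes.get(account, [])

def publish_task(bus, service_account: str, task: str):
    # The sending application plays the "d rtm ..." role from the example.
    bus.send_direct_message(service_account, task)

def consume_tasks(bus, service_account: str):
    # The receiving application polls its DMs and treats each as an event.
    return list(bus.fetch_direct_messages(service_account))
```

The friend/follower graph then acts as the access-control list: only accounts the service follows can deliver messages to it.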

SOA Governance RefCard Now Available

I’m happy to announce I’ve now published a RefCard (reference card) on SOA Governance based on the content in my book from Packt Publishing. If you want to get a taste of what the book has to offer, follow this link over to DZone.com to download it for free.

Don’t Go On an IT Diet, Change Your Behavior

I’ve refrained from incorporating the current economic crisis into my posts… until now. In a recent discussion, I compared the current situation to what many, many people do every new year. They make a resolution to lose weight, go on some fad diet or start going to the fitness center, maybe lose that weight, but then go right back to the behavior they had a few months prior and gain that weight (and potentially more) right back.

Enterprises are in a similar state. Priorities have shifted to where cost containment and cutting are at the top of the list. While the knee-jerk reaction is to stop investing in any long-term initiatives, this could be a risky approach. If I don’t eat for 4 days, I may quickly drop the weight I need to, but guess what? I still need to eat. Not eating for 4 days will only make me more unhealthy, and then when I do eat, the weight will come right back.

These times should not mean that organizations drop their efforts to adopt SOA, ITIL/ITSM, or any other long-term initiative. Most of these efforts try to achieve ROI through cost reduction by eliminating redundancy in the enterprise, which is exactly what is needed today! The risk, however, is that these efforts must be held accountable for the goals they claim to achieve. They must also be prepared to adjust their actions to speed up the pace, if that is possible. No one could have predicted the staggering losses we’re seeing, and sometimes it is necessary for a company’s survival to adjust the pace. If these efforts are succeeding in reducing costs, however, we shouldn’t kill them just because they take a longer time to achieve their goals; otherwise we’ll find ourselves back in the same boat when the next change in priorities or goals happens.

The whole point of Enterprise Architecture, SOA, and many of these other strategic IT initiatives is to allow IT to be more agile: to respond more quickly to changes in the business objectives. Guess what? We’re in the middle of an unprecedented change. My guess is that the best survivors of this meltdown will be the organizations that don’t go on a starvation diet, but instead simply recognize that their priorities and goals have changed and execute without significant disruption to the way they utilize IT. If your EA team, SOA efforts, ITIL efforts, or anything else are inefficient and not providing the intended value, then you’re at risk of being cut, but you were probably at risk anyway; now someone just happens to be looking for targets. If EA has been adding value all along, then you’ll probably be a strategic asset that will help your organization weather the storm.

Most Read Posts for 2008

According to Google Analytics, here are the top read posts from my blog for 2008. This obviously doesn’t account for people who read exclusively through the RSS feed, but it’s interesting to know what posts people have stumbled upon via Google search, etc.

10. Governance Does Not Imply Command and Control. This was posted in August of 2008, and intended to change the negative opinion many people have about the term “governance.”

9. To ESB or not to ESB. This was posted in July of 2007, and gave a listing of five different types of ESBs that exist today and how they may (or may not) fit into your environment.

8. Getting Started with SOA Governance. This was posted in September of 2008, just before my book was released. It emphasizes a policy first approach, stressing education over enforcement.

7. Dish DVR Upgrade. This was posted in November of 2007 and had little to do with SOA. It tells the story of how Dish Network pushed out an upgrade to the software on their DVRs that wiped out all of my existing timers, and I missed recording some shows as a result. The lesson for IT: even if you think there’s no chance that a change will impact someone, you still should make them aware that a change is occurring.

6. Most popular posts to date. This is rather humorous. This post from July of 2007 was much like this one. A list of posts that Google Analytics had shown as most viewed since January of 2006. Maybe this one will show up next year. It at least means someone enjoys these summary posts.

5. Dilbert’s Guide to Governance. In this post from June of 2007, I offered some commentary on governance in the context of a Dilbert cartoon that was published around the same timeframe.

4. Service Taxonomy. Based upon an analysis of search keywords people use that result in them visiting my pages, I’m not surprised to see this one here. This was posted in December of 2006, and while it doesn’t provide a taxonomy, it provides two reasons for having taxonomies: determining service ownership and choosing the technical implementation platform. I don’t think you should have taxonomies just to have taxonomies. If the classification isn’t serving a purpose, it’s just clutter.

3. Horizontal and Vertical Thinking. This was posted in May of 2007 and is still one of my favorite posts. I think it really captures the change in thinking that is required for more strategic solutions. However, I also now realize that the challenge is in determining when horizontal thinking is needed and when it is not. It’s not an easy question, and it requires a broad understanding of the business to answer correctly.

2. SOA Governance Book. This was posted in September of 2008 and is when I announced that I had been working on a book. Originally, this had a link to the pre-order page from the publisher, later updated to include direct links there and to the page on Amazon. You can also get it from Amazon UK, Barnes and Noble, and other online bookstores.

1. ITIL and SOA. Seeing this post come in at number one was a surprise to me. I’m glad to see it up there, however, as it is something I’m currently involved with, and also an area in need of better information. There are so many parallels between these two efforts, and it’s important to eliminate the barriers between the developer/architecture world of SOA and the infrastructure/operations world of ITIL/ITSM. Look for more posts on this subject in 2009.

Thank you!

I just happened to check my FeedBurner statistics and see that as of the first business day of 2009, I had over 1,000 subscribers to this blog for the first time. With a nice push of new subscribers from serverside.com due to Jack van Hoof’s review of my book that was posted there, I’m now over 1,100. While my posting frequency has slowed a bit, I hope to continue to provide useful information to all of you. As a corporate practitioner, I always enjoy hearing what peers are doing, so if there’s something you’d like me to talk about that may be relevant to your work, drop me an email or direct message on Twitter, and if it’s something I’ve thought about or worked on, I’ll do my best to share what I can. Again, thanks for your readership.

Defining the Technical Service Record

Here’s a topic for which I’d really like some community input, and I think it’s something that many of my readers have probably had to do, are doing, or would be interested in the result. If you’re adopting SOA, you’re likely using a Service Registry/Repository of one form or another. It can range from a set of scribbled notes on a whiteboard or post-its in some architect’s office/cube, to Excel, to one of the many vendor products available for this purpose. So, assuming you are actually using one of these mechanisms, what are you recording about your services, the consumers of those services, and how/where are you capturing the relationship between the two? In this post, I’m going to start with the first question, the answer to which constitutes what I call the technical service record. Please note that the focus of this is on services that have a programmatic interface, and not the broader business service or ITIL service space, although I am very interested in the overlap between this record and the service record that would exist in an ITIL v3 Service Portfolio.

Here’s a list of items that could be recorded about a service to get the discussion started. For each item, I’ve provided a description of what that item is, whether it is optional or not, the visibility of that item (public, consumers only, service manager only, etc…). Please contribute your thoughts on other attributes that could/should be captured along with its optionality (is that a word?) and visibility.

| Attribute | Description | Required | Visibility |
| --- | --- | --- | --- |
| Name | Human-readable name of the service | Yes | Public |
| Description | Human-readable description of what the service does | Yes | Public |
| Owner/Manager | The person accountable (in the RACI sense) for the service. At a minimum, this is the person to contact in order to begin using the service. | Yes | Public (1) |
| Interface Type (or should it be types?) | The technical interface type, such as SOAP, REST, POX/HTTP, etc. | Yes | Public |
| Internal/External | Is the service exposed internally, externally, or both? | Yes | Public (2) |
| Service Type | Taxonomy classification for purposes of mapping to technology platform | Yes | Internal Only |
| Production WSDL URL | URL for the production WSDL (required for Web Services) | No * | Consumers |
| Deployment Platform | On which logical platform is the service hosted? | No * | Internal Only |
| Deployment Location | What is the physical location(s) of the service? Preferably, this should be a link into the CMDB. | No * | Internal Only |
| Test Plan/Scripts | A link to a test plan or specific test scripts for the service, as provided by the provider | No * | Internal Only |
| Performance Profile | The expected resource utilization of the service | No * | Internal Only |
| Development Cost | The cost incurred in creating the service | No * | Internal Only |
| Estimated Integration Cost | Expected cost for consumers to integrate service usage | No * | Internal Only |
| Current ROI | Current development ROI based upon development cost, cost to integrate, and current number of consumers | No * | Internal Only |
| Status | Status of the service (planned, in development, in production, decommissioned) | Yes | See note (3) |
| Version | The version of the service associated with this record | Yes | Public |
| Created Date | The date this record was created | Yes | Internal Only |
| Modified Date | The date this record was last modified | Yes | Internal Only |

(1) Question: Should the owner be public, or only visible to registered consumers? A registry/repository could facilitate interaction with a potential consumer without publicly revealing the owner’s name.
(2) Note: External users can only see services exposed externally.
(3) The visibility of this attribute is directly tied to the state. For internal services, status is open to the public. For external services, a service should only be visible if it is in production.

Of course, now that I’ve attempted to put this list down with some simple attributes, I’ve realized that whether or not an attribute is required or visible to particular parties depends on the status of the service, whether it is exposed externally or not, the interface type, etc. It’s just hard to make that fit into an HTML table and still have this entry be readable. Anyway, if there isn’t anything proprietary or confidential about the structure of your service records, consider sharing it here. I promise to publish the end result of this effort here for all to share for free. This isn’t limited to Web Services, either. If you’re using REST, what information would you provide about the collection of resources that comprise the service to potential users of those services? I would guess that many of the above attributes would still apply, and could certainly be accessed themselves through a REST interface, since a service record is a resource in and of itself.

Thanks for your participation! If you’d prefer to send me your information directly without publicly posting it here, send me an email at todd at biske dot com or you can send me a direct message on twitter at toddbiske.

RIAs and Portals

In a RIA Weekly podcast, Michael Coté and Ryan Stewart had a brief conversation on the role of RIAs in portals. They didn’t go into much detail on it, but it was enough to get me noodling on the subject.

In the past, I’ve commented on the role of widgets/gadgets ala Apple’s Dashboard and Vista’s Sidebar and how I felt there was some significant potential there. To date, I haven’t seen any “killer app” on the Mac side (I have no idea about Vista given that I don’t use it at home or at work). One thing that I found curious, however, was that when I went looking for a decent Twitter client for the Mac, there was no shortage of dashboard widgets, but actually very few desktop apps. I wound up choosing Twirl initially, and am now using TweetDeck. Both of these are Adobe AIR applications.

So what does this have to do with portals? Well, my own view is that your desktop is a portal. A portal should contain easy access to all of the things you need to do your job. The problem with desktops today, however, is that the typical application is so bloated that the startup/quit process is very unproductive, and if you leave them open all the time, you need dual monitors (or a really big monitor) and a boatload of memory (even though most of it isn’t getting used). For this reason, I still really like the idea of these small, single-purpose widgets that do one thing really well. The problem right now, however, is that Dashboard and Sidebar fall into the out-of-sight/out-of-mind category. I want my Twitter client in a visible portion of my desktop at all times, or at least with the ability to post a visual notification somewhere. If I leverage a Dashboard widget, it’s invisible to me unless I hit a function key. It’s out-of-band by intent, and there are things that belong there. That being said, the organizational features of Dashboard could easily be applied to the desktop as well. If I had a bunch of lightweight widgets that I used to do the bulk of my work always available on my desktop, that would be great. It had better perform better than the current set of applications that I have set to start at login, however.

Where does RIA fit in? I don’t know that I’d need portability from my desktop to a browser-based portal environment. I’m sure there are people out there who do everything they need to do on a daily basis via Firefox and a whole bunch of plugins. I’ve never tried it, nor do I have any interest in doing so, but for people in that camp, common technology between a desktop portal and a browser-based portal could be a good thing. For me, the primary interest is simply getting a set of lightweight tools for 80% of my day-to-day tasks that aren’t bloated with stuff I don’t need. I thought a bit about portability of my desktop environment across machines (i.e., the same TweetDeck columns at work and at home), but I think that’s more dependent on these widgets storing their data in the cloud than on storing the definition of my desktop in the cloud, though that might be of interest as well.

The gist of all of this is that I do believe there are big opportunities out there to make our interaction with our information systems more efficient. Can RIAs play a role? Absolutely, but only if we focus on keeping them very lightweight, and very usable.

Conferences for Enterprise Architects

Brenda Michelson asked the blogosphere, “What does a ‘would & could attend’ IT conference look like?” In her post, she suggested some items required for establishing initial interest (i.e., things that make us say, “I would like to attend that”), including credible speakers, compelling topics, peer interaction, immersive experience, participatory programs, etc. She then called out some constraints that come into play when answering whether or not we could attend, including cost, proximity, dates, etc. The premise is that finding the right intersection of attributes creates the “would & could attend.”

First, let me describe why I attend conferences. I don’t normally use conferences to learn about new areas. Instead, I go to conferences to extend my knowledge in an area. Sometimes it may be an effort to go from “100-level” knowledge to “200-level,” and sometimes it may be in areas where I know a lot, and I’m just hoping to find some nugget through sharing experiences. Given that, the conference sessions that interest me the most are almost always ones that involve a panel of practitioners. By practitioners, I mean corporate IT employees, not consultants, analysts, or vendors. This doesn’t mean that I don’t think consultants, analysts, and vendors have anything good to contribute; it just means that their presentations have less potential value for me. While any speaker should view the effort as a marketing opportunity, it obviously has more of an impact on the bottom line for consultants, analysts, and vendors. A practitioner must understand that their speaking has an impact on recruiting efforts for their employer; however, it’s typically not a primary concern, and it’s unlikely that anyone is tracking the number of recruiting leads that came out of the speaking engagement. The practitioner is there to share best practices and hopefully engage in conversations with peers about their efforts in the same space. Unfortunately, these sessions are frequently few and far between.

Another factor that comes into play on the “would” portion is the agenda. I’ve never attended an “un-conference,” and I think this would be a bit more difficult to pull off in the EA space than in the general development space. I’m not against the concept, but I think you need a very strong base of people committed to ensuring that conversations on interesting topics will happen. My experience with items in the middle, like birds-of-a-feather sessions, is similar. Unless there’s someone in the discussion committed to keeping the conversation going, the sessions are duds. At the same time, there’s a risk that such a person becomes the sole presenter. A facilitator who ensures that discussion, rather than presentation, happens is critical. I’d err on the side of having defined topics and pre-planned questions, but structuring the sessions to allow lots of time for interaction. Here, the moderator/facilitator is key. If the audience isn’t willing to participate, the facilitator must fill the time with relevant questions. This is a big risk, because for every one person I find who is willing to share experiences, there are probably 10 or 20 who are only interested in receiving, whether due to their own personality, level of knowledge, restrictive information sharing policies of their employer, or one of many other reasons.

The other challenge is that someone needs to pay for all of this. Practitioners don’t have a marketing budget to fund IT conferences the way a vendor, consultant, or analyst firm might. As a result, I think you’re more likely to find these types of conversations through local user groups; however, the issue I have with those is that they always occur during evenings, time I spend with my family. I’d rather be doing this during my work hours, as these conferences are work-related. Additionally, unless you work in a very big city, there may not be enough participants to sustain the discussion. I live and work in the St. Louis metro area, and there are still many large organizations here that don’t have an EA practice, so sustaining something at a local level would be difficult. Therefore, I’m willing to sacrifice some portion of the conference time to vendor, analyst, or consultant presentations that would offset the costs to me. That being said, I’d like to see at least 50% of the sessions come from practitioners, and I’d be willing to give up frills (meals, conference schwag, evening entertainment, etc.) to keep that balance.

As for other factors like location, dates, and costs, all of them have been less of a decision factor for me. Obviously, in today’s economy, the cheaper the better, and it’s always nice when I can consider bringing my family along and let them be entertained by the area while I go learn things, but it usually all comes down to whether or not I’m going to learn something and have some facilitated interaction with my peers. By the way, I also think that so-called “networking sessions” where people are grouped at a meal according to their industry vertical or some other attribute don’t cut it. While you may have a good conversation about the weather at the conference site or current events, and may meet some nice people, those conversations are unlikely to result in information sharing relevant to the conference topic unless someone steps in as a facilitator.

Note: I just read James McGovern’s response to Brenda’s post, and I like his idea of a “Hot Seat” question. I would have no problem being asked questions without knowing the questions in advance, with the appropriate restrictions on discussing intellectual property and keeping questions on the topic at hand.

Jack van Hoof Reviews my SOA Governance Book

Jack van Hoof posted a review of my SOA Governance book on his SOA and EDA blog. In it, he states:

Reading this book felt like taking a hot shower. As professional architects, we all understand what Todd has written (or don’t we?). But owning one handy book of hardly 200 pages with all those thoughts structured and combined at an appropriate level of understanding feels like possessing a jewel.

Thanks for the review, Jack. You can read his full review here.

Great iPhone 3G Car Stereo

A break from the enterprise IT posts with this one. Since I did quite a bit of googling on this prior to Christmas without great results, I wanted to make sure I posted an entry about my new car stereo, the Pioneer DEH-P4000UB. It comes with a USB port (accessible via a cable that’s threaded into your glove compartment) and is iPod-compatible. Even better, it’s also iPhone 3G compatible, although your iPhone will initially report that the connected device may not be supported and will ask if you want to go into airplane mode. Answer no. From there, you can play and charge your iPhone 3G through the car stereo with far better quality than an FM transmitter or a cassette adapter. You can control playback through the stereo controls, the stereo’s remote control, or the iPhone itself. I’ve been very, very pleased with the unit. My only complaint is that the “universal controller” knob on the stereo is very non-intuitive, so you’ll need to read the owner’s manual to figure out how to preset radio stations, etc. I found the iPod/iPhone integration easier to navigate with the remote control than with the universal controller, but I tend to just use the iPhone’s controls to choose a podcast.

So, if you’re out there looking for a new head unit for your car and want to be able to charge your iPhone from it and play back your music or podcasts, take a look at the Pioneer DEH-P4000UB. There’s a video review on Crutchfield, and you can buy it at Best Buy or Amazon.com. This head unit does not include Bluetooth capabilities, but I’m pretty sure Pioneer sells a Bluetooth add-on for it. I can’t comment on that, since I didn’t get it. Hopefully, this review will help others looking for a car stereo that works natively with the iPhone 3G, as I was.

Disclaimer
This blog represents my own personal views, and not those of my employer or any third party. Any use of the material in articles, whitepapers, blogs, etc. must be attributed to me alone without any reference to my employer. Use of my employer’s name is NOT authorized.