Archive for the ‘SOA’ Category

External Events in Action

I received a press release in email from Xignite entitled “Partnership Delivers Financial Professionals Responsiveness, Collaboration Via Timely Earnings Data.” In this release Xignite announced their partnership with Wall Street Horizon, a provider of earnings event and calendar information to the investment industry. Xignite will redistribute Horizon’s earnings and events calendar content as part of its street-event driven series on-demand financial web service.

While I normally don’t try to be a recycler of press releases from vendors, as I’d much rather comment on things more directly associated with my work as a practicing architect, I’d be very happy to see more and more of these types of releases. Why? In the past, I’ve talked about the importance of events, such as in this post. One of the challenges, however, is that I don’t really feel that there are good sources of events, especially ones that come in from outside of the enterprise (although there are times that I think that outside sources are more likely than internal sources…). Here’s a press release that shows that external sources are appearing and, through partnerships, trying to increase their audience. It would be great if some of the industry consortiums for specific verticals would develop some standards in the event space.

Podcast on RIA and more

Another Briefings Direct SOA Insights podcast has been posted by Dana Gardner in which I’m a panelist. In this edition, Dana, Joe McKendrick, independent blogger Barb Darrow, and I discussed the role of RIA and rich media with SOA and the impact of associated technologies, such as Flash, AJAX, and Silverlight, on the space. You can find a full transcript here or listen to it here. You can also subscribe via iTunes.

Back in the High Life

Okay, well maybe not the “High Life”, but I’ve had that Steve Winwood song in my head. On Monday, I am returning back to corporate life after nearly a year with MomentumSI. In a nutshell, a year as a consultant has shown me that the corporate world is where I am most comfortable, and best suited for my career goals. MomentumSI treated me very well, and I’m very impressed with their team and their offerings. I learned a lot from the excellent team that they have, and do plan on keeping in touch with them, offering insight from the corporate practitioner’s perspective as they continue their success. I certainly thank Jeff, Alex, Tom, and the rest of the MomentumSI team for the opportunity.

I’m not going to reveal where I’m going, other than to say that it’s a Fortune 500 company in the St. Louis Metro area where I reside, and I’m not returning to A.G. Edwards/Wachovia (AGE isn’t a Fortune 500 company, anyway). I’ll be an enterprise architect, involved with SOA and other cool architecture topics. While I’m sure people will figure out where I’m working, this blog represents my own personal thoughts and opinions, and not those of my employer or anyone else (and there’s a disclaimer on the right hand side of the blog that states exactly that). I’m very happy that I’m going somewhere that doesn’t mind that I’m a blogger, and I fully intend to adhere to their policies regarding it. So, it’s back to the world of big IT and corporate politics for me, and I’m looking forward to it. While my colleague James McGovern has lamented the lack of corporate EA bloggers in the past, he can add me back to the list!

Is it about the technology or not?

Courtesy of Nick Gall, this post from Andrew McAfee was brought to my attention. Andrew discusses a phrase which many of us have either heard or used, especially in discussions about SOA: “It’s not about the technology.” He posits that there are two meanings behind this statement:

  1. “The correct-but-bland meaning is ‘It’s not about the technology alone.’ In other words a piece of technology will not spontaneously or independently start delivering value, generating benefits, and doing precisely what its deployers want it to do.”
  2. “The other meaning … is ‘The details of this technology can be ignored for the purposes of this discussion.’ If true, this is great news for every generalist, because it means that they don’t need to take time to familiarize themselves with any aspect of the technology in question. They can just treat it as a black box that will convert specified inputs into specified outputs if installed correctly.”

In his post, Nick Gall states that discussions that are operating around the second meaning are “‘aspirational’ — the entire focus is on architectural goals without the slightest consideration of whether such goals are realistically achievable given current technology trends. However, if you try to shift the conversation from aspirations to how to achieve them, then you will inevitably hear the mantra ‘SOA is not about technology.'”

So is SOA about the technology or not? Nick mentions the Yahoo SOA group, of which I’m a member. The list is known for many debates on WS-* versus REST and even some Jini discussions. I don’t normally jump into some of these technology debates not because the technology doesn’t matter, but because I view these as implementation decisions that must be chosen based upon your desired capabilities and the relative priorities of those capabilities. Anne Thomas Manes makes a similar point in her response to these blogs.

As an example, back in 2006, the debate around SOA technology was centered squarely on the ESB. I gave a presentation on the subject of SOA infrastructure at Burton Group’s Catalyst conference that summer which discussed the overlapping product domains for “in the middle” infrastructure, including ESBs. I specifically crafted my message to get people to think about the capabilities and operational model first, determine their priorities, and then go about picking the technology. If your desired capabilities are focused on run-time operations (as opposed to a development activity like orchestration), and your developers are heavily involved with the run-time operations of your systems, technologies that are very developer-focused, such as most ESBs, may be your best option. If your developers are removed from run-time operations, you may want a more operations-focused tool, such as a WSM or XML appliance product.
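
To make the capabilities-first approach a bit more concrete, here is a minimal Java sketch of weighing candidate “in the middle” product categories against prioritized capabilities. The capability names, weights, and coverage scores are purely hypothetical assumptions for illustration; the point is only that your priorities, not the product label, should drive the choice.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical capability names, weights, and coverage scores, for illustration only.
    public class CapabilityScoring {
        public static void main(String[] args) {
            // Your priorities: how much each capability matters to you (higher = more important).
            Map<String, Integer> priorities = new LinkedHashMap<>();
            priorities.put("runtime-routing", 5);
            priorities.put("monitoring-visibility", 4);
            priorities.put("orchestration", 2);
            priorities.put("developer-tooling", 1);

            // How well each candidate category covers each capability (0-5), again just assumed numbers.
            Map<String, Map<String, Integer>> candidates = new LinkedHashMap<>();
            candidates.put("ESB", Map.of(
                    "runtime-routing", 4, "monitoring-visibility", 2,
                    "orchestration", 5, "developer-tooling", 5));
            candidates.put("WSM/XML appliance", Map.of(
                    "runtime-routing", 5, "monitoring-visibility", 5,
                    "orchestration", 1, "developer-tooling", 2));

            candidates.forEach((name, coverage) -> {
                int score = priorities.entrySet().stream()
                        .mapToInt(p -> p.getValue() * coverage.getOrDefault(p.getKey(), 0))
                        .sum();
                System.out.println(name + " weighted score: " + score);
            });
        }
    }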

This is just one example, but I think it illustrates the message. Clearly, making statements that flat out ignore the technology is fraught with risk. Likewise, going deep on the technology without a clear understanding of the organization’s needs and culture is equally risky. You need balance. If your enterprise architects fall into Nick’s “aspirational” category, they need to get off their high horse and work with the engineers who are involved with the technology to understand what things are possible today and what things aren’t. They need to be involved with the inevitable trade-offs that arise with technology decisions. If you don’t have enterprise architects, and instead have engineers with deep technical knowledge trying to push technology solutions into the enterprise, they need to be challenged to justify those solutions, beginning with a discussion of the capabilities provided, not the technology providing them. Only after agreement on the capabilities should we enter a discussion on why a particular technology is the right one.

Composite Applications

Brandon Satrom posted some of his thoughts on the need for a composite application framework, or CAF, on his blog and specifically called me out as someone from whom he’d like to hear a response. I’ll certainly oblige, as inter-blog conversations are one of the reasons I do this.

Brandon’s posted two excerpts from the document he’s working on, here and here. The first document tries to frame up the need for composition, while the second document goes far deeper into the discussion around what a composite application is in the first place.

I’m not going to focus on the need for composition for one very simple reason. If we look at the definition presented in the second post, as well as articulated by Mike Walker in his followup post, composite applications are ones which leverage functionality from other applications or services. If this is the case, shouldn’t every application we build be a composite application? There are vendors out there who market “Composite Application Builders,” which can largely be described as EAI tools focused on the presentation tier. They contain some form of adapter for third-party applications and legacy systems that allows functionality to be accessed from a presentation tier, rather than serving as a general-purpose service enablement tool. Certainly, there are enterprises that have a need for such a tool. My own opinion, however, is that this type of approach is a tactical band-aid. By jumping to the presentation tier, there’s a risk that these integrations are all done from a tactical perspective, rather than taking a step back and figuring out what services need to be exposed by your existing applications, completely separate from the construction of any particular user-facing application.

So, if you agree with me that all applications will be composite applications, then what we need is not a Composite Application Framework, but a Composition Framework. It’s a subtle difference, but it gets us away from the notion of tactical application integration and toward the strategic notion of composition simply being part of how we build new user-facing systems. When I think about this, I still wind up breaking it into two domains. The first is how to allow user-facing applications to easily consume services. Again, in my opinion, there’s not much different here from the things you need to do to make services easily consumable, regardless of whether the consumer is user-facing or not. The assumption needs to be that a consumer is likely to be using more than one service, and that they’ll have a need to share some amount of data across those services. If the data is represented differently in those services, we create work for the consumer. The consumer must translate and transform the data from one representation to one or more additional representations. If this is a common pattern for all consumers, this logic will be repeated over and over. If our services all expose their information in a consistent manner, we can minimize the amount of translation and transformation logic in the consumer, and implement it once in the provider. Great concept, but also a very difficult problem. That’s why I use the term consistent, rather than standard. A single messaging schema for all data is a standard, and by definition consistent, but I don’t think I’ll get too many arguments that coming up with that one standard is an extremely difficult, and some might say impossible, task.
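
To illustrate the consistency point, here is a small Java sketch. The services and field names are hypothetical. When two providers expose the same customer concept differently, every consumer ends up carrying translation logic like the normalize method below; a consistent representation pushes that work back to the provider, where it is implemented once.

    import java.util.Map;

    // Hypothetical services and field names, for illustration only.
    public class RepresentationExample {

        // What a billing service might return for a customer.
        static Map<String, String> fromBilling() {
            return Map.of("cust_no", "10042", "cust_nm", "Jane Doe");
        }

        // What a shipping service might return for the same customer.
        static Map<String, String> fromShipping() {
            return Map.of("customerId", "10042", "fullName", "Jane Doe");
        }

        // The translation and transformation work every consumer repeats when
        // representations are inconsistent.
        static Map<String, String> normalize(Map<String, String> raw) {
            String id = raw.containsKey("cust_no") ? raw.get("cust_no") : raw.get("customerId");
            String name = raw.containsKey("cust_nm") ? raw.get("cust_nm") : raw.get("fullName");
            return Map.of("customerId", id, "name", name);
        }

        public static void main(String[] args) {
            System.out.println(normalize(fromBilling()));
            System.out.println(normalize(fromShipping()));
        }
    }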

Beyond this, what other needs are there that are specific to user-facing consumers? Certainly, there are technology decisions that must be considered. What’s the framework you use for building user-facing systems? Are you leveraging portal technology? Is everything web-based? Are you using AJAX? Flash? Is everything desktop-based using .NET and Windows Presentation Foundation? All of these things have an impact on how your services that are targeted for use by the presentation tier must be exposed, and therefore must be factored into your composition framework. Beyond this, however, it really comes down to an understanding of how applications are going to be used. I discussed this a bit in my Integration at the Desktop posts (here and here). The key question is whether or not you want a framework that facilitates inter-application communication on the desktop, or whether you want to deal with things in a point-to-point manner as they arise. The only way to know is to understand your users, not through a one-time analysis, but through continuous communication, so you can know whether or not a need exists today, and whether or not a need is coming in the near future. Any framework we put in place is largely about building infrastructure. Building infrastructure is not easy. You want to build it in advance of need, but sometimes gauging that need is difficult. Case in point: Lambert St. Louis International Airport has a brand new runway that essentially sits unused. Between the time the project was funded and completed, TWA was purchased by American Airlines, half of the flights in and out were cut, Sept. 11th happened, etc. The needs changed. They have great infrastructure, but no one to use it. Building an extensive composition framework at the presentation tier must factor in the applications that your users currently leverage, the increased use of collaboration and workflow technology, the things that the users do on their own through Excel, web-based tools, and anything else they can find, how their job function is changing according to business needs and goals, and much more.

So, my recommendations in this space would be:

  1. Start with consistency of data representations. This has benefits for both service-to-service integration, as well as UI-to-service integration.
  2. Understand the technologies used to build user-facing applications, and ensure that your services are easily consumable by those technologies.
  3. Understand your users and continually assess the need for a generalized inter-application communication framework. Be sure you know how you’ll go from a standard way of supporting point-to-point communication to a broader communication framework if and when the need becomes concrete.

SOA and ROI

I finally decided to post regarding the Nucleus Research/KnowledgeStorm study that many of the SOA bloggers have been commenting about. In the InfoWorld article by Paul Krill, David O’Connell, senior analyst at Nucleus, is quoted as saying, “Only a minority of companies are getting a return on investment of SOA.”

Like many others, I don’t put a lot of faith in anything that tries to associate ROI and SOA. ROI should be addressed at the business initiative level, i.e. something with quantifiable business benefit. Opening up a new store location has quantifiable business benefits. The elimination of certain paperwork associated with an employee hiring process can have quantifiable business benefits. SOA isn’t a project, but rather it’s a way of approaching the technical solutions within a project. I can apply the principles of SOA to the automation of employee hiring processes. I can apply the principles of SOA to the technology pieces associated with opening a new store.

If we want to narrow the discussion to something closer to where SOA can have an impact, we can look at development costs. As has been stated by others, however, this really only deals with the area of reuse. How do you capture the ability of IT to work closely with the business and save time on analysis because of the better mutual understanding of how technology and business come together? It should be reflected in lower development costs and quicker time to market. At this level, however, things get fuzzy. Ultimately, the more important point is whether or not business value is being produced. The development cost of the technology is just one piece of the puzzle in implementing a business solution. IT should always be striving to improve itself and continue to bring those costs down. Do you even know what these costs are? Are you collecting metrics about your development processes to know whether things are getting better or worse? Even if you want to attach ROI to SOA, if you don’t have the before and after numbers, all you’re getting is someone’s gut feeling.

This entry isn’t meant to say that we should simply ignore ROI and just do SOA because it’s the right thing to do. That sort of blind adoption can be just as damaging as not doing anything. The point is that discussions around ROI at the executive level should be about business benefits. You can’t just define and run an SOA project. You run business projects and apply SOA within them. The ROI for the executives is based upon the ROI of the business project, of which SOA is just one piece.

Providing good service

Beth Gold-Bernstein had a great post entitled “The Second S in SaaS” that outlined her experience in trying to get a backup restored from an online survey site.

This is clearly important when you’re dealing with external service providers, but I’d like to add that it is equally important for the services that you build in house. The typical large enterprise today is rife with politics, with various organizations battling for control, whether they realize it or not. SOA strikes fear into the heart of many a project manager because the success of their effort is now dependent on some other team. Ultimately, however, success is not defined by getting the project done on time and on budget; success can only be determined by meeting the business goals that justified the project in the first place. If something goes wrong, what’s the easiest course of action? Point the finger at the elements that were outside of your control.

I experienced this many times over when rolling out some new web service infrastructure at an organization. Teams building services were required to use it, and whenever something went wrong, it was the first thing that was blamed, usually without any root cause analysis. Fortunately, I knew that in order to provide good service for the teams that were leveraging this new infrastructure, I needed to be on top of it. I usually knew about problems with services before they did, and because the infrastructure put in place increased visibility, it was very easy to show that it wasn’t the new infrastructure, and in fact, the new infrastructure provided the information necessary to point to where the problem really was. Interestingly, this infrastructure was in the middle, between the consumer and the provider. Arguably, the teams responsible for the services should be looking at the same information I was, and be on top of these problems before some user calls up and says it’s broken.
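
As a rough sketch of the kind of visibility that made this possible, here is a minimal Java wrapper for an “in the middle” intermediary that records which consumer called which service, how long the call took, and whether it failed. The consumer and service names are hypothetical, and a real deployment would feed these records to a monitoring console rather than standard output.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.function.Supplier;

    // Illustrative sketch only: an intermediary wrapper that captures per-request
    // visibility (consumer, service, elapsed time, outcome) for every invocation.
    public class VisibilityInterceptor {

        public static <T> T invoke(String consumer, String service, Supplier<T> call) {
            Instant start = Instant.now();
            try {
                T result = call.get();
                log(consumer, service, start, "OK");
                return result;
            } catch (RuntimeException e) {
                log(consumer, service, start, "FAULT: " + e.getMessage());
                throw e; // the intermediary surfaces the failure, it doesn't hide it
            }
        }

        private static void log(String consumer, String service, Instant start, String outcome) {
            long millis = Duration.between(start, Instant.now()).toMillis();
            // In practice this would feed a monitoring console; println keeps the sketch self-contained.
            System.out.printf("consumer=%s service=%s elapsedMs=%d outcome=%s%n",
                    consumer, service, millis, outcome);
        }

        public static void main(String[] args) {
            invoke("order-entry-ui", "CustomerLookupService", () -> "customer record");
        }
    }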

If you simply put services into production and ignore them until the fire alarm goes off, you’re going to continue to struggle in achieving higher levels of success with SOA adoption, whether you’re a SaaS provider or a service developer inside the enterprise.

The bad side of SOA?

You never know what Google alerts will turn up. I got this headline:

SOA Instructors Jailed for Involvement in Colombian Drug Cartel

Boy, you’d think there was enough demand for SOA that they wouldn’t have to turn to illegal activities. 🙂 For the record, this has nothing to do with the SOA that stands for Service Oriented Architecture. This SOA is the School Of the Americas. If you’re interested, here’s the link.

Is this an “enterprise” service?

A conversation that I’ve seen in many organizations is around the notion of an “enterprise” service. Personally, I think these conversations tend to be a fruitless exercise and are more indicative of a resistance to change. I thought I’d expound on this here and see what others think.

My arguments against trying to distinguish between “enterprise” services and “non-enterprise” services are these:

Classifications are based upon knowledge at hand. Invariably, discussions around this topic always come back to someone saying, “My application is the only one that will use this service.” The correct statement is, “My application is the only one I know of today that will use this service.” The natural followup, then, is whether the team has actually tried to figure out whether anyone else will use that service. Odds are they haven’t, because the bulk of projects are still driven from a user-facing application and are constrained from the get-go to only think about what occurs within the boundary of that particular solution. So, in the absence of information that could actually lead to an informed decision, it’s very unlikely that anything will be deemed “enterprise.”

What difference will it make? If someone claims that their service is not an enterprise service, does that really give them tacit permission to do whatever they want? A theme around SOA is that it approaches things with a “design for change” rather than a “design to last” philosophy. If we follow this philosophy, it shouldn’t matter whether we have one known consumer or ten known consumers. I think that good architecture and design practices should lead to the same solution, regardless of whether something is classified as “enterprise” or “not enterprise.” Does it really make sense to put a service into production without the ability to capture key usage metrics just because we only know of one consumer? I can point to many projects that struggled when a problem occurred because the system was a big black box without visibility into what was going on. If there’s no difference in the desired end result, then these classifications only serve to create debate on when someone can bend or break the rules.

What I’ve always recommended is that organizations should assume that all services are enterprise services, and design them as such. If it turns out a service only has one consumer, so what? You won’t incur any rework if only one consumer uses it. The increased visibility through standard management and the potential cost reductions associated with maintaining consistency across services will provide benefits. If you assume the opposite, and require justification, then you’re at risk of more work than necessary when consumer number two comes along.

It’s certainly true that some analysis of the application portfolio can help create a services blueprint and can lead to a higher degree of confidence in the number of consumers a service might have. This can be an expensive process, however, and projects will still be occurring while the analysis takes place. Ultimately, the only thing that will definitively answer whether a service is “enterprise” or not is time. I’d rather set my organization up for potential success from the very beginning than operate in a reactionary mode when it’s definitive that a service has multiple consumers. What do others think?

SOA in the home

I’ve previously posted on SOA in the home. Well, Peter Rhys Jenkins of IBM is doing it. I heard him speak once in St. Louis and he was very entertaining. Anyway, here’s the article on what he’s doing in his house.

SOA India

In his daily links post for the 21st, James McGovern made mention of SOA India 2007 and suggested that they should have gotten me as a speaker. For the record, I was invited to speak at SOA India, but I had one major reason to decline. The conference is over Thanksgiving here in the USA. More so than any other holiday, families get together for Thanksgiving and mine is no exception. I certainly appreciated the invitation, however.

On a slightly related note, in the future these conferences should look to technology to bring in speakers. Flying is expensive, and there aren’t too many conferences that are willing to pay all of the expenses of their speakers. I would have no issue, however, with doing a video conference. While it’s not quite the same as seeing the actual person there, it’s got to be better than not getting speakers at all due to the travel involved.

Integration at the Desktop

One of my email alerts brought my attention to this article by Rich Seeley, titled “Desktop Integration: The last mile for SOA.” It was a brief discussion with Francis Carden, CEO of OpenSpan Inc. on their OpenSpan Platform. While the article was light on details, I took a glance at their web site, and it seems that the key to the whole thing is this component called the OpenSpan Integrator. Probably the best way to describe it is as a Desktop Service Bus. It can tap into the event bus of the underlying desktop OS. It can communicate with applications that have had capabilities exposed as services via the OpenSpan SOA Module, probably through the OpenSpan Studio interrogation capability. This piqued my interest, because it’s a concept that I thought about many years ago when working on an application that had to exist in a highly integrated desktop environment.

Let’s face it, the state of the art in desktop integration is still the clipboard metaphor. I cut or copy the information I want to share from one application to a clipboard, and then I paste it from the clipboard into the receiving application. In some cases, I may need to do this multiple times, once for each text field. Other “integrated” applications may have more advanced capabilities, typically a menu or button labeled “Send to ABC…” For a few select things, there are some standard services that are “advertised” by the operating system, such as sending email, although it’s likely that these are backed by operating system APIs put in place at development time. As an example, if I click on a mailto: URL on a web page, that’s picked up by the browser, which executes an API call to the underlying OS capabilities. The web page itself cannot publish a message to a bus on the OS that says, “Send an email to user joe@foobar.com with this text.” This is in contrast to a server-side bus where this could be done.
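
To make that contrast concrete, here is a purely hypothetical Java sketch of what a desktop service bus could look like: applications advertise capabilities such as “send-email,” and anything else on the desktop publishes requests to the bus rather than to a hard-wired API. No such OS-level bus exists as sketched here; the capability names and message fields are assumptions for illustration.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Consumer;

    // Hypothetical desktop service bus: applications advertise capabilities and
    // others publish requests without compile-time knowledge of the handler.
    public class DesktopBusSketch {

        private final Map<String, List<Consumer<Map<String, String>>>> handlers = new HashMap<>();

        // An application (e.g., a mail client) advertises that it handles "send-email".
        public void advertise(String capability, Consumer<Map<String, String>> handler) {
            handlers.computeIfAbsent(capability, k -> new ArrayList<>()).add(handler);
        }

        // Any other application publishes a request to the bus, not to a specific program.
        public void publish(String capability, Map<String, String> message) {
            handlers.getOrDefault(capability, List.of()).forEach(h -> h.accept(message));
        }

        public static void main(String[] args) {
            DesktopBusSketch bus = new DesktopBusSketch();
            bus.advertise("send-email", msg ->
                    System.out.println("Mail client composing message to " + msg.get("to")));
            // What a web page (or any application) might publish instead of relying on a mailto: handler.
            bus.publish("send-email", Map.of("to", "joe@foobar.com", "body", "this text"));
        }
    }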

On both the server side and the desktop, we have the big issue of not knowing ahead of time what services are available and how to represent the messages for interacting with them. While a dynamic lookup mechanism can handle the first half of the problem, the looming problem of constructing suitable messages still exists. This is still a development-time activity. Unfortunately, I would argue that the average user is still going to find an inefficient cut-and-paste approach less daunting than trying to use one of the desktop orchestration tools, such as Apple’s Automator, for something like this.

I think the need for better integration at the human interaction layer is even more important with the advances in mobile technology. For example, I’ve just started using the new iPhone interface for FaceBook. At present, there is no way for me to take photos from either the Photos application or the Camera application and have them uploaded to FaceBook. If this were a desktop application, it wouldn’t be much better, because the fallback is to launch a file browser and require the user to navigate to the photo. Anyone who’s tried to navigate the iPhoto hierarchy in the file system knows this is far from optimal. It would seem that the right way to approach this would be to have the device advertise Photo Query services that the FaceBook app could use. At the same time, it would be painful for FaceBook if they had to support a different Photo Query service for every mobile phone on the market.
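
Here is a small Java sketch of the kind of standard interface that idea implies; the interface and method names are hypothetical assumptions, not anything a real device exposes today. If every device advertised the same contract, a consuming application would code against the interface once instead of building a different integration per phone.

    import java.util.List;

    // Hypothetical standard contract a device could advertise for photo access.
    interface PhotoQueryService {
        List<String> listAlbums();
        List<byte[]> photosInAlbum(String album);
    }

    // One device's implementation; the consuming application codes only against the interface.
    class InMemoryPhotoService implements PhotoQueryService {
        public List<String> listAlbums() {
            return List.of("Camera Roll", "Vacation");
        }
        public List<byte[]> photosInAlbum(String album) {
            return List.of(new byte[] {1, 2, 3}); // placeholder image bytes
        }
    }

    public class PhotoQueryDemo {
        public static void main(String[] args) {
            PhotoQueryService device = new InMemoryPhotoService();
            device.listAlbums().forEach(System.out::println);
        }
    }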

The point of this post is to call some attention to the problem. What’s good for the world of the server side can also be good for the human interaction layer. Standard means of finding available services, standard interfaces for those services, etc. are what will make things better. Yes, there are significant security issues that would need to be tackled, especially when providing integration with web-based applications, but without a standard approach to integration, it’s hard to come up with a good security solution. We need to start thinking about all these devices as information sources, and ensuring that our approach to integration handles not just the server side efforts, but the last mile to the presentation devices as well.

Future of SOA Podcast available

I was a panelist for a discussion on the Future of SOA at The Open Group Enterprise Architecture Practitioner’s Conference in late July. The session was recorded and is now available as a podcast from Dana Gardner’s BriefingsDirect page. Please feel free to followup with me on any questions you may have after listening to it.

Revisiting Service Versioning

A fellow member of the SOA Consortium, Surekha Durvasula, EA Manager for Kohl’s, recently posted her perspective on service versioning on the SOA Consortium’s blog. At the end, she asked five questions, to which I thought I’d respond.

  1. How many versions of a service should be “live” at any given time? How many versions are too many?
    I’ve actually blogged on this one, so make sure you read this entry before reading the rest of my answer. I’ve yet to see this put into practice, however, for two reasons. First, many organizations are still focused on getting version one out the door, so the pressure isn’t on to solve this problem. Second, organizations are still learning how to manage the service consumer and service provider relationship. The approach I outline requires significant maturity in managing consumers and product lifecycles. In the typical project-based culture in IT, this level of maturity is hard to find. In the absence of maturity, I typically see an arbitrary number set (usually 3). Setting a limit certainly recognizes that versioning must be dealt with, but only time will tell whether that number is right or not.
  2. Do you use the service mediation layer to achieve backward compatibility?
    I’ve certainly advised companies that this can be done, but again, haven’t seen it put into practice yet. If we have a policy that says two or more versions must be able to co-exist in production, that implies that we have the ability to route to either of them. If we can do this, there shouldn’t be a need to leverage a mediation layer unless some consumer can’t change in time. This is why I think lots of companies are picking 3 as the “right” number of versions to support. It assumes that when version 2 comes out, consumers should be able to move to v2 by the time v3 is ready. If you choose to only have one version running in production (the latest), and leverage the mediation layer, you may eventually run into a type of change that isn’t easily mediated, at which point you need to fall back to having multiple versions running at once. Knowing this, I’d make sure I knew how to run multiple versions at the same time first, and then try to leverage mediation where it makes sense.
  3. Where do you determine the service version – the mediation layer, the service consumer, or a service provider?
    Let’s handle the easy one first. If you handle it at the service consumer side, you’re effectively saying that consumers must explicitly specify a version somehow. If this is the case, versioning effectively goes away; the explicit specification would treat a different version as if it were a different service. You could certainly leverage a mediation layer and apply some transformations to send a V1 request to V2, but eventually you won’t be able to mediate for backward compatibility. If the version desired is not explicit, then we have to handle it through a mediation layer or at the service provider. My personal preference is to leverage the mediation layer for this routing. Otherwise, your service implementation will be littered with logic to determine what version the request represents, map that to the appropriate implementation, and so on. I just don’t think that belongs in code. That belongs in a policy repository. If you externalize this, the service developer can just focus on building services (see the sketch after this list).
  4. Do you have governance models and policies around transitioning existing consumers to new versions? Who is responsible for executing that transition?
    Again, I’ve yet to personally see this in practice from an SOA standpoint. I have seen it put into practice from a shared infrastructure standpoint, however, such as rolling out a new version of an app server. Governance is critical. When the governance processes did not allow appropriate priority for these versioning efforts, it was a nightmare. It’s not just about funding for the upgrade; it’s also about funding the impacted consumers and ensuring they have the resources to do their part. Beyond that, the service provider must make the process as consumer-friendly as possible. Before that new version is ever put into production, the service team should have figured out exactly how their consumers will be impacted. In the infrastructure example, the team responsible for the app server worked with every single application hosted on the platform to make the process as painless as possible. They presented this approach when they requested funding for the effort, and the IT governance process made sure the impacted teams were on board before it was approved. As a result, it went very, very smoothly. Planning these upgrades is tricky, however, as unless things are properly coordinated and bundled, the entire IT project portfolio can get bogged down with independent versioning efforts. There does need to be some master planning over the whole portfolio.
  5. What is the most graceful way of “retiring” or “decommissioning” a service?
    If you’ve tackled the first question of how many versions, this shouldn’t be an issue. If you’re enforcing your policy, all consumers of V1 will have migrated to V2 or V3 before V4 goes in. V4 goes live, V1 is shut down. Of course, this is where management must come into play. Whether you perform authorization or not, all service requests must have identity attached to them. If you don’t, you’ll never know whether all the known consumers have migrated. There shouldn’t be any unknown consumers, and if there are, some process wasn’t followed long before this decommissioning effort. In addition to identity, you also need to know the usage profile. I had one consumer of a service that was only run once a quarter, as it was part of quarterly financial processing. If I relied on daily usage counts of 0 to judge when to decommission, this could be devastating for a consumer that is run once every 90 days.
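
Here is a minimal Java sketch combining the routing and usage points from questions 3 and 5. The services, versions, endpoints, and consumer identities are hypothetical; the intent is only to show version selection living in an externalized policy (here, just a map) rather than in service code, and every request carrying a consumer identity so usage per consumer and version can be examined before anything is decommissioned.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.atomic.AtomicLong;

    // Illustrative sketch, not a product feature: mediation-layer version routing
    // driven by externalized policy, with per-consumer usage tracking.
    public class VersionMediationSketch {

        // Externalized policy: which endpoint each live service version maps to (hypothetical URLs).
        private static final Map<String, String> ROUTES = Map.of(
                "CustomerService:v1", "http://internal/customer/v1",
                "CustomerService:v2", "http://internal/customer/v2");

        // Default version applied when a consumer does not explicitly request one.
        private static final String DEFAULT_VERSION = "v2";

        private static final Map<String, AtomicLong> usage = new ConcurrentHashMap<>();

        static String route(String service, String requestedVersion, String consumerId) {
            String version = requestedVersion != null ? requestedVersion : DEFAULT_VERSION;
            String endpoint = ROUTES.get(service + ":" + version);
            if (endpoint == null) {
                throw new IllegalArgumentException("No live route for " + service + " " + version);
            }
            // Record who used which version. Daily counts alone are not enough;
            // a quarterly consumer would look idle for 90 days.
            usage.computeIfAbsent(consumerId + "->" + service + ":" + version,
                    k -> new AtomicLong()).incrementAndGet();
            return endpoint;
        }

        public static void main(String[] args) {
            System.out.println(route("CustomerService", "v1", "quarterly-financials-batch"));
            System.out.println(route("CustomerService", null, "order-entry-ui"));
            System.out.println(usage);
        }
    }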

I look forward to the day where these conversations are applicable to the mainstream. Clearly, there are thought leaders and some companies that are already facing these issues. The majority will be following in the future, so now is the time to start thinking about this.

Latest SOA Insights Podcast

Dana Gardner has posted the latest episode of his Briefings Direct: SOA Insights series. In this episode, the panelists (Tony Baer, Jim Kobielus, Brad Shimmin, and myself) along with guest Jim Ricotta, VP and General Manager of Appliances at IBM, discuss SOA Appliances and the recent announcements around the BPEL4People specification.

This conversation was particularly enjoyable for me, as I’ve spent a lot of time understanding the XML appliance space. As I’ve blogged about in the past, there’s a natural convergence between software-based intermediaries like proxy servers and network appliances. I’ve learned a lot working with my networking and security counterparts in trying to come up with the right solution. The other part of the conversation, on BPEL4People, was also fun, given my interests in human-computer interaction. I encourage you to give it a listen, and feel free to send me any questions you may have, or suggestions for topics you’d like to see discussed.
