More on Music eReader

With the added attention on eReaders following Amazon’s Kindle announcement, I thought I’d actually do a little research and see if an eReader for sheet music exists. It turns out there is one, the MusicPad Pro Plus from Freehand Systems. Unfortunately, it costs $899, which seems horribly overpriced to me, especially for a device with only 192MB of memory. You could buy a cheap laptop for less than this. While it has a touch screen and some features specific to reading sheet music, that still doesn’t justify its costing two to three times as much as the book eReaders from Amazon and Sony. While some professional musicians could afford this, as shown in their case studies, it doesn’t work for a Sunday singer like me. I’ll at least keep an eye on it and hope the price comes down.

Great TechNation Podcast

I found this podcast fascinating. Dr. Moira Gunn speaks with Sandra Blakeslee about the body maps our brain creates and shares some very interesting anecdotes regarding virtual reality, out-of-body experiences, phantom limbs, and even treatment for anorexia. I plan on getting Sandra’s book after hearing this interview.

SOA and Communications

Nortel and IBM recently announced joint technology for integrating business applications with communication services (InfoWorld article, SearchSOA article). Personally, I’m glad to see this announcement. I first had conversations around communications services with a colleague back in early 2006. He was looking at the future state communications infrastructure and came to me wanting to know how to make sure it would fit in with SOA. I had never thought about this before, but it made perfect sense. Communications, clearly, is a capability, so why shouldn’t those technical capabilities be exposed as services? Kudos to Nortel for putting out a press release about this; really emphasizing how this can play in a company’s SOA is a win in my book. While I’m sure other vendors in the communications space also have these capabilities, they’re not emphasized. As a result, it creates an atmosphere for more silo-based thinking around point-to-point integrations, rather than thinking about how this capability fits into the broader collection of enterprise capabilities.

The field of communications would also probably make a great case study or research project. If you had tried to define communications services 10 years ago, you’d have come up with a very different collection than you would today. Would a presence service even have been mentioned? Would it have been voice only, or would it have involved text/instant messaging, email, and/or video as well? It certainly makes the case for active service lifecycle management versus defining the services once and then moving on to the next project. As you define your service domains, you have to recognize that the definitions of those domains, and even the collection of domains themselves, will change.
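
As a purely hypothetical illustration of what “communications as a service” could look like, here’s a minimal PHP sketch of a presence service exposed over SOAP. The class name, method, status values, and URI are all my own assumptions for illustration, not anything from the Nortel/IBM announcement.

    <?php
    // Hypothetical presence service: the names and status values below
    // are illustrative assumptions, not a real vendor API.
    class PresenceService
    {
        // Return the current presence status for a user.
        public function getPresence($userId)
        {
            // A real implementation would query the communications
            // infrastructure; here it is stubbed out.
            $stubbed = array('tbiske' => 'available');
            return isset($stubbed[$userId]) ? $stubbed[$userId] : 'offline';
        }
    }

    // Expose the capability using PHP's built-in SOAP extension in
    // non-WSDL mode (the URI is just an illustrative namespace).
    $server = new SoapServer(null, array('uri' => 'urn:example:presence'));
    $server->setClass('PresenceService');
    $server->handle();

Once a capability like presence is exposed this way, it becomes part of the broader collection of enterprise capabilities rather than a point-to-point integration detail.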

The Long Tail of Applications

I recently had a conversation with Ron Schmelzer of ZapThink, and we started talking about how the nature of the entry point for enterprise users to interact with information technology will change in the future. You’ll notice that I didn’t use the term “application” in that sentence, and there’s a reason for that. Personally, I want to get rid of it. To me, it implies a monolith. It’s a collection of functionality that by its very nature goes against the notion of agility. When I look at a future state where we’re leveraging BPM, SOA, and workflow technology, I see very small, lightweight entry points that are short and to the point. I’ve mentioned this before in connection with Vista Gadgets or Mac OS X Dashboard Widgets.

Ron brought up a ZapFlash that came out over a year ago that he wrote called “SOA: Enabling the Long Tail of IT.” I didn’t make the connection at the time, but it makes perfect sense now. In the ZapFlash, Ron describes the “Long Tail” this way:

The Long Tail, a term first coined and popularized by Chris Anderson, refers to the economic phenomenon where products that are of interest to only small communities, and thus result in low demand and low sales volume, can collectively result in a large aggregate market. This large collection of small markets can significantly exceed the more traditional market that the most popular and high volume sales items can generate. For example, Amazon.com generates more business in aggregate from its millions of books that each only sell a few copies than they do from the top 100 best sellers that might each sell tens of thousands of units.
One quick way of summing up the Long Tail is by saying that there’s more opportunity in catering to a mass of niche markets than a niche of mass markets. Large enterprises in particular are composed of masses of such niches, operating in different geographies and business units, catering to specific demographics with tailored solutions to meet the needs of all constituents. And yet, the centralized IT organization that serves the needs of the entire organization is typically woefully unprepared to serve these masses of niches: large numbers of users with widely varying IT needs. How, then, can IT support the needs shared in common with all the business groups without overextending its centralized resource to meet the specific needs of each of the individual groups?

Fundamentally, we’re both talking about the same thing. What I describe as very lightweight user-facing entry points are the “long tail” of applications. They’re small, niche solutions that get the job done. Underlying all of this is a robust SOA that enables these solutions while remaining loosely coupled from the user-facing needs. If you think about it, the long tail of application development today is the business user using Excel, because they could get done what they needed quickly. I’ve even done this myself, and even progressed to setting up a simple database to do a bit more. We shouldn’t be on a quest to squash these out, but rather figure out how to enable them in a manageable way. The problem is not that somebody’s Excel macro pulling data out of Oracle exists; the problem is that we’re not aware that it exists. Clearly, someone had a need to put it together, and if we can find a way to enable this so that we’re aware of it and our systems support it easily, even better. Personally, I think the technologies we have at our disposal today are on track for making this a reality.
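
To make the idea concrete, here’s a sketch of what one of those long-tail entry points might look like: a few lines of PHP consuming a shared service instead of a spreadsheet macro hitting Oracle directly. The service URL and XML shape are hypothetical assumptions of mine.

    <?php
    // A "long tail" entry point: a tiny niche solution built on top of a
    // shared service. The URL and XML structure are made up for illustration.
    $url = 'http://services.example.com/orders/summary?region=midwest';
    $response = file_get_contents($url);
    if ($response === false) {
        die("Could not reach the order summary service.\n");
    }
    $xml = simplexml_load_string($response);
    if ($xml === false) {
        die("Could not parse the order summary response.\n");
    }
    foreach ($xml->order as $order) {
        printf("%s\t%s\t%0.2f\n",
            $order->id, $order->customer, (float) $order->total);
    }

The point is that the entry point stays small and disposable; the durable asset is the service underneath it.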

Followup on VP of SOA

I received two comments already on the Driving SOA post: one from Jason at ZapThink and one from Mike Kavis via his blog. Mike picked up on my suggestion that many of the responsibilities belong with the Chief Architect and correctly called out that this can be a lot of work to juggle. That being said, there’s absolutely no reason that the Chief Architect can’t delegate responsibilities to people on his or her team. Of course, if there is no architecture team, or if the architects are matrixed in from other organizations such that they have less-than-enterprise oversight, then this won’t work. This gets back to a point I made at the end of that post: is the organization biting off more than it can chew? This question needs to be asked before another body is brought in.

Jason specifically had some comments on the separation of responsibilities between the Chief Architect and the COO. He stated:

But a key point I’m making is that this person should be responsible for both the business process and architecture leadership. By separating the process responsibilities and assigning them to the COO, many organizations maintain the IT-centricity of their architecture efforts, which I see as a problem. That’s what I’m trying to address by combining the roles.

This is a good point, and if this is occurring in your organization, I think it indicates that the business/IT relationship is not where it needs to be. If the business and IT are all operating as “the business,” where they are seen as colleagues, peers, and partners, rather than IT being an order taker, I think the separation of responsibilities makes sense. If this partnership does not exist, an organizational change is one way of addressing it, but again, it is debatable (as we’re doing!) whether this approach will be successful. I’ve always said that organizations can either hinder or help large-scale initiatives. What I’ve also found is that organizational changes take a long time to make up for a lack of trust in a company. The people have to be committed to working as partners, and if they aren’t, putting someone above them doesn’t always fix the problem unless there’s a big stick involved.

More on Oslo

In the latest BriefingsDirect SOA Insights Edition, Dana Gardner, Jim Kobielus, Neil Macehiter, and Joe McKendrick discussed, among other things, Microsoft’s recent announcements. The conversation started off very similarly to some of my own comments on the subject, with the same sense of deja vu. Neil Macehiter made a great point, however, that shows this isn’t simply a rehash of model-driven architecture. He stated:

…they are actually encompassing management into this modeling framework, and they’re planning to support some standards around things like the service modeling language (SML), which will allow the transition from development through to operations. So, this is actually about the model driven life cycle.

This reminded me of my trip to Redmond in 2005 for the Microsoft Technology Summit. At the summit, we were shown an internal tool, I think from the Patterns & Practices group, that presented a deployment model of a solution. I recall a number of us going, “we want that.” If Microsoft has taken steps to integrate these models into the development and run-time management tooling, this is an excellent step, and certainly something beyond the typical model-driven development of the BPM suites. At a minimum, these capabilities should be enough for people to track the ongoing progress of the Oslo effort.

The second thing that came up, which again was consistent with some recent blogs of mine (see Registries, Repositories, and Bears, oh my! and Is Metadata the center of the SOA technology universe?), was the discussion around the metadata repository at the heart of Microsoft’s strategy. Dana pointed out that “there really aren’t any standards for unifying modeling or repository for various models,” with some comments from Neil that this is very ambitious. First, I’d have to say that Microsoft trumped IBM on this one. Remember when IBM announced WebSphere Registry Repository and stated that they’d be coming out with their own standards for communication with it? They were slammed by many analysts. Microsoft, rather than trying to operate in the narrow space of the SOA registry/repository, is talking about the importance of metadata in general. The breadth of models and associated metadata involved in the full IT product lifecycle (development and management) is far broader than what is typically discussed in the SOA space. As a result, there are no standards that cover this completely, so the lack of standards-based integration is a non-issue, and Neil nails it by saying Microsoft is trying to get out in front of the metadata federation problem and drive others to comply with what they do.

Give the entire podcast a listen here, or read the transcript here.

Driving SOA

Jason Bloomberg of ZapThink, in their latest ZapFlash, put a new spin on their old concept of the SOA champion and put forth a job description for a VP of SOA. While he certainly suggested a good base salary for the position, I question whether a permanent, executive position around SOA makes sense.

If you look at the job responsibilities listed, the question I ask is not whether these tasks are needed, but rather, whether a dedicated person is needed for all of them. Let’s look at a few of them:

Provide executive-level management leadership to all architecture efforts across the enterprise. The directors of Business Architecture, Enterprise Architecture, Technical Architecture, Data Architecture, and Network Architecture will all be your direct reports.
Don’t we already have this? It sounds like a Chief Architect to me.

Drive all Business Process Management (BPM) initiatives enterprisewide. Coordinate with process specialists across all lines of business, and drive architectural approaches to business process.
Again, to me, this sounds like the responsibility of the COO.

Establish and drive a SOA governance board that incorporates the existing IT governance board.
This is the only one that I simply disagree with. If we’re speaking in terms of IT governance as defined in Peter Weill’s book, I think the IT Governance Board should be factoring SOA strategy and governance into their decisions. That is, IT governance subsumes SOA governance, not vice versa. Of course, there are also aspects of SOA governance that are implemented at a much lower level within projects to ensure consistency of service definitions, etc. Once again, however, this should be handled by the existing technology governance processes. We’re merely adding some additional criteria for them to apply to projects.

Establish and lead the SOA Center of Excellence across the enterprise to pull together and establish core architectural best practices for the entire organization. Develop and enforce a mandate for conformance to such best practices.
This gets into the technical governance I just mentioned. I certainly agree that a SOA COE can be a good thing, but you certainly don’t need a new position just to manage it. Why wouldn’t the Chief Architect or a delegate lead the COE as part of their day-to-day responsibilities?

Manage a budget that is not tied to individual line-of-business projects, but is rather earmarked for cross-departmental SOA/BPM initiatives that drive business value for the enterprise as a whole.
The question here is whether or not the current organizational structure prevents cross-departmental initiatives from being funded and managed properly. It implies that the individual LOBs will be more concerned about their own needs than those of the broader enterprise. If the corporate governance principles and goals have stated that more shared, cross-cutting technologies are needed, then it’s really a governance problem. While an organizational change can assist, you still need to ensure that LOB managers are making decisions in line with the corporate goals, rather than those of their individual LOBs.

Work closely with the VP of Project Management to insure close cooperation between architecture and project management teams, and to improve project management policies.
Once again, why can’t this be done by the existing architectural leadership?

There were additional items, but my thoughts kept coming back to “shouldn’t someone already be doing this?” If the answer to this is “no,” then you must ask yourself whether or not you’re really committed to SOA adoption. If you feel that you are, but don’t have anyone assigned to these responsibilities, does the creation of a new position make sense? For example, if the organization struggles with getting LOB managers to produce cross-departmental solutions, will throwing one more person into the mix fix the problem, or just add another point of view?

As with many of the ZapThink ZapFlashes, there’s always a bit of controversy but lots of goodness. In this case, the articulation of the responsibilities associated with managing SOA adoption is excellent. Do you need a new position to do it? As with any organizational decision, it depends. If you are committed to SOA, but can’t make it happen simply because all of the possible candidates are simply too busy to take on these new responsibilities, then it might make sense (although you’ll probably need to add staff elsewhere, too, if people really are that busy). If you’re trying to use the position to resolve competing priorities where there isn’t universal agreement on what the right thing to do for the enterprise is, a new position may not resolve it.

Music eReader

I was just reading a review of the new Sony Reader, and it reminded me of what I’d really like to see: an eReader for sheet music. I sing in a church choir every Sunday and lug two large choral books, plus a paperback choral book, with me. It never fails, however, that my father-in-law will pick out a song that morning for which I have an octavo with parts, but that octavo is sitting in a file cabinet back home. Personally, I would love to be able to simply download octavos/sheet music, stick them in the memory of an eReader, and just carry that to church every Sunday. It should be able to store far more songs than the typical hymnal, and always allow me to have the music I need.

There are two challenges with this given what I read about the Sony technology. First, the two-plus-second delay to turn a page would be unacceptable. When you’re singing, you can’t pause for two seconds. Second, the device is not music aware. That is, if I’m singing from an octavo, the refrain may be on the second or third page, and there may be a repeat symbol that sends me back to it from page 6 or 7. While I’ve never used one, I’m guessing that the Sony Reader primarily supports single page turns. I would need something where I could simply tap on the repeat symbol and have it automatically jump back to the page where it needs to go. Of course, the more likely solution would be to simply store music without repeats (i.e., cut and paste the refrain to everywhere it appears), because there’s really no way for the eReader to know what verse you happen to be singing. Without knowing that, it wouldn’t know when to take first or second endings, for example. I’m sure I could work around this as the technology evolves, though.
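
For what it’s worth, the navigation metadata such a device would need doesn’t seem that complicated. Here’s a minimal PHP sketch; the structure and field names are entirely my own invention.

    <?php
    // Sketch of the navigation map a music-aware eReader might carry for
    // each piece. The structure and names are invented for illustration.
    $repeats = array(
        // tapping the repeat symbol on page 6 jumps back to the refrain on page 2
        array('symbolPage' => 6, 'targetPage' => 2),
        array('symbolPage' => 7, 'targetPage' => 3),
    );

    function pageForTap($repeats, $currentPage)
    {
        foreach ($repeats as $repeat) {
            if ($repeat['symbolPage'] === $currentPage) {
                return $repeat['targetPage'];
            }
        }
        return $currentPage + 1; // no repeat here: an ordinary page turn
    }

    echo pageForTap($repeats, 6), "\n"; // prints 2

Knowing which ending to take would still require knowing the verse, as I said, but simple repeats could be handled with a map like this.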

So, I’m giving this idea away in the hopes that someone may read this and actually try to build such a device. It would certainly be worth it to me, and I’m guessing there are a lot of Sunday church singers who would find it valuable as well!

Gift idea: Neuros MPEG 4 Recorder

It’s holiday shopping season, so I thought I would give some kudos to a product I picked up earlier this year, the Neuros MPEG 4 Recorder. It’s a small device that accepts composite video (sorry, no hi-def) via the included cable and encodes it to an MPEG-4 file on a CF or SD card. I then simply drag the file into iTunes for playback on my iPhone or iPod. While I was initially using it for video on plane trips, now I use it for video while I churn away on the elliptical machine in the fitness center. My DVR doesn’t allow access to its files, so this fills the gap. I considered getting an EyeTV, but in my house, the satellite DVRs are the “master video source.” The only glitch I had is that UltraII CF cards don’t work well. Once I switched to a Kingston CF card, it worked like a charm. Check it out at Amazon or at Neuros Technology.

Build it yourself?

Robert Annett had a good post asking the question, “Do you need to build it yourself?” He states:

I’ve worked in many organisations and one thing that never ceases to amaze me is how often teams build unnecessary tools and frameworks … As a software/systems architect it’s your job to deliver value to the business you work for. Whenever someone suggests creating a bespoke tool ask yourself the following:

  • Can I buy something to do this?
  • Can I use or tailor an open source product?
  • Can I use or create a template for a standard tool like Eclipse, Word or Excel?

I think this is a challenge for corporate IT departments. I managed a frameworks team at a previous employer, and it’s extremely difficult to figure out where to invest your time. It was a continual guessing game: what would the vendors add in the next release, what would take off in the open source community, and what would be woefully underserved by third-party products, whether open source or commercial. It’s not limited to low-level frameworks, either, although the higher you go up toward business applications, the fewer choices there are and the more they cost.

Every company is going to be different in how it approaches this problem, and a lot of it is dependent on company size. If there are IT resources to burn, you may be able to afford having a frameworks team, potentially even allowing developers to contribute to open source frameworks on company time. If IT resources are scarce, however, you have to balance the cost of a third-party product against the cost of pulling resources away from business solutions where custom coding is required.

I expect that to many of my readers this is nothing new, but I do think a reminder now and then is good. Our responsibility is to deliver the best business solutions at appropriate costs, whether that means something off the shelf or something custom built. It is our job to be good corporate stewards and do our jobs wisely.

Funding SOA

At the upcoming Gartner Application Architecture, Development and Integration Summit, I’ll be part of a panel discussion on funding SOA. I’ve previously posted on this in this entry, but I thought I’d bring it up again with the upcoming conference.

I’m very interested in hearing the experience of others on this topic. While there’s a lot of discussion about funding models, I have yet to run into an organization that has had to actually implement one of these models. More often than not, I’ve seen one of two things:

  1. A program of large enough scale where a number of services will be created with some in use by more than one consumer
  2. Project-level SOA where a single project develops both the consumer and the service

There’s nothing wrong with either of these, but the thing to note is that these efforts did not require any change to the way funding of IT efforts occurs.

In discussing this with some colleagues, it seems that changes in how projects are funded really only come about when reuse gets involved. In many ways, I feel that it is no different than dealing with shared (reused) infrastructure, except that it is a bit more difficult to figure out how to partition the responsibilities.

A key question in all of this is how many services will be reused. If only 5% or 10% of services are reused, it is hard to justify changes to the funding model. But is this a chicken-and-egg scenario? Perhaps it is the way IT projects are defined in the first place that hampers the organization’s ability to identify services with reuse potential.

The point of this is that we’re still in the very early stages of SOA adoption. My sample base is very small, so I’m very interested in whether what I’m seeing is just an artifact of that, or if SOA funding is still a topic ahead of its time. This topic came up in a recent SOA Consortium call, and in order to reach a broader sampling base, I assisted in the development of a survey on the topic. It’s relatively high level and shouldn’t take very long to complete. I, and the other members of the SOA Consortium, would certainly appreciate the input. It is intended for corporate SOA practitioners. You can access it here. Thanks!

Oslo, DSLs, MDD, and more…

It’s amazing how long it can take for some things to become a reality. Back in the corn fields of central Illinois in the early 90’s, I was in graduate school working for a professor who was researching visual programming languages. While the project was focused on building tools for visual programming languages, and not as much on visual programming itself, it certainly was interesting. Here we are, almost 15 years later, and what’s the latest news from Microsoft? Visual programming languages. Okay, they’re calling it model driven development (which isn’t new either).

It will be very interesting to see if Microsoft can succeed in this endeavor. I suspect that they will, simply because Microsoft has a unique relationship with their development community, something that none of the Java players do. While there were competing tools for quite a few years, you’d be hard pressed to find an organization doing Microsoft development that isn’t using Visual Studio. You’d also be hard pressed to find one that isn’t leveraging the .NET framework. While the Java community has Eclipse, there’s still enough variation in the environment, through the extensive plugin network, that it’s a different beast. So, if Microsoft emphasizes model driven development in Visual Studio, it’s safe to say that a good number of developers will follow suit. As a point of comparison, BEA did the same thing over five years ago when they introduced WebLogic Workshop in February of 2002. This article stated:

BEA is calling WebLogic Workshop the first integrated application development environment with visual interfaces to Java and J2EE… “We’re radically changing the way people develop applications,” said Alfred Chuang, president and CEO of BEA… Chuang said WebLogic Workshop could improve the application development and deployment cycle by as much as 100 times.

Hmmm… I’m getting a sense of deja vu. I’m hopeful that Microsoft can achieve better success than BEA did. In point of fact, I don’t think BEA’s struggles had anything to do with the quality of their offering. At the time, I had someone working for me look into Workshop. I was very surprised when the answer was not, “This is just a bunch of fluff, we need to keep writing the code,” but instead was, “Wow, this really did improve my productivity.” Unfortunately, many developers will take their text-based editor to the grave with them so they can continue to write code.

In a similar vein, Phil Windley had a post on domain specific languages or DSLs. He points out that he is “a big believer in notation. Using the right notation to describe and think about a problem is a powerful tool.” Further on, he states, “GPLs (General Purpose Languages) are, by definition, general. They are designed to solve a wide host of problems outside the domain I care about. As a result, expressing something I can put in a line of code in a DSL takes a dozen in most GPLs.” Now Phil focuses on textual notations in his post, but I’d argue that there’s no reason that the notation can’t be symbolic. It may make the exercise of writing a parser more difficult, but then again, I’m pretty sure that the work I did back in grad school wound up having some in-memory representation of what was graphically shown in the editor that would have been pretty easy to parse. Of course, the example I used in my thesis wound up being music notation, which didn’t have such an internal representation, but I’m getting off the topic. If there are any grad students reading this blog, I think a parser for music notation is an interesting problem because it doesn’t follow the more procedural, flow chart like approach that we all too often see.

Anyway, back to the point. I agree 100% with Phil that a custom notation can make a smaller subset of programming tasks far more efficient. So much of what we do in corporate IT is just data-in/data-out operations, and a language tuned for this, whether graphical or textual, can help. The trend right now is toward graphical tools, toward model driven development. There will always be a need for GPLs, and we’ll continue to have them. But, given the smaller subset of typical corporate IT problems, it’s about time we start making things more efficient through DSLs and model driven development. Let’s just hope it doesn’t take another 15 years.
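
To illustrate Phil’s dozen-lines-versus-one point, here’s a toy textual DSL for that data-in/data-out mapping work, with an interpreter in a few lines of PHP. The syntax is invented purely for illustration.

    <?php
    // A toy mapping DSL: each line of the script replaces the dozen lines
    // of general-purpose code it expands to. The syntax is invented.
    $script = "copy customer.name  -> invoice.billTo\n" .
              "copy customer.email -> invoice.contact\n";

    $source = array('customer' => array(
        'name' => 'Todd', 'email' => 'todd@example.com'));
    $target = array();

    foreach (explode("\n", trim($script)) as $line) {
        if (preg_match('/^copy\s+(\w+)\.(\w+)\s*->\s*(\w+)\.(\w+)$/',
                trim($line), $m)) {
            $target[$m[3]][$m[4]] = $source[$m[1]][$m[2]];
        }
    }

    print_r($target); // invoice: billTo => Todd, contact => todd@example.com

The notation fits the problem, and the general-purpose machinery hides behind the interpreter; that’s the whole argument in miniature.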

Constrain, Mediate, or both?

InfoWorld published a collection of end-user stories on SOA this week, and the discussion on Leapfrog Enterprises caught my attention. In the article, Galen Gruman states that:

Leapfrog had many of the same goals that typify a typical SOA initiative: greater reuse of code, faster development time, and easier integration. But the company did not want to approach SOA as simply a changing of the guard for development tools and integration platforms. Instead it wanted to free its developers from conforming to a platform’s idea of best practices so they could focus on the applications’ functionality and use a wide range of development technologies as best for each job.

[Eugene] Ciurana [director of systems infrastructure for Leapfrog] did not want to force developers to all use the same transport. “The transport doesn’t matter,” he says. He chose to use the open source Mule ESB as a messaging backbone, relying on it to deal with transport interfaces. That way, “developers could focus as little as possible on the implementation of services,” he explains. Instead, their focus is on the functionality they are trying to achieve. The result is that developers tend to use HTTP as their transport mechanism, but some use REST (Representational State Transfer) and SOAP — “whatever works best or they’re most comfortable in.”

This caught my attention because it appears to be contrary to what I’ve seen at other organizations. Typically, the organization wants to constrain the platform to ease the integration problem and get away from the notion of “any-to-any” integration hubs. This may just be my misinterpretation, but it does raise an interesting question: how many constraints should be put in place? Interestingly, I’ve yet to run into an organization that has had to drive adoption of XML-based integration via enterprise architecture. The developers have slowly migrated away from whatever distributed object technology they were using and toward XML. The bigger challenge has been whether or not the XML contained the right information for broad consumption, not whether or not XML was used. That being said, many EA teams are focused on the latter constraint (when to use XML or not).

Knowing that there’s only so much that can be governed, what are the critical factors that EA teams should make sure we get right, as compared to the broad spectrum of things that we could be governing? Is it worth the pain to create policies regarding SOAP/HTTP versus XML/HTTP? The approach draws parallels to the big government/small government discussions that go on between the Democrats and the Republicans. The right answer is very dependent on the culture within IT, as I’ve stated often in this blog. Personally, I’m not a big fan of the any-to-any integration approach. That being said, I also recognize that not everything is going to adhere to the constraints.

I think it’s important to differentiate between your connectivity (mediation) infrastructure and your service enablement activities. Connectivity is about connecting two parties that adhere to the constraints for communication that have been adopted by the organization. Unless there’s only one way to communicate, there will be a need for mediation, for example, HTTP to JMS bridging. It’s important that the set of technologies be small, however. Service enablement is about taking something that doesn’t adhere to the standards and exposing it in a way that it does. We should strive to reduce the amount of service enablement over time, but the need for connectivity and mediation will always be there.
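
To sketch what “the transport doesn’t matter” can look like in code (not Mule’s actual API, just an illustration with names of my own choosing), service logic can be written against a small transport interface, with mediation living behind it:

    <?php
    // Illustrative only: the interface and class names below are mine,
    // not Mule's API. Service logic codes against MessageTransport, and
    // mediation (HTTP today, a queue tomorrow) hides behind it.
    interface MessageTransport
    {
        public function send($destination, $payload);
    }

    class HttpTransport implements MessageTransport
    {
        public function send($destination, $payload)
        {
            $context = stream_context_create(array('http' => array(
                'method'  => 'POST',
                'header'  => "Content-Type: text/xml\r\n",
                'content' => $payload,
            )));
            return file_get_contents($destination, false, $context);
        }
    }

    // A JMS-style bridge would simply be another implementation of the
    // same interface; the calling code would not change at all.
    $transport = new HttpTransport();
    $transport->send('http://services.example.com/orders', '<order id="42"/>');

The constraint worth governing is the size of that set of transport implementations, not what each developer types on the other side of the interface.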

Book Review: SOA and WS-BPEL

I was recently sent a copy of “SOA and WS-BPEL” by Yuri Vasiliev and Packt Publishing for review. The book is subtitled “Composing Service-Oriented Solutions with PHP and ActiveBPEL.” After reading the book, I’d say the subtitle is much more accurate than the primary title. The book is first and foremost a guide to constructing Web Services using the SOAP extension for PHP and building BPEL processes using ActiveBPEL. Secondarily, there is a discussion of the principles behind SOA and WS-BPEL. Clearly, the right audience for this book is the developer community. If you’re removed from day-to-day coding, the book may not be as valuable.

If you’re looking for hands-on examples, this book has plenty of them. It includes all of the building blocks necessary, from building your first service in PHP to creating an orchestrated process using BPEL. I felt that there was more emphasis on the coding efforts than necessary, however, and not enough on some of the theory behind it. This was evident in some of the early examples. In the sections on PHP, the examples result in a service that stores an XML representation of a purchase order in a database. The examples in the book take the incoming XML, create a PHP array representation of it, then convert it back to a DOM representation for storage in the database. While I do not know whether this approach was due to a limitation of the SOAP extension, as an architect, it left me shaking my head. If the service is simply a pass-through to a database, there’s no reason to take all that time to parse the XML, bind it to some internal data structure, and then turn that internal data structure back into XML. Continuing on, the example then added XML schema validation to the mix, but it was performed by a stored procedure in the database. If you understand the role XML Schema validation plays in security, odds are that validation will have occurred long before the message ever reaches the back-end database.
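
For contrast, here is the kind of pass-through I would have expected, sketched under the assumption that the service really is just storing the document. The DSN, credentials, and table name are made-up assumptions of mine, not from the book.

    <?php
    // A pass-through sketch: store the raw XML as received rather than
    // round-tripping it through PHP arrays and back to DOM. The DSN,
    // credentials, and table name are made up for illustration.
    $rawXml = file_get_contents('php://input'); // the request body as-is

    $db = new PDO('mysql:host=localhost;dbname=orders', 'user', 'secret');
    $stmt = $db->prepare('INSERT INTO purchase_orders (document) VALUES (?)');
    $stmt->execute(array($rawXml));

If downstream consumers need the data shredded into relational form, that’s a different service contract; a pure pass-through shouldn’t pay the parsing tax.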

The section on WS-BPEL followed a similar vein. The bulk of the book was simply focused on walking you through the steps necessary to perform the actions in ActiveBPEL, rather than on building a sound understanding of WS-BPEL. There were pages upon pages of instructions on what menus to select, items to click, etc. I was very surprised at the lack of screenshots or graphical representations of the processes in the book. More often than not, they recommended using the Source tab in ActiveBPEL to compare the BPEL document in the book to what should have appeared after going through all the actions. My personal view on BPEL is that I don’t ever want to see it. I want to leverage the modeling capabilities of any BPM tool, and let the tool worry about BPEL behind the scenes.

Overall, for a book heavily focused on the developer community using PHP and ActiveBPEL (somewhat of a narrow audience, in my opinion), it’s certainly a good walkthrough to get your feet wet. It’s not going to give you the architectural skills you need, but it will move you through a lot of material very quickly. If you’re a quick study and just want some solid examples, you may find this a decent investment. For someone looking for more theory and architectural principles behind SOA and WS-BPEL, I’d probably look elsewhere.

Disclosure: This book was provided to me at no cost for the purposes of reviewing it. If you’re interested in having me review a book, please contact me at todd at biske dot com.

Registries, Repositories, and Bears, oh my!

Okay, no bears, sorry. I read a post from my good friend Jeff Schneider regarding SAP’s Enterprise Service Repository (ESR). He states:

At the core of the SAP SOA story is the Enterprise Service Repository (ESR). It is actually a combination of both registry and repository. The registry is a UDDI 3.0 implementation and has been tested to integrate with other registries such as Systinet. But the bulk of the work is in their repository. Unlike other commercial repositories, the first thing to notice is that SAP’s is pre-populated (full, not empty). It contains gobs of information on global data types, schemas, wsdl’s and similar artifacts relating to the SAP modules.

This now brings registry/repository into the mix of infrastructure products that SAP customers must make decisions regarding adoption and placement. Do they leverage what SAP provides, or do they go with more neutral products from a pure infrastructure provider such as BEA, HP, SOA Software, or SoftwareAG/WebMethods? The interesting thing with this particular space is that it’s not as simple as picking one. Jeff points out that the SAP ESR comes pre-populated with “gobs of information” on assets from the SAP modules. Choose something else, and this metadata goes away.

I hope that this may bring some much needed attention to the metadata integration/federation space. It’s not just a need to integrate across these competing products, but also a need to integrate with other metadata systems such as configuration management databases and development lifecycle solutions (Maven, Rational, Subversion, etc.). I called this Master Metadata Management in a previous post.

Back when Gartner was pushing the concept of the ESB heavily, I remember an opening keynote from Roy Schulte (I think) at a Web Services Summit in late 2005. He was emphasizing that an organization would have many ESBs that would need to interoperate. At this point, I don’t think that need is as critical as the need for our metadata systems to interoperate. You have to expect that as vendors of more vertical/business solutions start to expose their capabilities as services, they are likely to come with their own registry/repository containing their metadata, especially since there’s no standard way to simply include this metadata with a distribution and easily import it into a standalone registry/repository. It would be great to see some pressure from the end-user community to start making some of this happen.


Disclaimer
This blog represents my own personal views, and not those of my employer or any third party. Any use of the material in articles, whitepapers, blogs, etc. must be attributed to me alone without any reference to my employer. Use of my employers name is NOT authorized.