My Favorite UML Diagram Type
I was recently working on a presentation, putting together a diagram in PowerPoint, when it occurred to me that the diagram was quickly turning into something familiar: a UML context diagram. Why was this realization important enough that I decided to write a blog post about it? Looking back through my career, it’s not the diagram type I’ve used the most (that would be a sequence diagram), but it is the one I miss the most when it doesn’t exist.
A UML context diagram is really simple. You put one big oval in the middle that represents the entirety of the system to be created or changed in the effort, and everything that needs to interact with it is represented outside of the oval, with annotated lines indicating the interactions that happen.
There’s one simple reason why this is so important: boundaries. It immediately sets the context of what’s within the scope of the effort, and what isn’t. It makes it clear what data flows in and what data flows out. From an API perspective, the annotations on those lines are a great starting point for “jobs to be done” needed for solid API design following James Higginbotham’s ADDR process.
How many projects have you been on where boundaries like this weren’t clear, resulting in significant time spent debating what’s in scope, what isn’t, who’s going to do the work, and who isn’t? While it is designed as a project artifact, it’s also very useful as a system artifact, clearly showing what those original boundaries were. Given how long systems stick around these days, preserving that information can be really valuable.
So, when you find yourself in cycles of endless meetings where it seems like the same things get hashed out again and again, sometimes it’s good to step back and provide a little context.
Want a successful API program? Think like a product manager.
Kin Lane, the API Evangelist, had a really good post on maturing an API program, with the not-so-brief title of “I Have An API Deployed, And A Base Presence Established, What Can I Do To Help Me Get The Word Out?” You should definitely go read that because there’s some really good advice there.
What was very clear to me is that much of what Kin and others talk about is essentially turning your API into a product and applying the discipline of product management. Set goals, identify your prospects, create marketing material, highlight the success of your customers, understand your competitors, provide good support, etc. I think it’s important for the technical audience to understand that these concepts aren’t new, even though they might be new to the technical crowd. As I know from my own experience, we technologists will flock to new technology just because it’s a shiny new thing to try out. Unfortunately, that doesn’t make for a good product strategy. Just as a blog post of mine from long ago on communications suggested bringing a communications expert onto your IT team, it’s also a good idea to have someone with product management experience work with you on your API program efforts.
The one thing in Kin’s post that I had a slight disagreement with was his section on goals. While his goals were valid, these are really secondary goals to what is absolutely the number one goal: revenue. Now, I’ve read enough of his other posts that I know he gets this, but I don’t think it can be emphasized enough. I began my career in development and have always been on the IT side of the house, and for many, many reasons that I won’t go into in this post, too many people in IT really don’t understand the revenue models of their companies. So, if you don’t understand how your API program will impact revenue, go back and figure it out. You may be able to charge directly for API use and fund your own operations. It may be less direct revenue, such as how Walgreens’ photo APIs eventually result in revenue through in-store photo printing, rather than a fee for API use. Growth in new users might be great, but if there isn’t a revenue model, it will eventually become a cost sink. One only needs to look at the number of press releases about public APIs being shut down to understand the importance of this.
All in all, Kin’s post is really, really good. It calls out a number of specific things to do when your product is an API, so follow these things but also complement your efforts with some general purpose product management knowledge and you’ll be in a position to make good decisions.
API Design: Compartments
I’ve been reviewing the FHIR (Fast Healthcare Interoperability Resources, http://www.hl7.org/fhir) specification and they have an interesting concept called a compartment. Per the spec:
Each resource may belong to one or more logical compartments. A compartment is a logical grouping of resources which share a common property. Compartments have two principal roles:
- Function as an access mechanism for finding a set of related resources quickly
- Provide a definitional basis for applying access control to resources quickly
Let’s look at these statements one at a time. First, the compartment concept provides an access mechanism for finding related resources. One very common compartment in the specification is Patient. Other resources, like Condition, clearly have a relationship with Patient. So, if I want to find all conditions that a particular patient has, I actually have two paths for doing this.
- GET /Patient/[id]/Condition
- GET /Condition?patient=[id]
[id] is the unique identifier in question. In this case, both of these requests should return the same thing. But it’s not quite that simple. Take another resource, Communication, which deals with secure messages sent as part of patient care. In this case, we have:
- GET /Patient/[id]/Communication
- GET /Communication?subject=[id]
- GET /Communication?sender=[id]
- GET /Communication?recipient=[id]
The first example returns any communication that involves the identified patient, whether to, from, or about. The Communication-specific inquiries only allow inquiry by whichever attribute of the resource can hold a Patient identifier. It just so happens that in the earlier case, the relationship within Condition is represented in a patient attribute.
Independent of whether you think this is a good or bad thing, this approach where there are two ways of getting to the same resources creates a decision point for the organization. In a large enterprise, it’s entirely possible that the implementation for different resources may be handled by different teams. With two (or more) different ways of doing this, it creates the risk of two (or more) different implementations. It also creates a situation where a resource that can be a compartment needs to make sure that any time a new related resource is defined and implemented, they also need to make a modification to provide the compartment-based inquiry. Once again, if this is a separate team, this means coordination. Anyone who’s worked in an enterprise knows that the more teams that get involved, the more challenging it becomes.
These are not insurmountable difficulties by any stretch of the imagination. In the case of the implementation, the compartment resource should simply act like a façade and make the appropriate calls to the resource (i.e. the implementation of the first URL in the examples above simply turns around and makes the call(s) below them to complete the inquiry, such as Patient calling Condition, or Patient calling Communication). In the case of the coordination, that’s a matter of education and oversight to make sure it happens. The greater risk is probably that too many things get defined as a sub-structure within the compartment resource, rather than defined as standalone resources. This can be avoided by recognizing when a proposed resource has multiple compartments. Take the following requests:
- GET /Practitioner/[id]/Condition
- GET /Condition?asserter=[id]
These inquiries would give me a collection of all conditions that a particular practitioner has ever dealt with. If Condition weren’t a standalone resource, and instead a sub-structure within Patient, how would I go about forming this query? It can be done, but it’s probably not going to look as simple as what is shown above. This is where I see the hidden strength of this compartment concept. By recognizing where we can have multiple ways of organizing a particular collection of data and traversing relationships, we can then make good design decisions on what our resources should be.
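The façade approach described earlier can be sketched in plain code. This is a hypothetical illustration, not FHIR reference code: the data, function names, and routing are invented to show a compartment handler delegating to the standalone resource teams' search implementations rather than re-implementing the queries itself.

```python
# Each resource team owns its own search implementation, analogous to
# GET /Condition?patient=... and GET /Communication?subject=... endpoints.
CONDITIONS = [
    {"id": "c1", "patient": "p1", "asserter": "dr9"},
    {"id": "c2", "patient": "p2", "asserter": "dr9"},
]

COMMUNICATIONS = [
    {"id": "m1", "subject": "p1", "sender": "dr9", "recipient": "p1"},
    {"id": "m2", "subject": "p2", "sender": "p1", "recipient": "dr9"},
]

def search_conditions(**params):
    """Stand-in for the Condition team's GET /Condition?<param>=<value>."""
    return [c for c in CONDITIONS
            if all(c.get(k) == v for k, v in params.items())]

def search_communications(**params):
    """Stand-in for the Communication team's GET /Communication?<param>=<value>."""
    return [m for m in COMMUNICATIONS
            if all(m.get(k) == v for k, v in params.items())]

def patient_compartment(patient_id, resource_type):
    """Stand-in for GET /Patient/[id]/[resource]: the compartment façade.

    For Communication, the compartment means "any involvement", so the
    façade fans out across every patient-bearing attribute and de-dupes.
    """
    if resource_type == "Condition":
        return search_conditions(patient=patient_id)
    if resource_type == "Communication":
        seen, results = set(), []
        for param in ("subject", "sender", "recipient"):
            for comm in search_communications(**{param: patient_id}):
                if comm["id"] not in seen:
                    seen.add(comm["id"])
                    results.append(comm)
        return results
    raise ValueError(f"unknown resource type: {resource_type}")
```

The coordination risk discussed above shows up in code as the hard-coded fan-out list: when the Communication team adds a new patient-bearing attribute, the compartment owner has to know about it.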
Finally, FHIR also mentions that the compartment concept can also play a role in access control. I haven’t dug into this one as much, but I think it may have some potential. The challenge lies with data that really has multiple owners. As a patient, I may want to use an OAuth model to grant access to my health records to a mobile app I’ve downloaded. My doctor may want to do the same thing for an application he or she uses as part of my care. The compartment approach could give independent access paths for each of these channels with their own policies. Again, I need to give this one more thought, but I can definitely understand why HL7 put the bullet point about access control in their specification.
What are your thoughts about this notion of compartments? Good thing? Bad thing? Have you implemented a similar approach? What were the pros and cons of it? Let’s start the discussion.
API Design Challenges: Competing Demands
Working inside an enterprise is constantly a challenge to balance competing demands and chart the best course forward. Unfortunately, typical corporate IT culture is one where everything is ruled by project delivery metrics: on time and on budget. Based on behaviors I’ve observed, this results in two common things:
- Efforts to minimize the teams involved. More teams means more coordination, which creates risk to the schedule. If you think about it, concepts like microservices and two-pizza teams really stem from the fact that it’s very hard to control the output of a team of 20 to 40 people.
- Where multiple teams must be involved, each team will argue to minimize the work that they need to do (they don’t want to be perceived as the ones who blew the budget). I see it more frequently from the consuming side (e.g. user interface team saying, “Can’t you aggregate that data for me so I can just plop it in this table?”), but it can come from either side. Sometimes these questions will be masqueraded as “performance concerns”, even though in reality, we may just be shifting work (i.e. doing the exact same orchestrations, just out of different components that are owned by different teams).
While I could write a whole blog (and probably have) about the impact of focusing on project metrics, that’s not the point of this post. The fact is, these pressures exist and will continue to exist, so we need to have a plan on how to deal with them.
On the effort to minimize the teams involved, the first thing you must realize is that the decisions you make are about the next project, not this project. So, if you are an API developer, what design decisions can you make on this project such that when the next project comes along, they can simply use your API rather than having you involved in their project to modify the API?
To solve this problem, you need an API that is focused on breadth of use. To understand the breadth of use, don’t look at the demand side (what your consumers ask for), look at the supply side (what you have to offer). You can try to predict what consumers might need, but odds are you’ll be wrong. If, instead, you look at your backing information stores, and come up with an API that makes all of that information available, there’s no use case that won’t be covered. If you do this, you now empower consumers to get at what they need without your involvement (at least from a development perspective).
But what about performance?
This is where the competing demands can rear their ugly head, and it’s definitely more common in internal efforts. Why? Because most of the people in IT have some knowledge of what’s going on behind the scenes. They can ask the question, “why can’t I access your database directly?” because they know that database exists. Ask them this question: “If this were an Internet-based API, would you be asking the API provider to let you run SQL against their backing database?” Of course not. Having some insider knowledge can be a dangerous thing that can get us into traps where teams debate best performance versus good-enough performance. Will removing service layers and allowing direct database access yield faster applications? Probably, but at what cost and risk? Are you willing to put the integrity of your data on the line to do this? Are you willing to create the risk that some business change could force a change to all of those systems now hitting that database? There are more goals than just performance, and we need an approach that strikes the right balance on all goals, not one that over-achieves on performance alone. Remember, Internet-based APIs continue to power many innovative web applications with perfectly acceptable performance.
The answer to the breadth-of-use versus performance debate, in my opinion, is to have a two-layer API strategy. This is consistent with what James Higginbotham wrote about in his blog post on the Front-End/Back-End API Design conflict.
Layer one is an API focused on breadth of use, where design is all about exposing as much information as possible in a high-performing way. Keep in mind that you still need to define appropriate API boundaries, and the view of performance here is within your boundary of ownership. Know what things influence the performance of your API, and give your consumers control over those decisions. So, while your API may be capable of exposing all information from within your domain, that shouldn’t be the default behavior. If there’s a collection of information that is more expensive to retrieve, give the consumers the power to decide whether they want to incur that expense or not through a parameter in your API. I like to call this layer the Business API, but you may prefer Internal API.
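The "give consumers control" idea can be sketched as follows. This is a minimal, hypothetical illustration: the resource, field names, and `include` parameter are invented, standing in for something like `GET /orders/[id]?include=history,shipments` in a Business API.

```python
def fetch_history(order_id):
    """Stand-in for an expensive retrieval, e.g. walking an audit-log store."""
    return [{"event": "created"}, {"event": "approved"}]

def fetch_shipments(order_id):
    """Stand-in for an expensive call to a downstream carrier system."""
    return [{"carrier": "ACME", "status": "in transit"}]

def get_order(order_id, include=()):
    """Handler sketch for GET /orders/[id]?include=...

    The API *can* expose everything in the domain, but expensive
    collections are only retrieved when the caller opts in.
    """
    order = {"id": order_id, "status": "open"}  # cheap base retrieval
    if "history" in include:
        order["history"] = fetch_history(order_id)
    if "shipments" in include:
        order["shipments"] = fetch_shipments(order_id)
    return order
```

A consumer that just needs order status calls `get_order("o1")` and never pays for the audit-log walk; one that needs the full picture asks for it explicitly with `include={"history", "shipments"}`.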
Layer two is an API focused on context-specific use. I like to call this the Interaction API. While the most common use cases will be for a front end, that’s not the only case where context comes into play. Merely exposing something externally versus internally is a context change. While you may want an API for all of your information internally, you’re not going to want to expose all of that information externally. Additionally, aggregations across boundaries that can be done efficiently inside the firewall might not be so efficient from outside the firewall. The Interaction API is the place where optimizations geared toward more specific contexts are done, if they are needed. More often than not, this layer is needed, but don’t view it as mandatory. If what you have from your Business/Internal API is perfectly suitable for direct access by the end consumer, just use it.
The other facet of this layer is that it puts ownership of logic in the right spot. If it’s a particular front end that needs an API highly tailored toward their UI design that can be called out of the browser, let that team build it! The particular combination of data is probably unique to their situation, so let them build it on top of the Business API that has been provided to them.
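As a sketch of that ownership split, here is a hypothetical Interaction API function owned by the front-end team, tailoring Business API data into exactly the shape its UI table needs. The `business_*` functions are invented stand-ins for calls into the Business layer.

```python
def business_get_customer(customer_id):
    """Stand-in for a Business API call, e.g. GET /customers/[id]."""
    return {"id": customer_id, "first": "Ada", "last": "Lovelace"}

def business_get_orders(customer_id):
    """Stand-in for a Business API call, e.g. GET /customers/[id]/orders."""
    return [{"id": "o1", "total": 120.0}, {"id": "o2", "total": 80.0}]

def ui_customer_summary(customer_id):
    """Interaction API: the UI-specific aggregation lives with the UI team,
    not with the provider of the underlying Business APIs."""
    customer = business_get_customer(customer_id)
    orders = business_get_orders(customer_id)
    return {
        "name": f'{customer["first"]} {customer["last"]}',
        "order_count": len(orders),
        "lifetime_total": sum(order["total"] for order in orders),
    }
```

Because the combination of fields is unique to this UI, changing the table layout only touches code the UI team owns; the Business API stays stable for every other consumer.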
This two-layer approach to your API can hopefully help you avoid the debate around competing demands and instead get a solution that provides proper balance across both objectives.
Dynamic Data in REST
I had an interesting conversation with some colleagues around resource design that I thought would be helpful to share.
The starting point was a simple question:
Should price generation be an HTTP POST or an HTTP GET?
There’s solid reasoning for either of them. Let’s start with HTTP GET.
From a consumer’s perspective, a GET probably seems very intuitive. For how most people think about prices, it’s static information, so why wouldn’t it just be an attribute included on a GET of some other resource, like a Product, right?
For the purposes of this conversation, however, price is something that is computed at the time of the request. In other words, some supporting static information exists (like list price), but the actual “price” charged to the customer is dependent on other contextual parameters. Examples include the end price you pay at Amazon after taking into account shipping preferences and account status (e.g., Prime membership), or the price you pay when you buy a car. These prices are determined on the fly and may only be good for a limited amount of time, because the contextual information is subject to change. Hopefully, you can also see that “price” is actually a complicated piece of information.
Is HTTP POST beginning to sound better? Where this fits very well is that the “Price” is really a custom resource generated for that particular context. The customer, even though they may use the phrase “get me a price,” is really saying, “generate me a quote.” At this point, we’re creating a new resource.
But there’s one more thing. What if price calculation is expensive? If I make this a POST and generate this every time, won’t my costs go through the roof? Well, they don’t have to. There’s no reason that a subsequent POST with the same data can’t return a resource from cache in this scenario. In reality, you are probably updating the expiration date of the resource, so POST still makes sense. Furthermore, if you provide a unique ID for the calculated price resource, HTTP GET can be used to retrieve it again, it just shouldn’t update the expiration policy.
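The POST-with-cache behavior described above can be sketched like this. Everything here is a hypothetical illustration: the pricing formula, TTL, and resource names are invented. The key behaviors are that a repeated POST with the same context returns the cached quote while refreshing its expiration, and a GET by id retrieves the quote without touching the expiration.

```python
import hashlib
import json
import time

TTL_SECONDS = 300
_quote_id_by_context = {}  # context hash -> quote id
_quotes_by_id = {}         # quote id -> quote resource

def _context_key(context):
    """Stable hash of the pricing context, used as the cache key."""
    return hashlib.sha256(
        json.dumps(context, sort_keys=True).encode()).hexdigest()

def _calculate_price(context):
    """Stand-in for the expensive pricing calculation."""
    return context["list_price"] * (0.9 if context.get("member") else 1.0)

def post_price(context):
    """POST /prices — create (or refresh) a quote for this context."""
    key = _context_key(context)
    quote_id = _quote_id_by_context.get(key)
    if quote_id is None:
        quote_id = f"q-{len(_quotes_by_id) + 1}"
        _quote_id_by_context[key] = quote_id
        _quotes_by_id[quote_id] = {
            "id": quote_id, "amount": _calculate_price(context)}
    # POST always refreshes the expiration, whether cached or newly created.
    _quotes_by_id[quote_id]["expires_at"] = time.time() + TTL_SECONDS
    return _quotes_by_id[quote_id]

def get_price(quote_id):
    """GET /prices/[id] — retrieve without extending the expiration."""
    return _quotes_by_id[quote_id]
```

A production version would also evict expired quotes, but the sketch shows why POST still fits: even on a cache hit, the request mutates server state (the expiration), which a safe GET should never do.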
So, out of this, I came up with the following guiding principles on deciding whether calculated/derived data should be its own resource:
- Can the data stand on its own? That’s always a question for any resource.
- Does the calculation require contextual data from the consumer to perform the calculation, or are all the parameters already part of the potential parent resource?
- Is there value in keeping the calculated data around for some time to avoid re-calculation?
Hopefully these guiding principles will help you out. If you have other suggestions on factors that help this design decision, please feel free to share in comments or via your own blog post.
Microservices Architecture versus SOA
TechTarget has published another one of my “Ask the Expert” columns. In this one, I offer up my thoughts on the differences between a microservices architecture and SOA. In a nutshell, I think the microservices trend has moved things in the right direction, a direction that many of the SOA pundits were espousing back in the mid-2000s. Regardless of what we were saying, however, there’s no denying that the reality of SOA back then was still more of a service-enabled architecture than a service-oriented architecture. Give my thoughts a read, and feel free to post comments and questions in the discussion section over there.
Z Ride for Hope
A break from my normal tech-related posts.
In September, I will be participating for the second time in the Pedal the Cause bicycle ride in St. Louis. This great event raises money for cancer research at St. Louis Children’s Hospital and the Siteman Cancer Center in St. Louis.
I filled out my rider profile in late June and dedicated my ride to Zac Epplin, my brother-in-law’s nephew. In other words, he is part of my family. Little did I know that on July 1, we would receive news that would make this ride much more personal for me.
Zac was diagnosed in August 2013 after noticing a lump on his side during a soccer practice. Zac’s team went on to win the state championship that year, and despite having just gone through several rounds of chemotherapy, Zac took the opening kickoff of their championship match, even though he could barely stand, let alone walk or run on the soccer field. His teammates wanted him there.
Here’s a story the St. Louis Post-Dispatch ran on Zac during the tournament run.
The community rallied behind Zac, holding several fund raising events to help out with his care. My brother-in-law told me that these events weren’t so much about the money, they were about hope. Each time the community came together, it gave Zac hope. That hope gave Zac strength to keep fighting and to live his life beating cancer. Stuart Scott of ESPN said, “you beat cancer by how you live while you live.” That’s exactly what Zac did.
On July 1st, Zac’s battle with Ewing Sarcoma ended. He was only 19 years old. The Belleville News-Democrat published a great article about him with quotes from his soccer coach and teammates.
So, I am asking all of you to share this post and consider donating to me on my ride, donating to some other cancer charity, or doing anything you can to give cancer patients hope. If you share this, please use the hashtag #ZRideForHope. I’d like nothing more than to see this go viral, so that his family knows that even though Zac’s battle is over, the message of hope will not stop and we will all do what we can to find a cure.
You can donate to Pedal the Cause here. 100% of your donation goes to cancer research.
“I can do all things through Christ who strengthens me.” -Philippians 4:13, Zac’s “watch words” during his battle.
The Age of The Micro-UI
In this article from the Wall Street Journal, author Christopher Mims quotes mobile analytics company Flurry’s data that 86% of our time on mobile devices is spent in apps, and just 14% on the web. While Christopher’s article laments that this is the “death of the web,” I’d like to put a different spin on this. We are now entering the age of what I call the “micro-UI.”
The micro-UI represents a shift toward very targeted user experiences focused on a much smaller set of capabilities. A phrase I’ve used is that we are now bringing the work to the user, rather than bringing the user to the work. It used to be that you only had access to a screen when you were in the den of your house, at the desk with the built-in cabinet for your “tower” (why do they still sell those?) wired to your dialup modem, or at your desk at work. Clearly, that’s no longer the case with smartphones, tablets, appliances, your car, and many more things with the capability to dynamically interact with you. I just saw rumors today about the screen resolution of the new Apple Watch, and I think it has a higher resolution than my original Palm Pilot back in the late 90s. On top of that, there are plenty of additional devices that can indirectly interact through low-power Bluetooth or other tethering techniques.
In this new era, the focus will be on efficiency. Why do I use an app on my phone instead of going to the mobile web site? Because it’s more efficient. Why do notifications now allow primitive actions without having to launch the app? Because it’s more efficient. It wouldn’t surprise me to even see notifications without the app in the future.
For example, how many of you have come home to the post-it on your door saying “FedEx was unable to deliver your package because a signature is required.” Wouldn’t it be great to get a notification on your phone instead that asks for approval before the driver leaves with your package in tow? But do you really want to have to install a FedEx app that you probably will never open? Why can’t we embed a lightweight UI in the notification message itself?
In the enterprise, there are more hurdles to overcome, but that should be no surprise. First, the enterprise is still filled with silos. If it were up to me, I would ban the use of the term “application” for anything other than referring to a user interface. Unfortunately, we’ve spent 30+ years buying “applications,” building silos around them, and dealing with the challenges that creates. If you haven’t already, you need to put that aside and build everything from here on out with the expectation that it will participate in a highly connected, highly integrated world where trying to draw boundaries around systems is a fruitless exercise. This means service-based architectures and context-launchable UIs (i.e. bringing the user to the exact point in the user interface to accomplish the task at hand). Second, we need to find the right balance between corporate security and convenience. This era of connected devices relies on the open internet, but that doesn’t work very well with the closed intranet. Fortunately, I’m an optimist, so I’m confident that we’ll find a way through this. There are simply too many productivity gains possible for it not to happen.
I believe all of this is a good thing. I think this will lead to new and better user experiences, which is really what’s most important. Unlike Christopher’s article, I don’t see this as the death of the web, as without the web as the backing store for all of this information, none of this would be possible. It is a reduction in the use of a browser-based UI, and he’s correct that there are some good things about the web (e.g. linking) that need to be adapted (app linking and switching) to the mobile ecosystem. On the other hand, this increased connectivity presents opportunities for higher productivity. Apple (e.g. Continuity), Google, Microsoft, and others are all over this.
Project Governance Tips
My latest article for SearchSOA has just gone live. It gives a series of tips for making project governance more efficient. You can read it here.
Think Enterprise First
Think enterprise first. Such a simple statement, but yet it is so difficult to do. Admittedly, I am an enterprise architect, so it’s my job to think about the enterprise. In reality, it’s not just my job. If you are an employee, it’s your job, too.
Why am I bringing this up? I believe that having a simple, clear statement embodying the change we’re looking for is critical to making that change occur. So, when I sat back and thought about my experiences over the years and tried to think of a general, common problem that needs to change, what became clear was the very project-centric culture of most IT organizations.
Projects are necessary to ensure that delivery occurs, but let’s face it, have you ever been on a project where scope was the least flexible thing? From day one, schedule and resources are always less flexible than scope. As a result, we have an IT culture obsessed with on-time, on-budget delivery, one that will always sacrifice scope.
While schedule and resources will always wind up being the least flexible thing by the end of the project, it shouldn’t begin that way. The change that must occur is to start out thinking “enterprise first.” The simplest example I can think of this is in service development. What’s the behavior in your organization? Do teams build “their” application and only create services where a clear opportunity for reuse exists (or when a governance team forces them to), or do your teams define their projects as services first (regardless of any known opportunities for reuse) and then add in whatever application-specific stuff is necessary. The latter is an enterprise-first thinking, the former is an example of project-first thinking.
The argument you may have is, “isn’t this going to result in bloated, over-engineered solutions?” It shouldn’t. Making something a service doesn’t require surveying the enterprise for requirements. It means we place the proper ownership and management around that service so that it is positioned for change. We can only design to the knowledge we have based on past experience and known requirements. We can’t predict what changes will come; we can only make sure we are properly prepared for that change. Project-first thinking doesn’t do that; enterprise-first thinking does. Think enterprise first.
New Compilation Book and Possible EA Book
While I have not yet embarked on writing another book, I have been published in a second book. The publisher of my book on SOA Governance, Packt Publishing, has released their first compendium title, “Do more with SOA Integration: Best of Packt.” It features content from several of their SOA books and authors, including some from my book on SOA Governance. If you’re looking for a book that covers a broader perspective on SOA, with some great content on SOA Governance as a bonus, check it out.
On a related note, I’ve been toying with the idea of authoring another book, this time on Enterprise Architecture. There are certainly EA books on the market, so I’m interested in whether all of you think there are some gaps in the books available. If I did embark on this project, my goal would be similar to my goal on my SOA Governance book: keep it easily consumable, yet practical, pragmatic, and valuable. That’s part of the reason that I chose the management fable style for SOA Governance, as a story is easier to read than a reference manual. If I can find a suitable story around EA, I may choose the same approach. Please send me your thoughts either by commenting on this post, or via email or LinkedIn message. Thanks for your input.
Clouds, Services, and the Path of Least Resistance
I saw a tweet today, and while I don’t remember it exactly, it went something like this: “You must be successful with SOA to be successful with the cloud.” My first thought was to write up a blog about the differences between infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS) and how they each relate to SOA until I realized that I wrote exactly that article a while ago as part of my “Ask the Expert” column on SearchSOA.com. I encourage you to read that article, but I quickly thought of another angle on this that I wanted to present here.
What’s the first vendor that comes to mind when you hear the words “cloud computing”? I’m sure someone’s done a survey, but since I don’t work for a research and analysis firm, I can only give you my opinion. For me, it’s Amazon. For the most part, Amazon is an infrastructure as a service provider. So does your success in using Amazon for IaaS have anything to do with your success with SOA? Probably not, however, Amazon’s success at being an IaaS provider has everything to do with SOA.
I’ve blogged previously about the relationship between ITIL/ITSM and SOA, but they still come from very different backgrounds, ITIL/ITSM being from an IT Operations point of view, and SOA being from an application development point of view. Ask an ITIL practitioner about services and you’re likely to hear “service desk” and “tickets” but not so likely to hear “API” or “interface” (although the DevOps movement is certainly changing this). Ask a developer about services and you’re likely to hear “API,” “interface,” or “REST” and probably very unlikely to hear “service desk” or “tickets”. So, why then does Amazon’s IaaS offering, something that clearly aligns better with IT operations, have everything to do with SOA?
To use Amazon’s services, you don’t call the service desk and get a ticket filed. Instead, you invoke a service via an API. That’s SOA thinking. This was brought to light in the infamous rant by Steve Yegge. While there’s a lot in that rant, one nugget of information he shared about his time at Amazon was that Jeff Bezos issued a mandate declaring that all teams will henceforth expose their data and functionality through service interfaces. Sometimes it takes a mandate to make this type of thinking happen, but it’s hard to argue with the results. While some people will still say there’s a long way to go in supporting “enterprise” customers, how can anyone not call what they’ve done a success?
So, getting back to your organization and your success, if there’s one message I hope you take away from this, it is to remove the barriers. There are reasons that service desks and ticketing systems exist, but the number one factor has to be serving your customers. If those systems make things inefficient for your customers, they need to be fixed. In my book on SOA Governance, I stated that the best way to be successful is to make the desired path the path of least resistance. There is very little resistance to using the Amazon APIs. Can the same be said of your own services? Sometimes we create barriers by the actions we fail to take. By not exposing functionality as a service because your application could just do it all internally, in-process, we create a barrier. Then, when someone else needs it, the path of least resistance winds up being to replicate data, write their own implementation, or anything other than what we’d really like to see. Do you need to be successful with SOA to be successful with the cloud? Not necessarily, but if your organization embraces services-thinking, I think you’ll be positioned for greater success than without it.
Deciding “Yes” on EA
On the Forrester Enterprise Architecture Community site, Randy Heffner asked the question, “What should EA do for business agility?” In my two responses in the discussion, I emphasized that EA is all about decision support. Yes, you may create a future state roadmap, but what the organization winds up with is completely dependent on what projects the organization decides to execute, and then on how those efforts are executed. EA influences those decisions, but we’re not the ones making them.
So why is this post titled, “Deciding ‘Yes’ on EA”? In that same discussion, William El Kaim added the following:
Let me be real provocative, and state: EA is dead … It has been killed by architect themselves leaving in their ivory tower and their beautiful EA drawing tool that nobody uses and that contains outdated data when they are published.
You can read the rest of what William had to say on the Forrester site, but I don’t think it’s anything those of us practicing EA haven’t heard before. There is, however, a very important point in William’s statement. If nobody but EA uses what EA produces, that’s a big problem. Put simply, if we provide poor decision support, the organization will ultimately decide against EA.
Like most things in this world, there are far more ways to fail than there are to succeed. So what are some best practices for providing excellent decision support so that the organization will decide “yes” on EA?
- Figure out who makes the decisions. Sounds simple, right? Not quite. I’d love to see a Forrester or Gartner survey on this one, but I’m willing to bet that clarity and consistency in the decision-making process are not a strength for most organizations. Regardless of the state of your decision-making process, if you don’t have access to the people making the decisions, you have little to no chance of influencing them.
- Figure out how they make their decisions. Note that I didn’t add, “and make them better.” Remember that they’re the ones making the decisions, not you. Your role is to give them added information so that they can make the best decisions possible. In some cases, the whole reason for having the discussion may be so you can learn and incorporate that decision maker’s information into your guidance for other decision makers.
- Make your information relevant to them. Don’t give them a bunch of models that are only meaningful to another EA. In the case of upward decisions, this usually means that the architecture implications have to have financial ties, or clear alignment with business objectives. I’ve had success using capabilities in these discussions, and I think the current research would back that up. You must tailor your information to their needs. If they don’t understand it, it’s your problem, not theirs. They’re making the decision, not you.
- Emphasize added insight, not oversight. This is very important for interactions with project teams. All too often, EA is positioned as the enforcer. Come before the review board and we shall assess your worthiness. I’m sorry, but a guy who spends 80% of his time writing code each day should be far more aware of the latest frameworks than the average EA. The role of the EA is to bring enterprise and/or domain perspective to the effort. As soon as the project gets established, the project blinders go up, and it’s the job of EA to remove those blinders and add enterprise insight into the effort.
- Don’t rely solely on artifacts, and where you must, make sure they are easily digestible. While many factors in an organization push us toward email-based exchanges of documents, try to have a face-to-face conversation about the guidance whenever possible. At a minimum, by walking someone through it, you at least know they’re actually reading some part of it. When you create the artifacts, get to the point.
- Be cautious about consulting models for EA. A consulting model for EA is great, right? When someone needs more information to make a decision, what do they do? They hire a consultant. So EA should be internal consultants, right? Well, not really. That may work in the short term, but it is an “I’m here when you need me” model, when you really want to always be a part of the process. Don’t turn down the consulting approach, as it can get your foot in the door, but make sure you turn it into something more systemic.
What other best practices (or worst practices) do you recommend in firmly establishing EA as a valuable resource in the decision making processes in the organization?
Org Charts and Architecture Management
Every organization has one. For some, it can lead directly to a path of enlightenment. Others may use its rigid structure to create an impenetrable fortress of strength. For the unfortunate, it becomes an inescapable labyrinth of hopelessness. Yes, it’s the org chart.
Okay, let’s be fair, it’s actually not the chart that’s the real challenge, it’s the organizational structure. Any organizational structure creates boundaries, and those boundaries create opportunity for divergence, whether in strategy, opinions, processes, or just about anything else. The challenge is figuring out how to structure the organization to diverge where you need divergence, but to be consistent where you want consistency. It’s no surprise that organizational structure is considered by some to be part of the enterprise architecture. Just as we try to organize our technology portfolio in the best manner to achieve the desired business goals, we need to organize our human resources as well. More importantly, just as management may choose to reorganize its human resources as the business goals and operating context change, the way we organize our technology portfolio needs to change as well.
The organizational structure poses a particular challenge for the practice of architecture, particularly when it comes to solution architecture. I’ve seen at least three different models for organizing the architecture practice. First, is the centralized model where all architects report up to a single Chief Architect or Director of Architecture. There may be some middle management in there, but there is always a solid line leading to the top. As you might guess, this usually leads to a high degree of consistency, but can have challenges in scaling to meet demand, retaining business domain knowledge, and of course, ensuring that the centralized resources actually get used by projects and avoiding “rogue architecture.” The overuse of the term “architect” in job titles these days makes this even more problematic, as senior or lead developers may now have the title of Java/.NET Architect. It may also create delays in decision making, because the solution delivery has one reporting structure, while the solution architecture has a different reporting structure. If there is a disagreement, these two management structures must come together to resolve the difference, or it must be escalated up the chain.
A second model is a separation of the Enterprise Architecture function from the Solution Architecture function. Enterprise Architects have a solid line to the Chief Architect/Director of Architecture, while Solution Architects report through a different structure, perhaps having a dotted line back into the EA organization in a matrixed structure. Consistency becomes more of a challenge, because Solution Architects will likely receive more direction from their management structure than from the EA team. It also creates challenges for the EA function, because now the EA team is at risk of being completely disconnected from solution delivery. Even in the centralized model, the bulk of the input into the solution comes from the solution management side of things. Now, that push will be even stronger, with architectural management struggling to maintain a voice. That voice is sometimes mandated through an architectural review board, but if that’s the only time that architectural management has the solution architect’s ear, the effort is likely to struggle, with EA being seen as an ivory tower rather than a necessary contributor to solution success. I’ve seen this model more than either of the others, however.
The third model would be the completely decentralized model. In this case, there is still a practice of architecture in the organization, but it is completely distributed. Solution architects, and perhaps domain architects, are scattered throughout the organization. A virtual team may exist, and there may even be a Chief Architect/Director of Architecture, but the role may largely be one of information sharing and coordination, and not really about architecture management. What’s good is that there’s not much risk of being perceived as an ivory tower, but there is significant risk of poor architectural alignment. If the boundaries of diversification are based upon an assumption that business units do not share customers, what happens if the situation changes and they do? Even ignoring the potential for this situation, decisions on centralization versus a matrixed approach are likely made locally within each business unit.
So what model is right? First, a completely decentralized approach is really only suitable for companies with a completely diversified operating model (see this book if you don’t know what that means). So, it really comes down to centralized versus matrixed, and that will either be applied at the business unit level for a diversified company, or at the enterprise level for other operating models. Both centralized and matrixed can work, but there is a catch. I’ve used the term “architecture management” in this blog. As I wrote this, I kept thinking about parallels to project management and a PMO. I’ve seen centralized PMOs where all project managers report into a single organizational group, and I’ve seen decentralized PMOs where individual project managers report into lines of business while a core group of resources that look at things from a portfolio perspective report into a Director of Program Management. The catch is consistency. If each project manager does things their own way, it is going to be extremely difficult for anyone to manage things at a portfolio level. Without establishing some minimum level of consistency that produces the necessary metrics and information at the right times for portfolio-level management to happen, you’re sunk. Fortunately, for project management, the need for this is a well-accepted practice. No one wants to find out that a project is $100M over budget only after the money has been spent. If you have that consistency, you can make either model work.
In the case of architecture management, things are still maturing. The problem is that we are at risk of focusing solely on consistency, without properly understanding the outcome that consistency is supposed to create. While finding out that you’re $100M over budget after it’s been spent is well understood as a bad thing, is finding out that someone has built a component that already existed elsewhere in the company a bad thing? Not necessarily. Those decisions need to be made in the context of both the project’s goals and the enterprise’s goals to make that distinction. Pursuing enterprise consistency when there are only project goals involved in decision making puts you at risk of being perceived as an ivory tower. At the same time, it may be necessary to pursue some base level of consistency prior to establishing that enterprise context, otherwise the context may be perceived as irrelevant. This can happen when the practice of solution architecture really isn’t being practiced at all.
My final advice is this: a centralized path will definitely lead to the most consistency, but you have to be rock solid in your justification of the need for an enterprise viewpoint, because that centralized model creates management overhead, resource availability risk, and the potential for conflicting direction. A decentralized model is at less risk of having resource availability issues, but makes consistency more difficult and is more prone to sacrificing enterprise direction for project delivery. Ultimately, it will come down to whether your organization has been successful with matrix management approaches or not. If it has, you should be able to make a decentralized approach work. If you’ve never been successful with matrix management, then a centralized approach will likely be necessary. Finally, if you go with a decentralized approach but have very inconsistent architecture practices, I strongly recommend that you establish an architecture mentoring/facilitation practice, in which members of the centralized EA team facilitate one- or two-day architectural workshops. This can ensure that things are done in a consistent manner and that the voice of the enterprise is brought into the solution architecture process, while mitigating the risks associated with a completely centralized model.
The End of Apps? Not.
Amazon released their HTML5 Kindle reader this week, and I couldn’t keep myself from commenting on all of the talk of people saying/hoping/proclaiming that this was the beginning of the end for apps and Apple’s AppStore.
Hogwash.
I think it’s great that Amazon has released the HTML5 version of the Kindle reader, complete with integration into the Amazon Kindle store. I don’t see Amazon pulling their Kindle app from the app store though, and I think there would probably be a big revolt if they did.
It seems that a lot of the pundits out there think that all of the developers out there are just waiting to jump on a single technology that will support any device, anywhere. For developers that aren’t making any money on their products, that may be the case. I’m willing to bet that the lack of profits has less to do with having multiple code bases to maintain and more to do with the app just not being popular enough.
Sure, any development manager should be concerned about development costs, but developers sure don’t have a good track record of sticking with a single technology. You may get rid of your Objective-C code, replaced by HTML5 and a Java backend, and then all of a sudden the Java backend becomes a Ruby backend, and then a JavaScript/node.js backend, etc. You get my point. On top of that, most developers I know who are really passionate about developing enjoy learning the new technologies, so in reality, having multiple platforms to support may actually help from a job satisfaction standpoint.
But all of this isn’t even my main point. To me, the thing that drives it all is customer experience. When the iPhone first came out, everything was Mobile Web. Apple then backtracked and came out with the App Store. I don’t know about you, but I can’t think of a single app with a mobile web or iPhone-optimized web experience that was on par with the native apps that were created. Granted, part of that was due to Edge connectivity, but the real driver was my experience as a customer. While HTML5 is very powerful, I still don’t believe that it is going to be able to provide the same level of experience that a native application can. Yes, it can work offline and utilize local storage to make it as app-like as possible, but it’s still rooted in a connected, browser-server paradigm.
There will be classes of applications for which HTML5 will be just fine. I’m willing to bet many of them will be replacements for applications that are already free in the app stores. That’s an optimization for the development team, since revenues are clearly coming from another source, whether it be advertising or eCommerce. For paid applications, though, customers are paying for the experience, and if the experience isn’t as optimized as possible, there are way too many alternatives out there.
All we need to do is look back at history to know that Apps are here to stay. Java did not result in companies dropping proprietary development languages for Windows, Mac, or Linux platforms. Yes, some cross-platform products did arrive and continue to thrive, but there’s still a thriving marketplace for native applications on the major desktop platforms. Will we see many mobile applications solely available via HTML5? Absolutely, but the native app store for iOS and Android will continue to thrive.