You Say Vanilla, I Say Value

Written by Paul Wiedel
on May 27, 2014

Vanilla software development

Vanilla software development, for the purposes of this post, means making the most common, mainstream technology choice available.

If software architecture were the savanna, vanilla would be in the middle of the herd of thundering wildebeest. Vanilla is so common that developers are not excited about it. Vanilla isn’t the absolute newest version of anything. Vanilla doesn’t have the new features that everyone is talking about. Vanilla doesn’t have the word beta in it. Vanilla is plain. Vanilla is safe. Vanilla is capable. Vanilla is predictable. Vanilla is what we are used to. Vanilla is what we are tired of. Vanilla is boring. Vanilla is vetted. Vanilla is ubiquitous. Vanilla is understood.

What vanilla isn’t is exotic. Exotic technology is new. Exotic is what people are talking about doing. Exotic is risky. Exotic is exciting. Exotic is dangerous. Exotic is highly customized. Exotic is specialized. Exotic is rolling your own software your own way. Exotic is coveted. Exotic is the technology that makes people say, “If only we could figure out a way to get this to work with our system.” Exotic has the promise to be a game changer. Exotic will make working fun… we think. Exotic dazzles us with its capabilities and promise.


Evaluating technologies to adopt

However, as the title may suggest, the purpose of this post is to caution against hastily adopting exotic technologies and practices. It seems like every day something exciting comes along that promises to make it easier for people to create amazing software. Some of these technologies yield fantastic results, change the rules of the game, and truly are bound to become part of the standard toolset as their adoption and refinement continue.

As software professionals, the unifying essence of our raison d’être is improving the end product and the means of our work. We achieve this through improving how software is implemented, improving how software is defined, improving the process around building the software, and also improving the technologies that we use to build the software. The intended result of this effort should be delivering software that adds value.

An aspect of exciting new technologies and ideas that is often overlooked, however, is their total cost of ownership. The total cost of ownership for a software technology is a measure of how much time and how many resources it takes for an organization to adopt, and continue to use, that technology.

Some of this cost may be realized in licensing and support fees.

The greatest cost, though, is the opportunity cost incurred while people become competent in a technology. It can be measured as the sum of two parts: the training expenditure, which includes the cost of employing people until they are competent enough to contribute with the technology, and the opportunity cost of the work those same people could have accomplished in the time it takes them to learn the new technology.

Every new technology decision has this Transitional Technical Cost, or TTC. For every individual who is added to a role that is expected to be competent with a technology and for every technology that is added to an individual’s expected area of competency, the TTC must be paid.
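To make the arithmetic concrete, here is a minimal sketch of how one might estimate the TTC for a single person and a whole team. Everything in it, from the field names to the dollar figures, is a hypothetical illustration rather than a formula from this post; it simply adds the training expenditure to the opportunity cost of reduced output during ramp-up.

```java
/**
 * A back-of-the-envelope model of the Transitional Technical Cost (TTC).
 * All names and numbers are hypothetical illustrations, not figures from
 * any real project.
 */
public class TtcEstimate {

    /** TTC for one person adopting one unfamiliar technology. */
    static double ttcPerPerson(double trainingCost,
                               double weeklyLoadedCost,
                               double rampUpWeeks,
                               double productivityWhileLearning) {
        // Opportunity cost: the value of the work not delivered while the
        // person is only partially productive on the new technology.
        double opportunityCost =
                weeklyLoadedCost * rampUpWeeks * (1.0 - productivityWhileLearning);
        return trainingCost + opportunityCost;
    }

    public static void main(String[] args) {
        int teamSize = 8;                 // hypothetical team
        double perPerson = ttcPerPerson(
                2_000,                    // formal training per person
                4_000,                    // fully loaded weekly cost per person
                6,                        // weeks to become productive
                0.5);                     // roughly half speed while learning
        System.out.printf("TTC per person: $%,.0f%n", perPerson);
        System.out.printf("TTC for the team: $%,.0f%n", perPerson * teamSize);
    }
}
```

Even a rough model like this makes it obvious how quickly the TTC multiplies as the team grows or as more unfamiliar technologies are stacked onto the same people.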

When a technology is different from what the people within an organization are accustomed to, it will take those people more time to adjust to it. If a technology is a complete paradigm shift from the established competencies of a group, one should not expect the group to be readily productive with it, or even to like it.

An extreme example of this type of paradigm shift might be telling a group of developers who are competent in COBOL programming that they are going to start writing their code in Java. I once spoke with a manager in a large organization that tried transitioning COBOL developers to Java. He said that only about one in ten successfully made the transition.

An organization may avoid directly paying the TTC by adding individuals who are already competent with the technology. One way to bridge the gap between an organization’s current technical capability and a desired capability is to bring on contractors who already know that technology.

One way to control the TTC is to take the availability of competent resources, as well as the capabilities of the technologies themselves, into account when selecting technologies. A big advantage of going the vanilla route is that the TTC is either nonexistent or minimized by the ubiquity of the technology and its mindshare among the available resources. Various synergies are also realized as vanilla technologies are more heavily used and vetted: there are vast knowledge bases available to guide people in their use of the technology, and the well-worn path tends to attract better tools and better understanding.

A vanilla base makes for a wonderful dessert. Usually a scoop of vanilla ice cream will hit the spot. Sometimes we want more.

When is it appropriate to use technologies that aren’t the ones everyone else is using? For every organization and every situation the specific answer will be different. My opinion is as follows: only consider an exotic technology when the conservatively estimated, realizable value of that choice provides a competitive advantage that exceeds the difference in total cost over the vanilla choice. In dollars, the choice could be made by weighing the added revenue or reduced expenses that the delivered software will yield against the cost of technical transition for the people expected to build and care for that software. If it looks like adopting the technology will do no better than break even, why take on extra risk instead of sticking with something safe? Unless there is a strategic reason to make that choice anyway, my advice is not to make it.
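Expressed as a sketch, the comparison above might look something like the following. The function name and the inputs are hypothetical placeholders; the point is only that the conservative value estimate has to clearly beat the extra cost of the exotic choice, not merely match it.

```java
/**
 * A sketch of the break-even comparison described above. The inputs are
 * hypothetical placeholders, not figures from the post.
 */
public class ExoticVsVanilla {

    static boolean exoticIsWorthIt(double estimatedAddedValue, // added revenue or reduced expense
                                   double exoticTotalCost,     // TTC plus licensing/support, exotic
                                   double vanillaTotalCost) {  // the same costs, vanilla
        double extraCost = exoticTotalCost - vanillaTotalCost;
        // Favor exotic only when the conservative value estimate exceeds the
        // extra cost; at or near break-even, stay vanilla.
        return estimatedAddedValue > extraCost;
    }

    public static void main(String[] args) {
        // Hypothetical numbers: the exotic option costs $30,000 more but is
        // conservatively expected to add $50,000 in value.
        boolean worthIt = exoticIsWorthIt(50_000, 120_000, 90_000);
        System.out.println(worthIt ? "Consider the exotic choice" : "Stay vanilla");
    }
}
```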

This is not to say that there is no place for non-vanilla technologies. The least difficult choices to make are the technologies that add capability while keeping the cognitive delta from the vanilla path to a minimum. An example of this is using Gradle to replace Maven as a build tool: Gradle keeps the process and project structure familiar while removing pain points and adding capability.

So how do you know what the right vanilla choice is? What is even on the vanilla path?

At SDG, our open source practice regularly meets to evaluate technologies that we believe reflect the current state of open source software development in the markets that we service. In our “openDNA” document, we rate various technologies on a spectrum of “Investigate,” “Prototype,” “Use,” “Maintain,” and “Retire.” The “Use” category aligns well with the concept of vanilla.

On either side of the “Use” category you’ll find technologies that aren’t quite vanilla. On the “Investigate” and “Prototype” side are technologies that we feel show promise, but aren’t quite ready to bet the farm on. On the “Maintain” and “Retire” side are the technologies whose time we believe is passing.

“Nothing gold can stay.” – Robert Frost

As one can infer from our openDNA document, the technologies we place in the “Maintain” and “Retire” categories were once vanilla. An example of this might be a web application implemented in Apache Struts 1. At one time, a Struts application was as vanilla as they came. The application may still be capable enough for its purpose, but the number of people who are competent with the technology is declining. Just as the latest and greatest thing is exotic, the trailing tail is what was once vanilla but is becoming exotic again.

Knowing that even the most appropriate choice for now or the near future may one day become a liability, making choices that remain changeable down the road may truly be the safest approach of all.