There are a number of versioning approaches that we regularly come across in the IT world, including:
- SemVer (http://semver.org)
  - Based on using the versioning information in a build to inform the consumer of the nature of the changes
- CalVer (http://calver.org)
  - Based more on communicating release schedules and supportability
- And even SentimentalVer (http://sentimentalversioning.org)
  - Probably the default position of many developers, but again based on using versioning to indicate something about how the new version compares to previous ones
These strategies were born out of software development projects where we control every aspect of the development. They have been extended to work within ecosystems such as Node.js to support automated build and dependency management, with some success. In fact, versioning based on source code changes has many advantages and should be actively encouraged, as:
- It can be used by automated scripts to check the compatibility of software at release time
- It helps in support when you need to know when a bug was reported and what code was actually live at the time
- It can help you with bug fix releases
- You can create version dependency matrices showing what versions of the software are currently deployed and how they relate to each other.
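As a minimal sketch of the automated-compatibility point above, assuming versions follow SemVer's MAJOR.MINOR.PATCH convention (the function names here are illustrative, not from any particular tool):

```python
# Sketch: how an automated script might use semver to judge compatibility.
# Assumes versions follow semver's MAJOR.MINOR.PATCH convention.

def parse(version: str) -> tuple[int, int, int]:
    """Split 'MAJOR.MINOR.PATCH' into a comparable tuple of integers."""
    major, minor, patch = (int(p) for p in version.split("."))
    return major, minor, patch

def is_compatible(current: str, candidate: str) -> bool:
    """A candidate release is safe to adopt automatically if it keeps
    the same major version and is not older than what we run today."""
    cur, cand = parse(current), parse(candidate)
    return cand[0] == cur[0] and cand >= cur

# A build pipeline could gate automatic upgrades with checks like these:
print(is_compatible("1.4.2", "1.5.0"))  # True: minor bump, safe to take
print(is_compatible("1.4.2", "2.0.0"))  # False: major bump, needs review
```

This is exactly the kind of check that dependency managers automate, and it is also where the 'selfish' assumption discussed below creeps in: the consumer's tooling is expected to understand and react to the provider's version numbers.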
There are often attempts to utilise these strategies for internal services (e.g. SOAP services) and also for external APIs. However, none of the big API publishers use any of these versioning strategies, and even software runtimes such as Java and .NET actually offer you a completely different strategy.
The key reason for this discrepancy is that the code versioning strategies listed above can be considered 'selfish' versioning: they assume that consumers will be able to, and want to, react to the API changes that we as a software provider make, and furthermore that they are willing to take on the costs associated with those changes. Whilst this may work with some success in small development teams or controlled software ecosystems, it creates significant challenges when applied to enterprise services or external APIs.
One of the biggest challenges of designing enterprise integration strategies that utilise many different technologies and ecosystems is the cost and management of change. Even internally, change is already a problem for complex enterprise software utilising these versioning strategies. This is evidenced by the many and various 'dependency build management' solutions out there that attempt to keep your build up to date with the latest releases from the various development teams.
In an enterprise landscape where we have to align Maven, CocoaPods, npm, RPM and all the other package management suites, these challenges surface in any number of ways:
- COTS packages running old versions of libraries or runtimes
  - This often causes security issues, as we cannot patch to the latest release of software
- Change impact is large and unpredictable
  - Even minor changes to services and APIs cause consumers to be re-compiled, re-tested and re-deployed
- Release management and dependency management become unwieldy
  - Often we have more than one version of a service running, as a result of a project not completing properly or of transitional architectures
- Test servers become a bottleneck as automated build/test/deploy services become bogged down with needlessly re-testing consumers
- Deployment becomes a major part of the IT cost
- We get bogged down in 'dependency hell'
Traditional approaches to versioning software APIs could be summed up as 'inform the user of change and let them deal with the impact'.
The big API vendors all seem to have recognised that the traditional approaches to versioning do not cut it when you have no control over the landscape or the technologies utilised by the consumer. This is similar to the situation we face in enterprise landscapes, where we typically do not have control over the consumers (as they are a mix of COTS packages of varying ages), but we also have little control over the service providers (same deal!).
An alternative to forcing the service consumer to deal with the impact of change is to make the service provider commit to the API. This does not mean that the API is 'supported' for 9 months from publication and then you have to upgrade; it means that the API we are using is always the latest release, with the latest patches and functionality available should we wish to use them, and that it will continue to support our use case for a number of years (for example, 5+). This allows us to achieve benefits from a software solution for a significant time before having to change it, and ideally we never have to change it. This rule can be applied to 3rd party APIs (Facebook, Twitter, GitHub etc. already do this!), we can also work with COTS providers (or hide their implementations using ESBs and adapters), and we can enforce it for internal development teams.
To make this approach work we have to come up with a new versioning strategy, one similar to the strategies of the big API providers; let's call it the 'minVer' strategy. It is based on the presumption not that we will change and let the consumer know, but that we will do our very best never to create a new version, and therefore never ask our consumers to accept change.
- Only one version of any API will be offered at any one time, and that version will be version X: no major or minor versions and no build numbers. X aligns to the current technical operation model, which is defined as:
  - The canonical messaging model, the set of data structures that can be passed across the API set for a given enterprise
  - The calling patterns across APIs; we can create chains of API calls within the same technical operation model in the knowledge that the individual APIs will work happily together. This includes, but is not limited to, supported calling protocols, naming conventions, return values, and error handling.
  - The only time we can break this rule is when we are transitioning to technical operation model X+1 through a deprecation model (X becomes deprecated for a certain amount of time and should then be removed).
- All changes to APIs within a technical operation model will be backward compatible, and ideally forward compatible.
  - Forward compatibility becomes very useful in an enterprise setting where deployment ordering between development teams can cause delays.
  - This makes a significant positive impact on complex roll-outs and transitional architectures.
- We accept that no breaking changes are allowed. To facilitate this, we accept that no business rules will be validated at the API level.
  - This means there is a separation between business-mandatory attributes and technically mandatory attributes, allowing you to transition to new business rules in a technically controlled manner.
  - API validation should be at a similar level to a function call: ensure that the basic data types and values required from a technical viewpoint are present and correct.
- We accept that from time to time we may have to offer a non-optimal upgrade to an API in order to maintain backward compatibility.
  - This is an acceptance that backward compatibility and minimising the impact of change are better than producing the cleanest API implementation.
  - At the point where these imperfections become unwieldy for a single use case, or a subset of use cases, we should consider a deprecation pattern.
  - At the point where these imperfections become unwieldy across the API as a whole, we should consider looking at technical operation model X+1.
- We allow for deprecation of entities in the canonical model or in API patterns, but only in extenuating circumstances.
- We accept that we will design our APIs to be good enough to support our use cases for the next x years (where x is in the region of 5+).
  - Note: initial release cycles may be shorter than this until we first go live.
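To make the compatibility and validation rules above concrete, here is a minimal sketch; the message shape, field names and functions are invented for illustration, not part of any real API:

```python
# Sketch of minVer-style API behaviour (all names are illustrative):
# - the provider adds new attributes as optional (backward compatible)
# - the consumer ignores attributes it does not recognise (forward compatible)
# - validation checks only technically mandatory attributes, never business rules

TECHNICALLY_MANDATORY = {"customer_id": str, "amount": int}  # needed to process at all
BUSINESS_MANDATORY = {"cost_centre"}  # enforced downstream, not at the API boundary

def validate(message: dict) -> list[str]:
    """Function-call-level validation: required fields present with the right types."""
    errors = []
    for field, ftype in TECHNICALLY_MANDATORY.items():
        if field not in message:
            errors.append(f"missing {field}")
        elif not isinstance(message[field], ftype):
            errors.append(f"{field} has wrong type")
    return errors

def consume(message: dict) -> dict:
    """Consumers read only the fields they understand and ignore the rest,
    so a provider can add optional fields without breaking them."""
    known = set(TECHNICALLY_MANDATORY) | BUSINESS_MANDATORY
    return {k: v for k, v in message.items() if k in known}

# A newer provider adds 'loyalty_tier'; an older consumer still works:
new_style = {"customer_id": "C42", "amount": 100, "loyalty_tier": "gold"}
print(validate(new_style))  # [] - technically valid even without cost_centre
print(consume(new_style))   # loyalty_tier is silently ignored
```

Note how the missing business-mandatory `cost_centre` passes API validation: the business rule can be introduced, enforced or relaxed downstream without ever creating a new API version.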
By implementing this model over a more traditional versioning approach we can achieve many benefits including:
- Stability of the enterprise through change
  - Change will occur, but the impact of any change is always minimised
  - There will be no unnecessary deployments due to chasing the latest version of a service
- Cost of change and time to change will be significantly lowered
- Easier development
  - Services will be stable, predictable and long lived
  - Services will be consistent across the enterprise
  - Development times will be lowered
- Stability for external consumers: as our APIs change rarely, consumers only need to change if they want to utilise new functionality
- Escape from 'dependency hell'
This type of strategy is a key element of creating flexible, change-supporting enterprise architectures. It does not come for free, but it offers a number of advantages by shifting the cost of change from the consumers (of which there may be many) to the producer of the software component (of which there is only one).