One technology I tend to use a fair amount is OpenAPI (1). Several years ago I originally adopted RAML as a more powerful means of capturing API documentation, but shortly thereafter, while building an internal API gateway solution, I pivoted to OpenAPI as a configuration mechanism to allow easier integration with alternative systems (the overall solution was largely glue between modules of fairly standard functionality, with one eye toward exploring off-the-shelf alternatives in the longer term while minimizing immediate risks to legacy parity). OpenAPI has certainly risen to a dominant position for specifying REST APIs, with a sizeable ecosystem, support from a variety of large players, and the trustworthiness that comes from being stewarded by the OpenAPI Initiative arm of the Linux Foundation.

In the course of that project a fair amount of OpenAPI tooling was created which I now wish had been open sourced and which I may look to recreate at some point. The core logic was a custom OpenAPI generator which produced nginx configuration files (along with corresponding wiring to consume those files in a tuned nginx container), but there were also a number of filters which enabled preprocessing of source files into assorted outputs. The presumed model (which proved to satisfy the needs of many teams) was that any non-trivial API may be composed of multiple backends. There were therefore different use cases around the desired specification for individual services versus those for more formal APIs, along with more granular variations within those categories depending on which subset of information should be exposed. The filters supported all of the identified needs such that one or more source files could be passed through UNIX pipelines to produce the desired outputs. This involved a combination of standard YAML/JSON tools where those were sufficient and some Java tools that used the OpenAPI parser to ensure proper semantics where necessary (all packaged in a container to facilitate shell-style invocation without concern for dependencies).
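That filter tooling was never open sourced, but the general shape of such a filter is easy to sketch. The example below is hypothetical rather than the original code: it assumes specs are handled as JSON and that a vendor extension (here `x-internal: true`) marks operations that should be omitted from public outputs, so one source file can feed both internal and published variants.

```python
"""Hypothetical OpenAPI filter: drop operations marked x-internal: true.

In the pipeline model this would read a JSON document on stdin and write
the filtered document to stdout; here it runs against an inline demo spec.
"""
import json

HTTP_METHODS = {"get", "put", "post", "delete", "options", "head", "patch", "trace"}

def strip_internal(spec: dict) -> dict:
    """Remove operations flagged x-internal, and any path left empty."""
    paths = spec.get("paths", {})
    for path, item in list(paths.items()):
        for method in list(item):
            if method in HTTP_METHODS and item[method].get("x-internal"):
                del item[method]
        if not any(m in item for m in HTTP_METHODS):
            del paths[path]
    return spec

demo = {
    "openapi": "3.0.3",
    "info": {"title": "Orders", "version": "1.0.0"},
    "paths": {
        "/orders": {
            "get": {"responses": {"200": {"description": "OK"}}},
            "post": {"x-internal": True,
                     "responses": {"201": {"description": "Created"}}},
        },
        "/debug": {
            "get": {"x-internal": True,
                    "responses": {"200": {"description": "OK"}}},
        },
    },
}
filtered = strip_internal(demo)
print(json.dumps(sorted(filtered["paths"])))  # ["/orders"]
```

With stdin/stdout wiring instead of the inline demo, such a filter composes with `jq` and friends in an ordinary shell pipeline.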

I foresee a need for similar functionality in upcoming projects, but since then I have largely been involved with simpler, more local (and typically monolithic) needs. The values I primarily look to get out of OpenAPI at the moment are documentation and validation.

Potential Values

Documentation

Likely the most obvious benefit of OpenAPI is that of providing API documentation. There are plenty of tools which support displaying and exercising APIs based on their specs, and plenty of support within the spec format for providing further information and examples. OpenAPI therefore provides a natural starting point for documenting an API and a natural structure for conveying all appropriate usage information.

Consistency

A subtle benefit which ties in with both documentation and validation is that OpenAPI can drive API consistency across an organization. Beyond surfacing inconsistencies during review, deliberate use of references to common components provides a very natural way to guarantee that some behaviors are provided uniformly across endpoints and systems. A combination of established practices and a centralized repository of shared components can deliver both more economical definitions and a smoother user experience.
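One low-ceremony way to realize that (a sketch under assumed conventions, not tooling from the project; the names are mine) is to keep common schemas in a shared library document and merge them into each service spec, failing loudly on divergent definitions so a local `Error` schema can never silently drift from the shared one:

```python
# Hypothetical shared component library; services reference entries as
# e.g. #/components/schemas/Error for a uniform error envelope.
SHARED_COMPONENTS = {
    "schemas": {
        "Error": {
            "type": "object",
            "required": ["code", "message"],
            "properties": {
                "code": {"type": "string"},
                "message": {"type": "string"},
            },
        }
    }
}

def merge_shared(spec: dict, shared: dict = SHARED_COMPONENTS) -> dict:
    """Inject shared components into a spec, rejecting divergent redefinitions."""
    components = spec.setdefault("components", {})
    for section, entries in shared.items():
        target = components.setdefault(section, {})
        for name, definition in entries.items():
            if name in target and target[name] != definition:
                raise ValueError(f"component {section}/{name} diverges from shared library")
            target[name] = definition
    return spec

service_spec = {
    "openapi": "3.0.3",
    "info": {"title": "Inventory", "version": "1.0.0"},
    "paths": {
        "/items": {
            "get": {
                "responses": {
                    "default": {
                        "description": "Error",
                        "content": {"application/json": {
                            "schema": {"$ref": "#/components/schemas/Error"}}},
                    }
                }
            }
        }
    },
}
merged = merge_shared(service_spec)
print("Error" in merged["components"]["schemas"])  # True
```

Running this as a pipeline step (or a CI check) makes the shared behaviors a guarantee rather than a convention.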

Validation

It’s typically a good idea to make sure that an API is actually doing what is desired. Often this is lumped into one big bucket, but I strongly advocate splitting testing whether an API is well-behaved from testing whether it is actually useful. The former amounts to validating that responses have proper statuses and bodies that are well-formed and structurally sound, whereas the latter verifies that a user can actually get the proposed value out of the system; this involves the more interesting business semantics and may require actions which span combinations of operations and systems. Having a well-behaved API is important but is ultimately the boilerplate channel for delivering that value. The core value should be the interesting pieces that are worth attention in both implementing and testing, and the friction of delivering and validating that value should be reduced as much as possible. A testing tool which can verify API behavior based on an OpenAPI spec can absorb much of the more mechanical churn so that the focus can remain on the existential purpose of the API.
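The "well-behaved" half of that split really is mechanical. As a deliberately oversimplified illustration (hand-rolled for this post; real tooling checks the full spec, types, nesting, and $ref resolution), the check below only asks whether the response status is declared for the operation and whether the body carries the schema's required fields, and says nothing about whether the data is useful:

```python
def is_well_behaved(operation: dict, status: int, body: dict) -> bool:
    """Structural check only: declared status plus required fields present.

    Simplified sketch: ignores types, nested schemas, content negotiation,
    and $ref resolution.
    """
    responses = operation.get("responses", {})
    declared = responses.get(str(status)) or responses.get("default")
    if declared is None:
        return False  # undeclared status: not well-behaved
    schema = (declared.get("content", {})
                      .get("application/json", {})
                      .get("schema", {}))
    return all(field in body for field in schema.get("required", []))

# Hypothetical operation fragment from a spec.
get_order = {
    "responses": {
        "200": {
            "description": "An order",
            "content": {"application/json": {"schema": {
                "type": "object",
                "required": ["id", "status"],
            }}},
        }
    }
}
print(is_well_behaved(get_order, 200, {"id": "o-1", "status": "open"}))  # True
print(is_well_behaved(get_order, 500, {"oops": True}))                   # False
```

Whether the order data is actually meaningful to a caller is the half that deserves the hand-written attention.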

Driving API validation from the OpenAPI spec also has the potential advantage that the defined API can be treated more directly as a first-class deliverable. As an infrastructure evolves it may make sense to shift the ownership of operations within an API, but any resulting impacts on compatibility should be tightly controlled. By making sure that validation is tied to a spec which can itself be controlled, the implementation can be varied more safely.

I was originally fairly enticed by Dredd as a means to perform this type of validation, but I have ultimately adopted Prism for this purpose. While the seemingly automagic testing of Dredd is tempting, the project as a whole seems to have stalled and its declarative nature quickly gets awkward. Using Prism’s validating proxy with some minimal tests verifies that the API is behaving as defined while fitting naturally into standard practices. One potential means to bridge the gap between the more implicit Dredd behavior and the lazier Prism approach would be to generate tests that guarantee the spec is exercised to a desired level of completeness. I may explore this option depending on how use plays out; manually writing the tests seems sufficient so far, but there are certainly cases where it may be inefficient (such as backfilling tests for an existing API).
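That generation idea could start as little more than walking the spec's operations and emitting a stub (or a coverage complaint) for each one. The sketch below is hypothetical, not an existing tool: it lists every path/method pair that has no corresponding test, which is exactly the worklist a backfill effort would need.

```python
HTTP_METHODS = {"get", "put", "post", "delete", "options", "head", "patch", "trace"}

def operations(spec: dict):
    """Yield (METHOD, path) for every operation defined in the spec."""
    for path, item in spec.get("paths", {}).items():
        for method in item:
            if method in HTTP_METHODS:
                yield method.upper(), path

def untested(spec: dict, covered: set) -> list:
    """Operations with no corresponding test, as a sorted worklist."""
    return sorted(op for op in operations(spec) if op not in covered)

# Hypothetical spec skeleton and test-coverage set.
spec = {"paths": {
    "/orders": {"get": {}, "post": {}},
    "/orders/{id}": {"get": {}},
}}
covered = {("GET", "/orders")}
print(untested(spec, covered))
# [('GET', '/orders/{id}'), ('POST', '/orders')]
```

Emitting skeleton test cases instead of a list would be a small step further, and failing CI on a non-empty worklist would make spec coverage a hard guarantee.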

Client/SDK Generation

In smaller cases I don’t see generating a client from OpenAPI as being valuable. It can often be counterproductive: one of the primary values of a REST API is that it is built on top of existing standards, and a well designed API should be easy to use with those standards alone. In smaller systems it is also very likely that a given consumer uses only a small portion of the exposed API. For these reasons the assorted overheads of distributing a client package may not prove worthwhile.

On the other hand, for widely used APIs, particularly those where the client systems are likely to be maintained by third parties, generating SDKs is likely to be very valuable. Providing a supported reference client implementation can lower support costs across runtimes, and having the option to provide client-side behavior can keep the resulting system simpler, more resilient, and better behaved.

Server Generation

As will be touched on elsewhere, I’m a proponent of reducing code as a whole and boilerplate specifically. I haven’t been in a position recently to use any solutions which generate service code based on an OpenAPI document, but I’d actively explore such options when available.

A related notion is that of generating mock services. This is a practice I am currently exploring and promoting.
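In its simplest form a mock only needs to answer each operation with a canned example from the spec (Prism's `mock` command provides a production-grade version of this). A minimal, hypothetical sketch of that resolution step, picking the lowest declared 2xx response and its JSON example:

```python
def mock_response(spec: dict, path: str, method: str):
    """Return (status, example body) for an operation, or (404, None).

    Simplified sketch: exact path match only, JSON content only, and it
    falls back to 501 when no 2xx response is declared.
    """
    operation = spec.get("paths", {}).get(path, {}).get(method.lower())
    if operation is None:
        return 404, None
    for status in sorted(operation.get("responses", {})):
        if status.startswith("2"):
            media = (operation["responses"][status]
                     .get("content", {})
                     .get("application/json", {}))
            return int(status), media.get("example")
    return 501, None

# Hypothetical spec fragment with a declared example.
spec = {"paths": {"/orders/{id}": {"get": {"responses": {
    "200": {"description": "An order",
            "content": {"application/json": {
                "example": {"id": "o-1", "status": "open"}}}},
}}}}}
print(mock_response(spec, "/orders/{id}", "GET"))
# (200, {'id': 'o-1', 'status': 'open'})
```

Wrapping such a lookup in any HTTP server yields a mock that client teams can build against before the real implementation exists.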

(1) OpenAPI-specification/ at main - OAI/OpenAPI-specification - GitHub [online]. 16 February 2021. Available from: