Since I started this blog, I have needed to develop a couple of sample applications to showcase some of the topics I have been covering. Usually, it's some kind of Java EE application that needs to be deployed in a Java EE container. Even with instructions on how to set up the environment, it can be tricky for a newcomer. A few of my readers don't have a Java EE container available on their local machine. Some don't even have the Java Development Kit installed. If I could provide the entire environment already set up, and you only needed to execute it somehow, wouldn't that be great? I do think so! Instead of distributing only the application, let's also distribute the environment needed for the application to run. We can do that using Docker.
A couple of weeks ago, I wrote the post Get into Docker. Now, this post is going to continue exploring one of the most interesting Docker features: Docker Images. This is the answer to providing my readers with a complete environment with everything ready to run.
Docker Image
A Docker Image is a read only template used to create the Docker containers. Each image is built with a series of layers composing your final image. If you need to distribute something using Ubuntu and Apache, you start with a base Ubuntu image and add Apache on top.
Create a Docker Image file
I'm going to use one of my latest applications, the World of Warcraft Auction House, to show how we can package it into a Docker Image and distribute it to others. The easiest way is to create a Dockerfile. This is a simple plain text file that contains a set of instructions that tell Docker how to build our image. The instructions you can use are well defined and straightforward. Check the Dockerfile reference page for a list of possible instructions. Each instruction adds a new layer to your Docker Image. Usually the file is simply named Dockerfile. Place it in a directory of your choice.
Base Image
Every Dockerfile needs to start with a FROM
instruction. We need to start from somewhere, so this indicates the base image that we are going to use to build our environment. If you were building a Virtual Machine, you would also have to start from somewhere, by picking the Operating System you are going to use. With the Dockerfile it's no different. Let's add the following to the Dockerfile:
FROM debian:latest
Our base image will be the latest version of Debian, available here: Docker Hub Debian Repo.
Add what you need
The idea here is to build an environment that checks out the code, builds, and executes the World of Warcraft Auction House sample. Can you figure out what you need? The JDK, of course, to compile and run the code, and Maven to perform the build. But these are not enough. You also need the Git command line client to check out the code. For this part you need to know a little bit about Unix shell scripting.
Since we need to install the JDK and Maven, we need to download them into our image from somewhere. You can use the wget
command to do it. But wget
is not available in our base Debian image so we need to install it first. To run shell commands we use the RUN
instruction in the Dockerfile.
Install wget:
RUN apt-get update && apt-get -y install wget git
Install the JDK:
RUN wget --no-check-certificate --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u40-b25/jdk-8u40-linux-x64.tar.gz && \
    mkdir /opt/jdk && \
    tar -zxf jdk-8u40-linux-x64.tar.gz -C /opt/jdk && \
    update-alternatives --install /usr/bin/java java /opt/jdk/jdk1.8.0_40/bin/java 100 && \
    update-alternatives --install /usr/bin/javac javac /opt/jdk/jdk1.8.0_40/bin/javac 100 && \
    rm -rf jdk-8u40-linux-x64.tar.gz
Install Maven:
RUN wget http://mirrors.fe.up.pt/pub/apache/maven/maven-3/3.2.5/binaries/apache-maven-3.2.5-bin.tar.gz && \
    tar -zxf apache-maven-3.2.5-bin.tar.gz -C /opt/ && \
    rm -rf apache-maven-3.2.5-bin.tar.gz
We also need to have Java and Maven accessible from anywhere in our image. As you would do on your local machine when setting up the environment, you need to set JAVA_HOME and add the Maven binary to the PATH. You can do this by using the Docker ENV instruction.
ENV JAVA_HOME /opt/jdk/jdk1.8.0_40/
ENV PATH /opt/apache-maven-3.2.5/bin:$PATH
Add the application
Now that we have the required environment for the World of Warcraft Auction House, we just need to clone the code and build it:
RUN cd opt && \
    git clone https://github.com/radcortez/wow-auctions.git wow-auctions

WORKDIR /opt/wow-auctions/

RUN mvn clean install && \
    cd batch && \
    mvn wildfly:start
We also want to expose a port so you can access the application. You should use the listening HTTP port of the application server; in this case, it's 8080. You can do this in Docker with the EXPOSE instruction:
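EXPOSE 8080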
I had to use a little trick here. I don't want to download and install a full application server, so I'm using the embedded Wildfly version of the Maven plugin. Now, as I told you before, each instruction of the Dockerfile adds a new layer to the image. Here I'm forcing a start and stop of the server, just so Maven downloads the required dependencies and has them available in the image. If I didn't do this, whenever I wanted to run the image, all the server dependencies would have to be downloaded and the startup would take considerably longer.
Run the application
The final instruction should be a CMD
to set the command to be executed when running the image:
CMD git pull && cd batch && mvn wildfly:run
In this case we want to make sure we are using the latest code, so we do a git pull
and then run the embedded Wildfly server. The deploy configuration has already been set up in the Wildfly Maven plugin.
Complete Dockerfile
FROM debian:latest
MAINTAINER Roberto Cortez <radcortez@yahoo.com>

RUN apt-get update && apt-get -y install wget git

RUN wget --no-check-certificate --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u40-b25/jdk-8u40-linux-x64.tar.gz && \
    mkdir /opt/jdk && \
    tar -zxf jdk-8u40-linux-x64.tar.gz -C /opt/jdk && \
    update-alternatives --install /usr/bin/java java /opt/jdk/jdk1.8.0_40/bin/java 100 && \
    update-alternatives --install /usr/bin/javac javac /opt/jdk/jdk1.8.0_40/bin/javac 100 && \
    rm -rf jdk-8u40-linux-x64.tar.gz

ENV JAVA_HOME /opt/jdk/jdk1.8.0_40/

RUN wget http://mirrors.fe.up.pt/pub/apache/maven/maven-3/3.2.5/binaries/apache-maven-3.2.5-bin.tar.gz && \
    tar -zxf apache-maven-3.2.5-bin.tar.gz -C /opt/ && \
    rm -rf apache-maven-3.2.5-bin.tar.gz

ENV PATH /opt/apache-maven-3.2.5/bin:$PATH

RUN cd opt && \
    git clone https://github.com/radcortez/wow-auctions.git wow-auctions

WORKDIR /opt/wow-auctions/

RUN mvn clean install && \
    cd batch && \
    mvn wildfly:start

EXPOSE 8080

CMD git pull && cd batch && mvn wildfly:run
Build the Dockerfile
To be able to distribute your image, you need to build your Dockerfile. The build reads every instruction, executes it and adds a layer to your Docker Image. You only need to do this once, unless you change your Dockerfile. The CMD
instruction is not executed in the build, since it’s only used when you are actually running the image and executing the container.
To build the Dockerfile, run the following command in the directory containing it:
docker build -t radcortez/wow-auctions .
The -t radcortez/wow-auctions
is there to tag and name the image I'm building. Use the format user/name, with the same user name you registered with Docker Hub.
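Once the build finishes, you can check that the new image shows up in your local image list:

docker images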
Pushing the Image
Docker Hub is a Docker Image repository. It's the same concept as Maven repositories for Java libraries. Download or upload images and you are good to go. Docker Hub already contains a huge number of images ready to use, from simple Unix distributions to full blown application servers.
We can now take the image we built locally and upload it to Docker Hub. This will allow anyone to download and use this image. We can do it like this:
docker push radcortez/wow-auctions
Depending on the image size, this can take a few minutes.
Run the Image
Finally, to run the image and start a container, we execute:
docker run -it --name wow-auctions -p 8080:8080 radcortez/wow-auctions
Since I've built the image locally first, this will run the CMD of my local radcortez/wow-auctions image. If you haven't built it yourself, just by using the above command the image is going to be downloaded from Docker Hub and executed in your environment.
Conclusion
With Docker, it's possible to distribute your own applications together with the environment they need to run properly, created by you. It's not exactly trivial, since you need some knowledge of Unix, but it shouldn't be a problem.
My main motivation to use Docker here was to simplify the distribution of my sample applications. It's not unusual to receive a few reader emails asking for help to set up their environment. Sure, this way you now have to install Docker too, but that's the only thing you need. The rest, just leave it to me!
Related Articles
Remember to check my introductory post about Docker:
Get Into Docker
Last Thursday, 16 April 2015, the eighth meeting of Coimbra JUG was held at the Department of Informatics Engineering of the University of Coimbra, in Portugal. The attendance was not great compared to the number of people who signed up (around 20 out of almost 40), but it was still a worthy session. We had the pleasure to listen to Bruno Baptista talking about Integration Testing with Arquillian. A very special thanks to Bruno for taking the challenge and steering the session. He is also going to support the group and help me run it.
Anyway, the whole audience recognized the importance of Integration Tests, but no one was implementing them. This is why I think these kinds of sessions about testing are very important to create awareness in the community and among professionals. It's no secret that these practices produce a higher quality result, but for one reason or another, testing is sometimes an elusive task. Bruno explained the main benefits of Integration Testing, plus the major pain points in performing these tests. Using a demo, Bruno showed that Arquillian can alleviate much of the pain, demonstrating how to test a JPA and a REST application.

As always, we had surprises for the attendees. IntelliJ sponsored our event by offering a free license to raffle among the attendees. Congratulations to Miriam Lopes for winning the license. Develop with pleasure! We also offered the book Continuous Enterprise Development in Java, courtesy of O'Reilly; congratulations to Victor Reina. We also handed out a few ZeroTurnaround t-shirts.
Here are the materials for the session:
Enjoy!
Have you ever heard about Docker before? Most likely. If not, don't worry, I'll try to summarize it for you. Docker is probably one of the hottest technologies at the moment. It has the potential to revolutionize the way we build, deploy and distribute applications. At the same time, it's already having a huge impact on the development process.
In some cases, development environments can be so complicated that it's hard to keep consistency between the different team members. I'm pretty sure that most of us have already suffered from the "Works on my Machine" syndrome, right? One way to deal with the problem is to build Virtual Machines (VMs) with everything set up, so you can distribute them to your team. But VMs are slow, large, and you cannot access them if they are not running.
What is Docker?
Short answer: it's like a lightweight VM. In practice, that's not really the case, since Docker is different from a regular VM. Docker creates a container for your application, packaged with all of the required dependencies and ready to run. These containers run on a shared Linux kernel, but they are isolated from each other. This means that you don't need the usual VM operating system, which gives a considerable performance boost and shrinks the application size.
Let’s dig a little more into detail:
Docker Image
A Docker Image is a read only template used to create the Docker containers. Each image is built with a series of layers composing your final image. If you need to distribute something using Ubuntu and Apache, you start with a base Ubuntu image and add Apache on top. If you later want to upgrade to a Tomcat instance, you just add another layer to your image. Instead of distributing the entire image as you would with a VM, you just release the update.
Docker Registry
The Docker Registry, also called Docker Hub, is a Docker Image repository. It's the same concept as Maven repositories for Java libraries. Download or upload images and you are good to go. Docker Hub already contains a huge number of images ready to use, from simple Unix distributions to full blown application servers.
Docker Container
A Docker Container is the runtime component of the Docker Image. You can spin up multiple containers from the same Docker Image, each in an isolated context. Docker containers can be run, started, stopped, moved, and deleted.
How do I start?
You need to install Docker, of course. Please refer to Docker's installation guides. They are pretty good and I had no problem installing the software. Make sure you follow the proper guide for your system.
Our first Docker Container
After having Docker installed, you can immediately type in your command line:
docker run -it -p 8080:8080 tomcat
You should see the following message:
Unable to find image ‘tomcat:latest’ locally
And a lot of downloads starting. Like Maven when it fetches the libraries needed to build an application, Docker downloads everything required to run Tomcat by reaching out to Docker Hub. It takes a while to download. (Great, one more thing that downloads the Internet. Luckily we can use ZipRebel to download it quickly.)
After everything is downloaded, you should see the Tomcat instance booting up, and you can access it by going to http://localhost:8080
on Linux boxes. For Windows and Mac users it's slightly more complicated. Since Docker only works in a Linux environment, to be able to use it on Windows or Mac you need boot2docker (which you should have from the installation guide). This is in fact a VM that runs Docker on Linux completely from memory. To access the Docker containers you need to refer to this VM's IP. You can get the IP with the command: boot2docker ip
.
Explaining the command:
Command | Description
---|---
docker run | The command to create and start a new Docker container.
-it | Runs in interactive mode, so you can see the output after running the container.
-p 8080:8080 | Maps the internal container port to the outside host, usually your machine. Port mapping information can only be set on container creation. If you don't specify it, you need to check which port Docker assigned.
tomcat | Name of the image to run. This is linked to the Docker tomcat repository, which holds the instructions, so Docker knows how to run the server.
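For instance (the container name below is just illustrative), you can let Docker publish the exposed ports to random host ports with the -P flag and then ask which host ports were assigned:

docker run -d -P --name tomcat-random tomcat
docker port tomcat-random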
Remember that if you stop the container and run the same command again, you are creating and running a new container.
Multiple Containers
You can run multiple Tomcat instances by issuing the following commands:
docker run -d -p 8080:8080 --name tomcat tomcat
docker run -d -p 9090:8080 --name web tomcat
These commands create two Tomcat containers named tomcat and web. Just remember to change the port mapping and the name. Adding a name is useful to control the container. If you don't provide one, Docker will randomly generate one for you.
The -d
instructs Docker to run the container in the background. You can now control your container with the following commands:
Command | Description
---|---
docker ps | See a list of all the running Docker containers. Add -a to see all the containers.
docker stop web | Stops the container named web.
docker start web | Starts the container named web.
docker rm web | Removes the container named web.
docker logs web | Shows the logs of the container named web.
Connecting to the Container
If you execute the command docker exec -it web bash, you will be able to connect to the container shell and explore the environment. You can, for instance, verify the running processes with ps -ax
.
radcortez:~ radcortez$ docker exec -it web bash
root@75cd742dc39e:/usr/local/tomcat# ps -ax
  PID TTY      STAT   TIME COMMAND
    1 ?        Ssl+   0:05 /usr/bin/java -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager -Djava.endorsed.dirs=
   47 ?        S      0:00 bash
   51 ?        R+     0:00 ps -ax
root@75cd742dc39e:/usr/local/tomcat#
Interacting with the Container
Let’s add a file to the container:
echo "radcortez" > radcortez
Exit the container, but keep it running. Execute docker diff web
. You are going to see a bunch of Tomcat temporary files, plus the file we just added. This command evaluates the file system differences between the running container and the original image.
Conclusion
We only scratched the surface of Docker's capabilities. It's still too soon to tell if Docker will become a mandatory tool. Currently it's receiving major adoption from big players like Google, Microsoft and Amazon. Docker may end up failing in the end, but it sure reopened an old discussion which doesn't have a clear answer yet.
Related Articles
Learn how to create, build and distribute your own Docker Images in this follow up post:
Distribute your applications with Docker Images
For some time now, most of the main JPA implementations, like Hibernate, EclipseLink, OpenJPA or DataNucleus, have offered ways to generate database schema objects. These include the generation of tables, primary keys, foreign keys, indexes and other objects. Unfortunately, these mechanisms are not standard across implementations, which hurts when dealing with multiple environments. Only with the latest JPA 2.1 specification was Schema Generation standardized.
From now on, if you are using Java EE 7, you don't have to worry about the differences between the providers. Just use the new standard properties and you are done. Of course, you might be thinking that these are not needed at all, since database schemas for your environments should not be managed like this. Still, they are very useful for development or testing purposes.
Schema Generation
Properties
If you wish to use the new standards for Schema Generation, just add any of the following properties to your properties
section of the persistence.xml
:
Property | Description | Values
---|---|---
javax.persistence.schema-generation.database.action | Specifies the action to be taken regarding the database schema. Possible values are self-explanatory. If this property is not specified, no actions are performed in the database. | none, create, drop-and-create, drop
javax.persistence.schema-generation.create-source | Specifies how the database schema should be created. It can be done by just using the annotation metadata specified in the application entities, by executing a SQL script, or a combination of both. You can also define the order. This property does not need to be specified for schema generation to occur. The default value is metadata. You need to be careful if you use a combination of create actions. The resulting actions may generate unexpected behaviour in the database schema and lead to failure. | metadata, script, metadata-then-script, script-then-metadata
javax.persistence.schema-generation.drop-source | Same as javax.persistence.schema-generation.create-source, but for drop actions. | metadata, script, metadata-then-script, script-then-metadata
javax.persistence.schema-generation.create-script-source, javax.persistence.schema-generation.drop-script-source | Specifies the location of a SQL script file to execute on create or drop of the database schema. | String for the file URL to execute
javax.persistence.sql-load-script-source | Specifies the location of a SQL file to load data into the database. | String for the file URL to execute
Additionally, it’s also possible to generate SQL scripts with the Schema Generation actions:
Property | Description | Values
---|---|---
javax.persistence.schema-generation.scripts.action | Specifies which SQL scripts should be generated. Scripts are only generated if the corresponding generation location targets are specified. | none, create, drop-and-create, drop
javax.persistence.schema-generation.scripts.create-target, javax.persistence.schema-generation.scripts.drop-target | Specifies the target location where the SQL script file to create or drop the database schema is generated. | String for the file URL to write the script to
Samples
The following sample drops and creates the database schema objects needed by the JPA application. It relies on the annotation metadata of the entities and also executes an arbitrary SQL file named load.sql
.
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1"
             xmlns="http://xmlns.jcp.org/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
                                 http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
    <persistence-unit name="MyPU" transaction-type="JTA">
        <properties>
            <property name="javax.persistence.schema-generation.database.action" value="drop-and-create"/>
            <property name="javax.persistence.schema-generation.create-source" value="metadata"/>
            <property name="javax.persistence.schema-generation.drop-source" value="metadata"/>
            <property name="javax.persistence.sql-load-script-source" value="META-INF/load.sql"/>
        </properties>
    </persistence-unit>
</persistence>
Another sample generates the scripts to create and drop the database schema objects in the target locations:
<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.1"
             xmlns="http://xmlns.jcp.org/xml/ns/persistence"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/persistence
                                 http://xmlns.jcp.org/xml/ns/persistence/persistence_2_1.xsd">
    <persistence-unit name="MyPU" transaction-type="JTA">
        <properties>
            <property name="javax.persistence.schema-generation.scripts.action" value="drop-and-create"/>
            <property name="javax.persistence.schema-generation.scripts.create-target" value="file:/tmp/create.sql"/>
            <property name="javax.persistence.schema-generation.scripts.drop-target" value="file:/tmp/drop.sql"/>
        </properties>
    </persistence-unit>
</persistence>
Both samples can also be combined to drop and create the database objects and generate the corresponding scripts that perform these operations. You can find these and other samples in the Java EE Samples project hosted on GitHub.
Limitations
As I mentioned before, I recommend that you use these properties for development or testing purposes only. A wrong setting might easily destroy or mess with your production database.
There are no actions to update or just validate the schema. I couldn’t find the reason why they didn’t make it into the specification, but here is an issue with the improvement suggestion.
The database schema actions are only performed on application deployment in a Java EE environment. For development, you might want to perform the actions on a server restart.
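If you want to trigger these actions yourself, outside of a deployment, JPA 2.1 also provides the Persistence.generateSchema API. Here is a minimal sketch, assuming a persistence unit named MyPU that your provider can bootstrap outside the container; the overridden properties are just placeholders:

import javax.persistence.Persistence;

import java.util.HashMap;
import java.util.Map;

public class SchemaGenerator {

    public static void main(String[] args) {
        // Properties passed here override the ones in persistence.xml (placeholder values).
        Map<String, Object> properties = new HashMap<>();
        properties.put("javax.persistence.schema-generation.database.action", "drop-and-create");
        properties.put("javax.persistence.schema-generation.create-source", "metadata");
        properties.put("javax.persistence.schema-generation.drop-source", "metadata");

        // Performs the configured schema generation actions for the given persistence unit.
        Persistence.generateSchema("MyPU", properties);
    }
}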
Support
Both Hibernate and EclipseLink, which are bundled with Wildfly and Glassfish, support these properties.
OpenJPA currently does not support these properties, but I've been working on the OpenJPA support for standard Schema Generation. If you're curious or want to follow the progress, check my GitHub repo here. This was actually my main motivation to write this post, since I'm a bit involved in the implementation of the feature.
I hope you enjoyed the post 🙂
In one way or another, every developer has come in touch with an API. Either integrating a major system for a big corporation, producing some fancy charts with the latest graph library, or simply by interacting with their favorite programming language. The truth is that APIs are everywhere! They actually represent a fundamental building block of today's Internet, playing a fundamental role in the data exchange process that takes place between different systems and devices. From the simple weather widget on your mobile phone to a credit card payment you perform in an online shop, none of these would be possible if those systems didn't communicate with each other by calling one another's APIs.
So with the ever growing ecosystem of heterogeneous devices connected to the Internet, APIs face a new set of demanding challenges. While they must continue to perform in a reliable and secure manner, they must also be compatible with all these devices, which can range from a wristwatch to the most advanced server in a data center.
REST to the rescue
One of the most widely used technologies for building such APIs is the so-called REST API. These APIs aim to provide a generic and standardized way of communication between heterogeneous systems. Because they rely heavily on standard communication protocols and data representations – like HTTP, XML or JSON – it's quite easy to provide client-side implementations in most programming languages, thus making them compatible with the vast majority of systems and devices.
So while these REST APIs can be compatible with most devices and technologies out there, they must also evolve. And the problem with evolution is that you sometimes have to maintain retro-compatibility with old client versions.
Let’s build up an example.
Let's imagine an appointment system with an API to create and retrieve appointments. To simplify things, let's imagine our appointment object with a date and a guest name. Something like this:
public class AppointmentDTO {
    public Long id;
    public Date date;
    public String guestName;
}
A very simple REST API would look like this:
@Path("/api/appointments")
public class AppointmentsAPI {

    @GET
    @Path("/{id}")
    public AppointmentDTO getAppointment(@PathParam("id") String id) { ... }

    @POST
    public void createAppointment(AppointmentDTO appointment) { ... }
}
Let’s assume this plain simple API works and is being used on mobile phones, tablets and various websites that allow for booking and displaying appointments. So far so good.
At some point, you decide it would be very interesting to start gathering some statistics about your appointment system. To keep things simple, you just want to know who is the person who booked the most times. For this you need to correlate guests between themselves, so you decide to add a unique identifier to each guest. Let's use the email. So now your object model would look something like this:
public class AppointmentDTO {
    public Long id;
    public Date date;
    public GuestDTO guest;
}

public class GuestDTO {
    public String email;
    public String name;
}
So our object model changed slightly, which means we will have to adapt the business logic in our API.
The Problem

While adapting the API to store and retrieve the new object types should be a no-brainer, the problem is that all your current clients are using the old model and will continue to do so until they update. One can argue that you shouldn't have to worry about this, and that customers should update to the newer version, but the truth is that you can't really force an update overnight. There will always be a time window where you have to keep both models running, which means your API must be retro-compatible.
This is where your problems start.
So back to our example, in this case it means that our API will have to handle both object models and be able to store and retrieve those models depending on the client. So let's add the guestName back to our object to maintain compatibility with the old clients:
public class AppointmentDTO {
    public Long id;
    public Date date;

    @Deprecated //For retro compatibility purposes
    public String guestName;

    public GuestDTO guest;
}
Remember, a good rule of thumb for API objects is that you should never delete fields. Adding new ones usually won't break any client implementations (assuming they follow a good rule of thumb of ignoring unknown fields), but removing fields is usually a road to nightmares.
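As an illustration, if your clients happen to use Jackson for the JSON binding (an assumption on my part, not something the example requires), ignoring unknown fields on the client-side DTO is a single annotation:

import com.fasterxml.jackson.annotation.JsonIgnoreProperties;

import java.util.Date;

//Client-side copy of the DTO: fields added by newer API versions are simply ignored.
@JsonIgnoreProperties(ignoreUnknown = true)
public class AppointmentDTO {
    public Long id;
    public Date date;
    public String guestName;
}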
Now, to keep the API compatible, there are a few different options. Let's look at some of the alternatives:
- Duplication: pure and simple. Create a new method for the new clients and keep the old ones using the existing one.
- Query parameters: introduce a flag to control the behavior. Something like useGuests=true.
- API Versioning: Introduce a version in your URL path to control which method version to call.
So all these alternatives have their pros and cons. While duplication can be plain simple, it can easily turn your API classes into a bowl of duplicated code.
Query parameters can (and should) be used for behavior control (for example, to add pagination to a listing), but we should avoid using them for actual API evolutions, since these changes are usually permanent and therefore you don't want to make them optional for the consumer.
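To illustrate the kind of behavior control that query parameters are good for, here is a minimal pagination sketch on the appointments listing (the getAppointments method and its page/size parameters are hypothetical, not part of the API above):

@Path("/api/appointments")
public class AppointmentsAPI {

    //Behavior control only: same resource, the parameters just window the results.
    @GET
    public List<AppointmentDTO> getAppointments(@QueryParam("page") @DefaultValue("0") int page,
                                                @QueryParam("size") @DefaultValue("20") int size) { ... }
}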
Versioning seems like a good idea. It allows for a clean way to evolve the API, it keeps old clients separated from new ones, and it provides a generic base for all kinds of changes that will occur during your API's lifespan. On the other hand, it also introduces a bit of complexity, especially if you end up with different calls at different versions. Your clients would end up having to manage your API evolution themselves by upgrading a call, instead of the API. It's as if, instead of upgrading a library to the next version, you would upgrade only a certain class of that library. This can easily turn into a version nightmare…
To overcome this, we must ensure that our versions cover the whole API. This means that I should be able to call every available method on /v1 using /v2. Of course, if a newer version of a given method exists in v2, it should be run on the /v2 call. However, if a given method hasn't changed in v2, I expect the v1 version to be called seamlessly.
Inheritance based API Versioning
In order to achieve this, we can take advantage of Java's polymorphic capabilities. We can build up API versions in a hierarchical way, so that older version methods can be overridden by newer ones, and calls to a newer version of an unchanged method seamlessly fall back to its earlier version.
So back to our example we could build up a new version of the create method so that the API would look like this:
@Path("/api/v1/appointments") //We add a version to our base path
public class AppointmentsAPIv1 { //We add the version to our API classes

    @GET
    @Path("/{id}")
    public AppointmentDTO getAppointment(@PathParam("id") String id) { ... }

    @POST
    public void createAppointment(AppointmentDTO appointment) {
        //Your old way of creating Appointments only with names
    }
}

//New API class that extends the previous version
@Path("/api/v2/appointments")
public class AppointmentsAPIv2 extends AppointmentsAPIv1 {

    @POST
    @Override
    public void createAppointment(AppointmentDTO appointment) {
        //Your new way of creating appointments with guests
    }
}
So now we have two working versions of our API. While all the old clients that haven't yet upgraded to the new version will continue to use v1 – and will see no changes – all your new consumers can now use the latest v2. Note that all these calls are valid:
Call | Result
---|---
GET /api/v1/appointments/123 | Will run getAppointment on the v1 class |
GET /api/v2/appointments/123 | Will run getAppointment on the v1 class |
POST /api/v1/appointments | Will run createAppointment on the v1 class |
POST /api/v2/appointments | Will run createAppointment on the v2 class |
This way, any consumers that want to start using the latest version only have to update their base URLs to the corresponding version, and all of the API will seamlessly shift to the most recent implementations, while keeping the old unchanged ones.
Caveat
For the keen eye there is an immediate caveat with this approach. If your API consists of tens of different classes, a newer version would imply duplicating them all to an upper version, even those where you don't actually have any changes. It's a bit of boilerplate code that can be mostly auto-generated. Still annoying, though.
Although there is no quick way to overcome this, the use of interfaces could help. Instead of creating a new implementation class, you could simply create a new Path-annotated interface and have it implemented by your current implementing class. Although you would still have to create one interface per API class, it is a bit cleaner. It helps a little bit, but it's still a caveat.
Final thoughts
API versioning seems to be a hot topic at the moment. Lots of different angles and opinions exist, but there seems to be a lack of standard best practices. While this post doesn't aim to provide such practices, I hope it helps you achieve a better API structure and contributes to its maintainability.
A final word goes to Roberto Cortez for encouraging and allowing this post on his blog. This is actually my first blog post so load the cannons and fire at will. 😉