Joining the Tribe

posted by Roberto Cortez

Today, I'm officially part of Tomitribe. I'm very excited to work with all the amazing tribers working hard to make TomEE a compelling Java EE server.

Some of my readers follow me because I work as a Freelancer, and this move may confuse you. Let me assure you that I'm not joining Tomitribe because I was unhappy with my Freelancer career. In fact, when I started as a Freelancer a couple of years ago, I had no idea how things were going to turn out. Today, I can say that it was the right move. I had the chance to work on my own stuff, travel to conferences around the world, meet a lot of different people and have fun in general.

At Tomitribe, I believe I will be able to do all the things I love as a Freelancer, and many more. Until now, I have only developed or built applications that sit and run on a server. From now on, I have the chance to work on the server itself. Tomitribe is still a startup, a small one, but with incredible potential. After working for a few major corporations, I'm eager to help grow something from the ground up. This was simply an opportunity that I couldn't refuse.

I have to say thank you to Decare Systems Ireland and Anthem for two wonderful years. They always understood my needs and gave me the freedom to explore my own things.

Moving forward, a big thanks to David and Amelia for believing in me and bringing me to Tomitribe. I hope I can fulfill your expectations. Cheers!

Tomitribe Team

I'm sorry that Andy is missing, but I couldn't find a better picture :(

Anyway, you can expect me to keep the blog running with awesome and independent content. Thank you for reading!

Wildfly, Apache CXF and @SchemaValidation

posted by Roberto Cortez

Over the last few days, I have been working on an application migration from JBoss 4 to Wildfly 8. The application uses several technologies, but we are going to focus here on XML Web Services, JAX-WS. Yeah, I know they are not trendy anymore, but these were developed a long time ago and need to be maintained for compatibility reasons.

Anyway, the path to migrate these services was not so easy. I'm sharing some of the problems and fixes, with the hope that they can help other developers stuck with the same problems.

Sample Definition

Here is a sample of a Web Service definition in the old system, JBoss 4:
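
A minimal sketch of what that looked like; the service class, the error handler and the config name are illustrative:

import javax.ejb.Stateless;
import javax.jws.WebMethod;
import javax.jws.WebService;

import org.jboss.ws.annotation.EndpointConfig;
import org.jboss.ws.annotation.SchemaValidation;

@Stateless
@WebService
@EndpointConfig(configName = "Standard Endpoint")
@SchemaValidation(errorHandler = CustomValidationErrorHandler.class)
public class CustomerServiceImpl {
    // Incoming and outgoing messages are validated against the generated WSDL schema.
    @WebMethod
    public String getCustomer(final String customerId) {
        return "customer-" + customerId;
    }
}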

Luckily, most of the definition uses standard Java EE annotations. Only @org.jboss.ws.annotation.EndpointConfig and @org.jboss.ws.annotation.SchemaValidation come from the old JBossWS libraries.

We can easily get rid of @org.jboss.ws.annotation.EndpointConfig, since we are not going to need it in the new application. For reference, it's used to attach predefined configuration data to an endpoint. Check the documentation: Predefined client and endpoint configurations.

We want to keep @org.jboss.ws.annotation.SchemaValidation. For reference, this annotation validates incoming and outgoing SOAP messages against the relevant schema in the endpoint WSDL contract. Since the annotation no longer exists in JBossWS, we have to use Apache CXF, which is the underlying JAX-WS implementation on Wildfly.

Problems

Here are a few of the problems I’ve faced:

SchemaValidation Annotation

The annotation @org.jboss.ws.annotation.SchemaValidation doesn’t exist anymore. You have to use the annotation org.apache.cxf.annotations.SchemaValidation from Apache CXF.

Add the following Maven dependency to use the Apache CXF annotation:
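
The exact artifact and version depend on your Wildfly release; something along these lines, with provided scope since the server already bundles CXF:

<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-rt-frontend-jaxws</artifactId>
    <version>2.7.11</version>
    <scope>provided</scope>
</dependency>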

Also, notice that in the original annotation we could define an errorHandler property. The old application used a custom error handler to set a custom error message on schema validation errors. There is no equivalent property in the new annotation, so we need to do it another way. To replicate the old behaviour, I've used Apache CXF Interceptors. Create an interceptor class that extends AbstractPhaseInterceptor. Here is a sample:
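
A sketch of such an interceptor; the replacement message and the check for Xerces cvc- validation error codes are illustrative:

import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;

public class SchemaValidationInterceptor extends AbstractPhaseInterceptor<Message> {
    public SchemaValidationInterceptor() {
        // Run while the fault is being written to the outgoing message.
        super(Phase.MARSHAL);
    }

    @Override
    public void handleMessage(final Message message) throws Fault {
        final Exception exception = message.getContent(Exception.class);
        // Schema validation errors reported by Xerces carry a cvc- error code.
        if (exception instanceof Fault && exception.getMessage() != null
                && exception.getMessage().contains("cvc-")) {
            ((Fault) exception).setMessage("Invalid message. Please check the request against the schema.");
        }
    }
}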

And you can use it like this:
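
The interceptor is referenced by its fully qualified name; the package is illustrative:

import javax.ejb.Stateless;
import javax.jws.WebMethod;
import javax.jws.WebService;

import org.apache.cxf.interceptor.OutFaultInterceptors;

@Stateless
@WebService
@OutFaultInterceptors(interceptors = {"com.example.ws.SchemaValidationInterceptor"})
public class CustomerServiceImpl {
    @WebMethod
    public String getCustomer(final String customerId) {
        return "customer-" + customerId;
    }
}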

Interceptors are used by both CXF clients and CXF servers. There are incoming and outgoing interceptor chains, executed for regular processing and also when an error occurs. In this case, we want to override the Schema Validation message, so we need to bind our interceptor to the outgoing fault interceptor chain. You can use the annotation @OutFaultInterceptors for that behaviour. Each chain is split into phases, and you define the phase where you want the interceptor to run by passing Phase.MARSHAL to the constructor. There are other phases, but since we want to change the error message, we do it in the MARSHAL phase.

Different WSDL

The old Web Services had their WSDL files auto-generated at deploy time. Unfortunately, in some situations, the WSDLs generated by JBoss 4 and Wildfly 8 are different. This can cause problems with your external callers. In this case, the main problem was in the Schema Validation: requests that were valid in JBoss 4 were no longer valid when executed in Wildfly 8.

The reason for this behaviour was in the target namespaces. If your Web Service parameters are POJOs annotated with @XmlRootElement, without the namespace property defined in the annotation, JBoss 4 WS generated the corresponding WSDL element with a blank namespace. Apache CXF, on the other hand, binds blank WSDL elements to the Web Service default namespace. For reference, this is done in CXF code: org.apache.cxf.jaxws.support.JaxWsServiceConfiguration#getParameterName.
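
For example, a hypothetical parameter POJO like this one, with no namespace property set, triggers the difference:

import javax.xml.bind.annotation.XmlRootElement;

// No namespace property: JBoss 4 generated a blank namespace in the WSDL,
// while Apache CXF binds the element to the Web Service default namespace.
@XmlRootElement
public class CustomerRequest {

    private String customerId;

    public String getCustomerId() {
        return customerId;
    }

    public void setCustomerId(final String customerId) {
        this.customerId = customerId;
    }
}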

This could be fixed by changing the CXF code, but we opted to place the old generated WSDL file in the migrated application sources and include it in the distribution. It's not auto-generated anymore, meaning that we need to regenerate the WSDL manually if we change the API, and be careful not to break anything in it. This approach seemed better than maintaining our own CXF version. We could probably submit a fix for this as well, but we believe the JBoss 4 behaviour was not intended.

Start CXF

To use specific APIs from CXF, it's not enough to have a project dependency on them. In fact, the first few times I tried the changes, nothing related to CXF seemed to work. This happens because Wildfly only looks for the standard Java EE JAX-WS annotations. To have all the CXF behaviour working, we need to tell Wildfly that our application depends on CXF, even though the libs are already on the server. Yeah, it's a bit confusing.

The application is deployed in an EAR file, so you need to create a jboss-deployment-structure.xml and add the following content:
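
A minimal sketch, using the CXF module names that ship with Wildfly 8; place the file in the EAR's META-INF (depending on your packaging, you may need to declare the dependencies on the sub-deployment instead):

<?xml version="1.0" encoding="UTF-8"?>
<jboss-deployment-structure>
    <deployment>
        <dependencies>
            <!-- Use the CXF modules already bundled with the server -->
            <module name="org.apache.cxf" services="import"/>
            <module name="org.apache.cxf.impl" services="import"/>
        </dependencies>
    </deployment>
</jboss-deployment-structure>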

Using a MANIFEST.MF in the WAR file apparently doesn’t work if it’s deployed inside an EAR file. For more information, please check Class Loading in WildFly.

If you want to use other CXF features, especially the ones linked with Spring, things might be a bit trickier. Have a look at this post: Assorted facts about JBoss. Fact 6: JBoss and CXF: match made in heaven.

Final Definition

Here is the final definition for our Web Service:
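
A sketch combining the pieces above, with the same illustrative names:

import javax.ejb.Stateless;
import javax.jws.WebMethod;
import javax.jws.WebService;

import org.apache.cxf.annotations.SchemaValidation;
import org.apache.cxf.annotations.SchemaValidation.SchemaValidationType;
import org.apache.cxf.interceptor.OutFaultInterceptors;

@Stateless
@WebService
@SchemaValidation(type = SchemaValidationType.BOTH)
@OutFaultInterceptors(interceptors = {"com.example.ws.SchemaValidationInterceptor"})
public class CustomerServiceImpl {
    @WebMethod
    public String getCustomer(final String customerId) {
        return "customer-" + customerId;
    }
}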

As you can see, only a few changes are required to migrate a Web Service from JBoss 4 to Wildfly. However, there are a few minor details that can block you for a long time if you don't know them. Maybe you have a different setup and face different problems. This can also help if you are just trying to set up CXF with Wildfly. Anyway, I hope this post is useful to you.

Distribute your applications with Docker Images

posted by Roberto Cortez

Since I started this blog, I have had the need to develop a couple of sample applications to showcase some of the topics I have been covering. Usually, some kind of Java EE application that needs to be deployed in a Java EE container. Even with instructions on how to set up the environment, it can be tricky for a newcomer. A few of my readers don't have a Java EE container available on their local machine. Some don't even have the Java Development Kit installed. If I could provide the entire environment already set up, so that you only need to execute it somehow, wouldn't that be great? I do think so! Instead of distributing only the application, we can also distribute the environment needed for the application to run. We can do that using Docker.

A couple of weeks ago, I wrote the post Get into Docker. Now, this post is going to continue exploring one of the most interesting Docker features: Docker Images. This is how I can provide my readers with a complete environment, with everything ready to run.

Docker Image

A Docker Image is a read only template used to create the Docker containers. Each image is built with a series of layers composing your final image. If you need to distribute something using Ubuntu and Apache, you start with a base Ubuntu image and add Apache on top.

Create a Docker Image file

I'm going to use one of my latest applications, the World of Warcraft Auction House, to show how we can package it into a Docker Image and distribute it to others. The easiest way is to create a Dockerfile: a simple plain text file that contains a set of instructions telling Docker how to build our image. The available instructions are well defined and straightforward; check the Dockerfile reference page for the full list. Each instruction adds a new layer to your Docker Image. The file is usually named Dockerfile. Place it in a directory of your choice.

Base Image

Every Dockerfile needs to start with a FROM instruction. We need to start from somewhere, so this indicates the base image that we are going to use to build our environment. If you were building a Virtual Machine you would also have to start from somewhere, by picking the Operating System you are going to use. With the Dockerfile it's no different. Let's add the following to the Dockerfile:

FROM debian:latest

Our base image will be the latest version of Debian, available here: Docker Hub Debian Repo.

Add what you need

The idea here is to build an environment that checks out the code, builds and executes the World of Warcraft Auction House sample. Can you figure out what you need? The JDK of course, to compile and run the code, and Maven to perform the build. But these are not enough. You also need the Git command line client to check out the code. For this part you need to know a little bit about Unix shell scripting.

Since we need to install the JDK and Maven, we need to download them into our image from somewhere. You can use the wget command to do that. But wget is not available in our base Debian image, so we need to install it first. To run shell commands we use the RUN instruction in the Dockerfile.

Install wget:
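
RUN apt-get update && apt-get install -y wget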

Install the JDK:
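
The original setup downloaded a JDK tarball with wget; a simpler sketch that is easy to reproduce installs OpenJDK straight from the Debian repositories:

RUN apt-get install -y default-jdk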

Install Maven:
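
Here wget earns its keep. The Maven version is just an example; any release from the Apache archives will do:

RUN wget -q https://archive.apache.org/dist/maven/maven-3/3.2.5/binaries/apache-maven-3.2.5-bin.tar.gz -O /tmp/maven.tar.gz
RUN tar xzf /tmp/maven.tar.gz -C /opt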

We also need to have Java and Maven accessible from anywhere in our image. As you would do on your local machine when setting up the environment, you need to set JAVA_HOME and add the Maven binaries to the PATH. You can do this with the Docker ENV instruction:
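
With the paths from the sketches above, that looks like:

ENV JAVA_HOME /usr/lib/jvm/default-java
ENV PATH $PATH:/opt/apache-maven-3.2.5/bin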

Add the application

Now that we have the required environment for the World of Warcraft Auction House, we just need to clone the code and build it:
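
A sketch, assuming the code lives in my wow-auctions repository on GitHub; the start and stop on the last line is a little trick explained below:

RUN apt-get install -y git
RUN git clone https://github.com/radcortez/wow-auctions.git /wow-auctions
WORKDIR /wow-auctions
# Start and stop the embedded server once, so Maven caches the server and its dependencies in this layer.
RUN mvn wildfly:start wildfly:shutdown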

We also want to expose a port so you can access the application. You should use the HTTP listening port of the application server, in this case 8080. You can do this in Docker with the EXPOSE instruction:
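
EXPOSE 8080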

I had to use a little trick here. I don't want to download and install a standalone application server, so I'm using the embedded Wildfly version of the Maven plugin. Now, as I told you before, each instruction of the Dockerfile adds a new layer to the image. Here I'm forcing a start and stop of the server, just so Maven downloads the required dependencies and they become part of the image. If I didn't do this, all the server dependencies would be downloaded whenever I ran the image, and its startup would take considerably longer.

Run the application

The final instruction should be a CMD to set the command to be executed when running the image:
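
# Pull the latest code, then boot the embedded Wildfly server and deploy the application.
CMD git pull && mvn wildfly:run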

In this case we want to make sure we are using the latest code, so we do a git pull and then run the embedded Wildfly server. The deploy configuration has already been set up in the Wildfly Maven plugin.

Complete Dockerfile
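
Putting the sketches above together:

FROM debian:latest

RUN apt-get update && apt-get install -y wget git default-jdk

RUN wget -q https://archive.apache.org/dist/maven/maven-3/3.2.5/binaries/apache-maven-3.2.5-bin.tar.gz -O /tmp/maven.tar.gz
RUN tar xzf /tmp/maven.tar.gz -C /opt

ENV JAVA_HOME /usr/lib/jvm/default-java
ENV PATH $PATH:/opt/apache-maven-3.2.5/bin

RUN git clone https://github.com/radcortez/wow-auctions.git /wow-auctions
WORKDIR /wow-auctions
RUN mvn wildfly:start wildfly:shutdown

EXPOSE 8080

CMD git pull && mvn wildfly:run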

Build the Dockerfile

To be able to distribute your image, you need to build your Dockerfile. What this does is read every instruction, execute it, and add a layer to your Docker Image. You only need to do this once, unless you change your Dockerfile. The CMD instruction is not executed during the build, since it's only used when you actually run the image and execute the container.

To build the Dockerfile, run the following command in the directory containing it:

docker build -t radcortez/wow-auctions .

The -t radcortez/wow-auctions flag tags and names the image I'm building. Use the format user/name, with the same user name you registered with Docker Hub.

Pushing the Image

Docker Hub is a Docker Image repository. It's the same concept as Maven repositories for Java libraries: download or upload images and you are good to go. Docker Hub already contains a huge number of images ready to use, from simple Unix distributions to full blown application servers.

We can now take the image we built locally and upload it to Docker Hub. This will allow anyone to download and use it. We can do it like this:

docker push radcortez/wow-auctions

Depending on the image size, this can take a few minutes.

Run the Image

Finally, to run the image and start a container, we execute:

docker run -it --name wow-auctions -p 8080:8080 radcortez/wow-auctions

Since I've built the image locally first, this will run my local radcortez/wow-auctions image. If you haven't built it locally, just by using the above command the image is going to be downloaded from Docker Hub and executed in your environment.

Conclusion

With Docker, it's possible to distribute your own applications along with the required environment for them to run properly, all created by you. It's not exactly trivial, since you need some knowledge of Unix, but it shouldn't be a problem.

My main motivation to use Docker here was to simplify the distribution of my sample applications. It's not unusual to receive a few reader emails asking for help to set up their environment. Sure, this way you now have to install Docker too, but that's the only thing you need. The rest, just leave it to me!

Related Articles

Remember to check my introductory post about Docker:

Get Into Docker

Eighth Coimbra JUG Meeting – Integration Testing with Arquillian

posted by Roberto Cortez

Last Thursday, 16 April 2015, the eighth meeting of Coimbra JUG was held at the Department of Informatics Engineering of the University of Coimbra, in Portugal. Attendance was not great compared to the number of sign-ups (around 20 attendees out of almost 40 registered), but it was still a worthy session. We had the pleasure to listen to Bruno Baptista talking about Integration Testing with Arquillian. A very special thanks to Bruno for taking on the challenge and steering the session. He is also going to support the group and help me run it.

Anyway, the whole audience recognized the importance of Integration Tests, but no one was implementing them. This is why I think these kinds of sessions about testing are very important to create awareness among the community and professionals. It's no secret that these practices produce higher quality results, but for one reason or another, testing is sometimes an elusive task. Bruno explained the main benefits of Integration Testing, plus the major pain points in performing the tests, and used a demo to show how Arquillian can alleviate much of that pain when testing a JPA and a REST application.
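
To give a taste of what Bruno showed, here is a minimal sketch of an Arquillian test; Greeter stands in for any hypothetical CDI bean under test:

import javax.inject.Inject;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.asset.EmptyAsset;
import org.jboss.shrinkwrap.api.spec.JavaArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

import static org.junit.Assert.assertEquals;

@RunWith(Arquillian.class)
public class GreeterTest {

    @Deployment
    public static JavaArchive createDeployment() {
        // Package only what the test needs; Arquillian deploys it to the container.
        return ShrinkWrap.create(JavaArchive.class)
                         .addClass(Greeter.class)
                         .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml");
    }

    @Inject
    private Greeter greeter;

    @Test
    public void shouldGreet() {
        // Runs inside the container, with real injection.
        assertEquals("Hello Coimbra JUG!", greeter.greet("Coimbra JUG"));
    }
}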

Coimbra JUG Meeting 8

As always, we had surprises for the attendees. IntelliJ sponsored our event by offering a free license to raffle among the attendees. Congratulations to Miriam Lopes for winning the license. Develop with pleasure! We also offered the book Continuous Enterprise Development in Java, courtesy of O'Reilly; congratulations to Victor Reina. And we handed out a few ZeroTurnaround t-shirts.

Here are the materials for the session:

Enjoy!

Get into Docker

posted by Roberto Cortez

Have you ever heard about Docker before? Most likely. If not, don't worry, I'll try to summarize it for you. Docker is probably one of the hottest technologies at the moment. It has the potential to revolutionize the way we build, deploy and distribute applications. At the same time, it's already having a huge impact in the development process.

In some cases, development environments can be so complicated that it's hard to keep consistency between the different team members. I'm pretty sure most of us have already suffered from the "Works on my Machine" syndrome, right? One way to deal with the problem is to build Virtual Machines (VMs) with everything set up, so you can distribute them through your team. But VMs are slow, large, and you cannot access them if they are not running.

What is Docker?

Short answer: it's like a lightweight VM. In practice, that's not strictly the case, since Docker is different from a regular VM. Docker creates a container for your application, packaged with all of the required dependencies and ready to run. These containers run on a shared Linux kernel, but they are isolated from each other. This means that you don't need the usual VM operating system, giving a considerable performance boost and shrinking the application size.

Let’s dig a little more into detail:

Docker Image

A Docker Image is a read only template used to create the Docker containers. Each image is built with a series of layers composing your final image. If you need to distribute something using Ubuntu and Apache, you start with a base Ubuntu image and add Apache on top. If you later want to upgrade to a Tomcat instance, you just add another layer to your image. Instead of distributing the entire image as you would with a VM, you just release the update.
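
In Dockerfile terms (covered in the follow up post linked below), that layering looks like this, with each instruction adding a layer on top of the base image:

FROM ubuntu:latest

# Adds a new layer with Apache on top of the Ubuntu base image.
RUN apt-get update && apt-get install -y apache2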

Docker Registry

The Docker registry, also called Docker Hub, is a Docker Image repository. It's the same concept as Maven repositories for Java libraries: download or upload images and you are good to go. Docker Hub already contains a huge number of images ready to use, from simple Unix distributions to full blown application servers.

Docker Container

A Docker Container is the runtime component of the Docker Image. You can spin up multiple containers from the same Docker Image, each in an isolated context. Docker containers can be run, started, stopped, moved, and deleted.

How do I start?

You need to install Docker, of course. Please refer to the Docker installation guides. They are pretty good and I had no problem installing the software. Make sure you follow the proper guide for your system.

Our first Docker Container

After having Docker installed, you can immediately type in your command line:

docker run -it -p 8080:8080 tomcat

You should see the following message:

Unable to find image ‘tomcat:latest’ locally

And a lot of downloads starting. Like Maven when you build an application, Docker reaches out to Docker Hub and downloads the required layers to run Tomcat. It takes a while to download. (Great, one more thing that downloads the Internet. Luckily, we can use ZipRebel to download it quickly).

After everything is downloaded, you should see the Tomcat instance booting up, and on Linux boxes you can access it by going to http://localhost:8080. For Windows and Mac users it's slightly more complicated. Since Docker only works in a Linux environment, to be able to use it on Windows and Mac you need boot2docker (which you should have from the installation guide). This is in fact a VM that runs Docker on Linux completely from memory. To access the Docker containers you need to refer to this VM's IP. You can get the IP with the command: boot2docker ip.

Explaining the command:

docker run: The command to create and start a new Docker container.
-it: Runs in interactive mode, so you can see the output after running the container.
-p 8080:8080: Maps the internal container port to the outside host, usually your machine. Port mapping can only be set on container creation. If you don't specify it, you need to check which port Docker assigned.
tomcat: The name of the image to run. This is linked to the Docker tomcat repository, which holds the instructions so Docker knows how to run the server.

Remember that if you stop the container and run the same command again, you are creating and running a new container.

Multiple Containers

You can run multiple Tomcat instances by issuing the following commands:

docker run -d -p 8080:8080 --name tomcat tomcat
docker run -d -p 9090:8080 --name web tomcat

These commands create two Tomcat containers named tomcat and web. Just remember to change the port mapping and the name. Adding a name is useful for controlling the container; if you don't add one, Docker will randomly generate one for you.

The -d instructs Docker to run the container in the background. You can now control your container with the following commands:

docker ps: See a list of all the running Docker containers. Add -a to see all the containers.
docker stop web: Stops the container named web.
docker start web: Starts the container named web.
docker rm web: Removes the container named web.
docker logs web: Shows the logs of the container named web.

Connecting to the Container

If you execute the command docker exec -it tomcat bash, you will be able to connect to the container shell and explore the environment. You can, for instance, verify the running processes with ps -ax.

Interacting with the Container

Let’s add a file to the container:

echo "radcortez" > radcortez

Exit the container, but keep it running. Execute docker diff tomcat. You are going to see a bunch of Tomcat temporary files, plus the file we just added. This command evaluates the file system differences between the running container and the original image.

Conclusion

We have only scratched the surface of Docker's capabilities. It's still too soon to tell if Docker will become a mandatory tool. Currently, it's receiving major adoption from big players like Google, Microsoft and Amazon. Docker may end up failing in the end, but it sure has reopened an old discussion which doesn't have a clear answer yet.

Related Articles

Learn how to create, build and distribute your own Docker Images in this follow up post:

Distribute your applications with Docker Images
