Results for tag "wildfly"
A few weeks ago, I posted a blog about moving from Java EE 5 to 7. It was mostly about how you could improve your Java EE 5 code with the new Java EE 7 features. In this post, I'm going to look a little bit into the migration path on the Application Server side.
If you're using Java EE 5, there is a good chance that you are using one of these servers:
There are many other servers supporting Java EE 5, and you could check them out here.
Prelude
I ended up getting most of my experience with JBoss 4.x, since the company I was working for at the time was already using it heavily in most of its projects. I hardly had any say in the matter and simply followed the company's direction with JBoss.
When we decided to move one of our client's critical applications from Java EE 5 to 7, we were faced with the dilemma of which application server to use. Since I was in a technical management position, I was now able to influence that decision. We ended up picking Wildfly for the following reasons:
- Implements the Java EE 7 Full Profile
- Powerful CLI to manage the server
- Team already familiar with the Java EE implementations shipped with Wildfly
Even though this post looks into JBoss and Wildfly, some of the principles still apply to Application Servers in general, so I hope it can be useful for users of other Application Servers as well. We are currently using Wildfly 8.2.0, but the content discussed in this post should also work with the latest Wildfly version.
Strategy
Performing an Application Server migration, especially between servers so far apart, is never easy. The migration path is not exactly straightforward, because each application ends up using different features of the Application Server. Worse, the application might even implement business code on top of features that are no longer available in the target server.
Anyway, there are two strategies that you can follow when working on a migration project:
Feature Freeze
As the name implies, you freeze your project to perform the necessary adjustments to migrate the application. It's probably easier to deal with the complexity this way, but on the other hand it delays business features and creates a non-negotiable deadline. It's very hard to convince stakeholders to go with this strategy, but if you are able to, go for it.
Combined
The other alternative is to keep development going and work on the migration at the same time. It's best for the business, but requires much more discipline and planning. You can always partition your application into modules and migrate it in small bits. This is the strategy I usually use.
First Steps
You might need some time to completely migrate your application. During that time, you need to keep the old server running as well as the new one. For this, you are required to update and duplicate your environments. It's like branching the code, but at runtime.
The support tools that you use might need updating as well: Maven plugins for the new server, Jenkins deployments, anything that interacts with the Application Server. It's a daunting task, since managing all these extra environments and branches is not easy.
Walking the Path
There are a couple of details that you need to worry about when planning the migration. This is not an exhaustive list, but it covers the most common topics that you are going to come across.
Classloading
If you don't run into ClassNotFoundException, NoClassDefFoundError or ClassCastException, you might want to consider playing the lottery and winning!
This is especially true with the JBoss 4.x class loader. At the time, class loading was an even more expensive operation than it is today, so JBoss used something called the UnifiedClassLoader. This meant that there was no true isolation between applications: EAR archives could look into each other to load libraries. Of course, this was a major headache to manage. The worst part was when you had to deploy your application to a customer's JBoss server. If you didn't have control over it, the existing deployments could clash with your own.
Wildfly introduced class loading based on modules instead of the usual hierarchical approach. An application deployed in Wildfly doesn't have access to the Application Server libraries unless that is stated explicitly in a file descriptor. For Java EE applications, the required modules are loaded automatically.
When changing servers, these are the most common issues related to class loading:
- Missing libraries that were sitting in other applications.
- Relying on libraries sitting on the server that were either removed or updated.
- Libraries used in the application that are now part of the new server.
To fix this, you need to tune your project dependencies by adding or removing the required libraries. There is no step-by-step guide here. Each case needs to be analyzed and fixed accordingly. It's a bit like trying to untangle a string full of knots.
If you need dependencies on server modules or on other deployments, you can declare them explicitly in a jboss-deployment-structure.xml descriptor:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<jboss-deployment-structure xmlns="urn:jboss:deployment-structure:1.2">
    <ear-subdeployments-isolated>false</ear-subdeployments-isolated>
    <deployment>
        <dependencies>
            <module name="org.jboss.msc" export="true"/>
            <module name="org.jboss.as.naming" export="true"/>
            <module name="org.jboss.as.server" export="true"/>
            <module name="deployment.app-client.jar" export="true"/>
            <module name="deployment.app-ear.ear.app-entity.jar" export="true"/>
        </dependencies>
    </deployment>
</jboss-deployment-structure>
```
This custom descriptor adds dependencies from other deployments, namely app-client.jar, and even a sub-deployment of another EAR, app-ear.ear.app-entity.jar.
Finally, my advice here is to try to stick with the standards and only introduce additional libraries if absolutely necessary. This will surely reduce your class loading problems and make it easier to move to new versions of the server, or even to another server in the future.
General Configuration
In JBoss 4.x, all the configuration was spread across different files: server.xml, jboss-service.xml, login-config.xml and many others. You had to edit the files manually to change the required configuration. This was tedious work, especially when you didn't have access to the server and had to document the set of changes for someone else to perform.
In Wildfly, most of the configuration goes into configuration/standalone.xml, but I never edit that file directly. Wildfly ships with a very powerful Command Line Interface (CLI) that allows you to script pretty much every change that you need to perform on the server. Here is a sample of an Undertow configuration:
```
/subsystem=undertow/server=default-server/ajp-listener=ajp:add(socket-binding=ajp)

/subsystem=undertow/server=default-server/host=app:add(alias=["localhost", "${app.host}"])

/subsystem=undertow/server=default-server:write-attribute(name="default-host", value="app")

/subsystem=undertow/server=default-server/host=app/filter-ref=server-header:add
/subsystem=undertow/server=default-server/host=app/filter-ref=x-powered-by-header:add
/subsystem=undertow/server=default-server/host=app/location="/":add(handler=welcome-content)

/subsystem=undertow/server=default-server/host=default-host/filter-ref=server-header:remove
/subsystem=undertow/server=default-server/host=default-host/filter-ref=x-powered-by-header:remove

:reload

/subsystem=undertow/server=default-server/host=default-host/location="/":remove
/subsystem=undertow/server=default-server/host=default-host:remove

/subsystem=undertow/server=default-server/host=app/setting=single-sign-on:add(path="/")

:reload
```
This sets up a virtual host called app, makes it the default host, removes the default host that ships with Wildfly and activates Single Sign-On on the new host.
With scripting and the CLI, it is very easy to spin up a new server from the ground up. You should always prefer this way of changing the configuration on the server.
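If you keep these commands in a script file, you can apply them in one go with the CLI launcher. Assuming the script above is saved as undertow.cli (the file name is just an example), something like this should work:

```
$WILDFLY_HOME/bin/jboss-cli.sh --connect --file=undertow.cli
```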
Datasources
In JBoss 4.x, setting up a Datasource only required you to copy the database driver to the lib folder and create a *-ds.xml file with the Datasource connection details.
In Wildfly, it is a little more tricky, but not a big deal. You set up the database driver as a module and then use the CLI to add the Datasource connection details to the server configuration. I even wrote an entire blog post about this in the past: Configure JBoss / Wildfly Datasource with Maven.
Security
Security in JBoss 4.x was set up in conf/login-config.xml. Not many changes were introduced with Wildfly, but if you need to implement a custom login module, the dependencies have changed. I've also written an entire blog post about it: Custom Principal and LoginModule for Wildfly.
JNDI Bindings
It was common to use @LocalBinding in JBoss 4.x to define the exact JNDI name for your EJB. But Java EE 7 introduced standard JNDI names by scope, meaning that you should follow the convention to look up EJBs.
Instead of:
```java
@Stateless
@Local(UserBusiness.class)
@LocalBinding(jndiBinding = "custom/UserBusiness")
public class UserBusinessBean implements UserBusiness {}

...

private UserBusiness userBusiness;

try {
    InitialContext context = new InitialContext();
    userBusiness = (UserBusiness) context.lookup("custom/UserBusiness");
} catch (Exception e) {
}
```
You can:
```java
@EJB(lookup = "java:global/app-name/app-service/UserBusinessBean")
private UserBusiness userBusiness;
```
When Wildfly is starting you can also check the standard bindings in the log:
```
java:global/app-name/app-service/UserBusinessBean!com.app.business.UserBusiness
java:app/app-service/UserBusinessBean!com.app.business.UserBusiness
java:module/UserBusinessBean!com.app.business.UserBusiness
java:global/app-name/app-service/UserBusinessBean
java:app/app-service/UserBusinessBean
java:module/UserBusinessBean
```
Other Stuff
These are more specific topics that I've also written blog posts about, and they might be interesting as well:
Spring Batch as Wildfly Module
Wildfly, Apache CXF and @SchemaValidation
Final Words
As stated, migrations never follow a direct path. Still, there are a couple of things you can do to improve your odds. Write tests, tests and more tests. Did I tell you to write tests yet? Do it before working on any migration tasks. Even if everything with the migration seems fine, you might encounter slight behaviour changes between the different versions of the Java EE implementations.
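One way to do this is to put your critical business services under container tests before touching the server, so the same assertions can be replayed on the migrated application. A minimal Arquillian-style sketch, reusing the UserBusiness example from above (the test class and its assertion are hypothetical):

```java
import static org.junit.Assert.assertNotNull;

import javax.ejb.EJB;

import org.jboss.arquillian.container.test.api.Deployment;
import org.jboss.arquillian.junit.Arquillian;
import org.jboss.shrinkwrap.api.ShrinkWrap;
import org.jboss.shrinkwrap.api.spec.WebArchive;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(Arquillian.class)
public class UserBusinessMigrationTest {

    @Deployment
    public static WebArchive createDeployment() {
        // Package only the classes under test so the archive stays small and focused.
        return ShrinkWrap.create(WebArchive.class)
                         .addClasses(UserBusiness.class, UserBusinessBean.class);
    }

    @EJB
    private UserBusiness userBusiness;

    @Test
    public void behaviourShouldSurviveTheMigration() {
        // Hypothetical assertion: pin down the behaviour your callers depend on
        // before migrating, then re-run the same suite against the new server.
        assertNotNull(userBusiness);
    }
}
```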
Also, don't underestimate the job. Keeping your application working while new features are being developed, plus changing the server, requires you to invest time and effort to make sure that nothing breaks. It definitely won't take you one week, unless we are talking about a very tiny application. We took almost two years to migrate an application with over one million lines of code. But take these numbers lightly; they are very dependent on your team dynamics.
My final advice: if you are sitting on an old Java EE version, you should definitely migrate. Have a look at my blog post about Reduce Legacy from Java EE 5 to 7. The jump is not easy, but with each new Java EE release and its focus on standardization, each upgrade should become less painful.
Over the last few days, I have been working on an application migration from JBoss 4 to Wildfly 8. The application uses different technologies, but here we are going to focus on XML Web Services, JAX-WS. Yeah, I know they are not trendy anymore, but these were developed a long time ago and need to be maintained for compatibility reasons.
Anyway, the path to migrate these services was not so easy. I'm sharing some of the problems and fixes in the hope that they can help other developers out there stuck with the same problems.
Sample Definition
Here is a sample of a Web Service definition in the old system, JBoss 4:
```java
@javax.jws.WebService(endpointInterface = "some.pack.age.WebService")
@javax.jws.soap.SOAPBinding(style = SOAPBinding.Style.DOCUMENT)
@org.jboss.ws.annotation.EndpointConfig(configName = "Standard WSSecurity Endpoint")
@javax.jws.HandlerChain(file = "handlers.xml")
@org.jboss.ws.annotation.SchemaValidation(enabled = true, errorHandler = CustomErrorHandler.class)
public class WebServiceImpl implements WebService {
```
Luckily, most of the definition uses standard Java EE annotations. Only @org.jboss.ws.annotation.EndpointConfig and @org.jboss.ws.annotation.SchemaValidation are from the old JBossWS libraries.
We can easily get rid of @org.jboss.ws.annotation.EndpointConfig, since we are not going to need it in the new application. For reference, it's used to attach extra predefined configuration data to an endpoint. Check the documentation: Predefined client and endpoint configurations.
We want to keep @org.jboss.ws.annotation.SchemaValidation. For reference, this annotation validates incoming and outgoing SOAP messages against the relevant schema in the endpoint's WSDL contract. Since the annotation no longer exists in JBossWS, we have to use Apache CXF, which is the underlying JAX-WS implementation in Wildfly.
Problems
Here are a few of the problems I’ve faced:
SchemaValidation Annotation
The annotation @org.jboss.ws.annotation.SchemaValidation doesn't exist anymore. You have to use the org.apache.cxf.annotations.SchemaValidation annotation from Apache CXF.
Add the following Maven dependency to use the Apache CXF annotation:
```xml
<dependency>
    <groupId>org.apache.cxf</groupId>
    <artifactId>cxf-api</artifactId>
    <version>2.7.11</version>
    <scope>provided</scope>
</dependency>
```
Also, notice that in the original annotation we could define an errorHandler property. The old application used a custom error handler to set a custom error message on schema validation errors. There is no equivalent in the new annotation, so we need to do it another way. To replicate the old behaviour I used Apache CXF interceptors. Create an interceptor class and extend AbstractPhaseInterceptor. Here is a sample:
```java
public class SchemaValidationErrorInterceptor extends AbstractPhaseInterceptor<Message> {

    public SchemaValidationErrorInterceptor() {
        super(Phase.MARSHAL);
    }

    @Override
    public void handleMessage(Message message) throws Fault {
        Fault fault = (Fault) message.getContent(Exception.class);
        Throwable cause = fault.getCause();

        while (cause != null) {
            if (cause instanceof SAXParseException) {
                fault.setMessage("Invalid XML: " + fault.getLocalizedMessage());
                break;
            }
            cause = cause.getCause();
        }
    }
}
```
And you can use it like this:
```java
@org.apache.cxf.interceptor.OutFaultInterceptors(classes = SchemaValidationErrorInterceptor.class)
```
Interceptors are used by both CXF clients and CXF servers. Incoming and outgoing interceptor chains are executed for regular processing and also when an error occurs. In this case, we want to override the schema validation message, so we need to bind our interceptor to the outgoing fault interceptor chain. You can use the @OutFaultInterceptors annotation for that. Each chain is split into phases, and you define the phase where you want the interceptor to run by passing Phase.MARSHAL to the constructor. There are other phases, but since we want to change the error message, we do it in the MARSHAL phase.
Different WSDL
The old Web Services had their WSDL files auto-generated at deploy time. Unfortunately, in some situations, the WSDL generated by JBoss 4 and by Wildfly 8 is different. This can cause problems for your external callers. In this case the main problem was in the schema validation: requests that were valid in JBoss 4 were not valid anymore when executed in Wildfly 8.
The reason for this behaviour was in the target namespaces. If you are using @XmlRootElement-annotated POJOs in your Web Service parameters without defining the namespace property in the annotation, JBoss 4 WS generated the target WSDL elements with a blank namespace. Apache CXF, on the other hand, uses the Web Service default namespace to bind the WSDL elements if they are blank. For reference, this is done in the CXF code in org.apache.cxf.jaxws.support.JaxWsServiceConfiguration#getParameterName.
This could be fixed by changing the CXF code, but we opted to place the old generated WSDL file in the migrated application sources and include it in the distribution. It is no longer auto-generated, meaning that we need to regenerate the WSDL manually if we change the API, and we need to be careful not to break anything in the WSDL. This approach seemed better than having to maintain our own CXF version. We could probably submit a fix for this as well, but we believe the JBoss 4 behaviour was not intended.
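For new services, or if you can afford to change the contract, you can avoid this class of problem by declaring the namespace property explicitly on the annotated POJOs, so the generated WSDL does not depend on server defaults. A minimal sketch (the class and namespace value are made up for illustration):

```java
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical request POJO: declaring the namespace explicitly means the generated
// WSDL element no longer depends on whatever default the server chooses.
@XmlRootElement(name = "userRequest", namespace = "http://ws.example.com/app")
@XmlAccessorType(XmlAccessType.FIELD)
public class UserRequest {

    private String username;

    public String getUsername() {
        return username;
    }

    public void setUsername(String username) {
        this.username = username;
    }
}
```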
Start CXF
To use specific APIs from CXF, it's not enough to have a project dependency on them. In fact, the first few times I tried the changes, nothing related to CXF seemed to work. This happens because Wildfly is only looking for the standard Java EE JAX-WS annotations. To have all the CXF behaviour working, we need to tell Wildfly that our application depends on CXF, even though the libraries are already on the server. Yeah, it's a bit confusing.
The application is deployed as an EAR file, so you need to create a jboss-deployment-structure.xml and add the following content:
```xml
<jboss-deployment-structure>
    <sub-deployment name="application.war">
        <dependencies>
            <module name="org.apache.cxf"/>
        </dependencies>
    </sub-deployment>
</jboss-deployment-structure>
```
Using a MANIFEST.MF entry in the WAR file apparently doesn't work if the WAR is deployed inside an EAR file. For more information, please check Class Loading in WildFly.
If you want to use other CXF features, especially the ones linked with Spring, things might be a bit trickier. Have a look at this post: Assorted facts about JBoss. Fact 6: JBoss and CXF: match made in heaven.
Final Definition
This should be our final definition for our Web Service:
```java
@WebService(
    wsdlLocation = "WebService.wsdl",
    endpointInterface = "some.pack.age.WebService"
)
@SOAPBinding(style = SOAPBinding.Style.DOCUMENT)
@HandlerChain(file = "/handlers.xml")
@SchemaValidation(type = SchemaValidation.SchemaValidationType.IN)
@OutFaultInterceptors(classes = SchemaValidationErrorInterceptor.class)
public class WebServiceImpl implements BDNSWebService {
```
As you can see, the changes required to migrate a Web Service from JBoss 4 to Wildfly are few. However, there are a few minor details that can block you for a long time if you don't know about them. Maybe you have a different setup and the problems you face are different. This can also help if you are just trying to set up CXF with Wildfly. Anyway, I hope this post is useful to you.
Most Java EE applications use database access in their business logic, so developers are often faced with the need to configure drivers and database connection properties in the application server. In this post, we are going to automate that task for JBoss / Wildfly and a PostgreSQL database using Maven. The work is based on my World of Warcraft Auctions batch application from the previous post.
Maven Configuration
Let's start by adding the following to our pom.xml:
```xml
<plugin>
    <groupId>org.wildfly.plugins</groupId>
    <artifactId>wildfly-maven-plugin</artifactId>
    <version>1.0.2.Final</version>
    <configuration>
        <executeCommands>
            <batch>false</batch>
            <scripts>
                <script>target/scripts/${cli.file}</script>
            </scripts>
        </executeCommands>
    </configuration>
    <dependencies>
        <dependency>
            <groupId>org.postgresql</groupId>
            <artifactId>postgresql</artifactId>
            <version>9.3-1102-jdbc41</version>
        </dependency>
    </dependencies>
</plugin>
```
We are going to use the Wildfly Maven Plugin to execute scripts with commands against the application server. Note that we also added a dependency on the PostgreSQL driver. This is for Maven to download it, because we are going to need it later to add it to the server. There is also a ${cli.file} property that is going to be assigned in a profile, to indicate which script we want to execute.
Let's also add the following to the pom.xml:
```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-resources-plugin</artifactId>
    <version>2.6</version>
    <executions>
        <execution>
            <id>copy-resources</id>
            <phase>process-resources</phase>
            <goals>
                <goal>copy-resources</goal>
            </goals>
            <configuration>
                <outputDirectory>${basedir}/target/scripts</outputDirectory>
                <resources>
                    <resource>
                        <directory>src/main/resources/scripts</directory>
                        <filtering>true</filtering>
                    </resource>
                </resources>
                <filters>
                    <filter>${basedir}/src/main/resources/configuration.properties</filter>
                </filters>
            </configuration>
        </execution>
    </executions>
</plugin>
```
With the Maven Resources Plugin we are going to filter the script files contained in src/main/resources/scripts, replacing their placeholders with the properties defined in the ${basedir}/src/main/resources/configuration.properties file.
Finally, let's add a few Maven profiles to the pom.xml, with the scripts that we want to run:
```xml
<profiles>
    <profile>
        <id>install-driver</id>
        <properties>
            <cli.file>wildfly-install-postgre-driver.cli</cli.file>
        </properties>
    </profile>
    <profile>
        <id>remove-driver</id>
        <properties>
            <cli.file>wildfly-remove-postgre-driver.cli</cli.file>
        </properties>
    </profile>
    <profile>
        <id>install-wow-auctions</id>
        <properties>
            <cli.file>wow-auctions-install.cli</cli.file>
        </properties>
    </profile>
    <profile>
        <id>remove-wow-auctions</id>
        <properties>
            <cli.file>wow-auctions-remove.cli</cli.file>
        </properties>
    </profile>
</profiles>
```
Wildfly Script Files
Add Driver
The scripts with the commands to add a Driver:
wildfly-install-postgre-driver.cli
```
# Connect to Wildfly instance
connect

# Create PostgreSQL JDBC Driver Module
# If the module already exists, Wildfly will output a message saying so and the script exits.
module add \
    --name=org.postgre \
    --resources=${settings.localRepository}/org/postgresql/postgresql/9.3-1102-jdbc41/postgresql-9.3-1102-jdbc41.jar \
    --dependencies=javax.api,javax.transaction.api

# Add Driver Properties
/subsystem=datasources/jdbc-driver=postgre: \
    add( \
        driver-name="postgre", \
        driver-module-name="org.postgre")
```
Database drivers are added to Wildfly as modules. This way, the driver is available to all the applications deployed on the server. With ${settings.localRepository} we are pointing to the database driver jar downloaded into your local Maven repository. Remember the dependency that we added to the Wildfly Maven Plugin? It's there so the driver is downloaded when you run the plugin and can then be added to the server. Now, to run the script we execute (you need to have the application server running):
mvn process-resources wildfly:execute-commands -P "install-driver"
The process-resources lifecycle phase is needed to replace the properties in the script file. In my case, ${settings.localRepository} is replaced by /Users/radcortez/.m3/repository/. Check the target/scripts folder. After running the command, you should see output in the Maven log confirming that the commands executed successfully, and the following in the server log:
```
INFO  [org.jboss.as.connector.subsystems.datasources] (management-handler-thread - 4) JBAS010404: Deploying non-JDBC-compliant driver class org.postgresql.Driver (version 9.3)
INFO  [org.jboss.as.connector.deployers.jdbc] (MSC service thread 1-4) JBAS010417: Started Driver service with driver-name = postgre
```
wildfly-remove-postgre-driver.cli
```
# Connect to Wildfly instance
connect

if (outcome == success) of /subsystem=datasources/jdbc-driver=postgre:read-attribute(name=driver-name)
    # Remove Driver
    /subsystem=datasources/jdbc-driver=postgre:remove
end-if

# Remove PostgreSQL JDBC Driver Module
module remove --name=org.postgre
```
This script removes the driver from the application server. Execute mvn wildfly:execute-commands -P "remove-driver". You don't need process-resources if you already executed it before, unless you change the scripts.
Add Datasource
wow-auctions-install.cli
The scripts with the commands to add a Datasource:
```
# Connect to Wildfly instance
connect

# Create Datasource
/subsystem=datasources/data-source=WowAuctionsDS: \
    add( \
        jndi-name="${datasource.jndi}", \
        driver-name=postgre, \
        connection-url="${datasource.connection}", \
        user-name="${datasource.user}", \
        password="${datasource.password}")

/subsystem=ee/service=default-bindings:write-attribute(name="datasource", value="${datasource.jndi}")
```
We also need a file to define the properties:
configuration.properties
```
datasource.jndi=java:/datasources/WowAuctionsDS
datasource.connection=jdbc:postgresql://localhost:5432/wowauctions
datasource.user=wowauctions
datasource.password=wowauctions
```
Default Java EE 7 Datasource
Java EE 7 specifies that the container should provide a default Datasource. Instead of defining a Datasource with the JNDI name java:/datasources/WowAuctionsDS in the application, we point the default binding to our newly created Datasource with /subsystem=ee/service=default-bindings:write-attribute(name="datasource", value="${datasource.jndi}"). This way, we don't need to change anything in the application. Execute the script with mvn wildfly:execute-commands -P "install-wow-auctions". You should get the following Maven output:
```
org.jboss.as.cli.impl.CommandContextImpl printLine
INFO: {"outcome" => "success"}
{"outcome" => "success"}
org.jboss.as.cli.impl.CommandContextImpl printLine
INFO: {"outcome" => "success"}
{"outcome" => "success"}
```
And on the server:
```
INFO  [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-1) JBAS010400: Bound data source [java:/datasources/WowAuctionsDS]
```
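Because the default binding now points at WowAuctionsDS, the application code can simply rely on the Java EE 7 default Datasource and never has to reference the JNDI name directly. A minimal sketch of what that looks like on the application side (the class name is illustrative):

```java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.sql.DataSource;

@Stateless
public class AuctionsRepository {

    // No name or lookup attribute: in Java EE 7 this resolves to the default Datasource,
    // which the CLI script above rebinds to WowAuctionsDS.
    @Resource
    private DataSource dataSource;
}
```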
wow-auctions-remove.cli
```
# Connect to Wildfly instance
connect

# Remove Datasources
/subsystem=datasources/data-source=WowAuctionsDS:remove

/subsystem=ee/service=default-bindings:write-attribute(name="datasource", value="java:jboss/datasources/ExampleDS")
```
This is the script to remove the Datasource and revert to the default Java EE 7 Datasource. Run it by executing mvn wildfly:execute-commands -P "remove-wow-auctions".
Conclusion
This post demonstrated how to automate adding and removing drivers and Datasources on Wildfly instances. This is useful if you want to switch between databases or if you're configuring a server from the ground up. Think about CI environments. These scripts are also easily adjustable to other drivers.
You can get the code from the WoW Auctions Github repo, which uses this setup. Enjoy!
(TL;DR – Get me to the code)
For a long time, the Java EE specification lacked a Batch Processing API. Today, this is an essential necessity for enterprise applications. This was finally fixed with JSR-352, Batch Applications for the Java Platform, now available in Java EE 7. JSR-352 got its inspiration from its Spring Batch counterpart. Both cover the same concepts, although the resulting APIs are a bit different.
Since the Spring team also collaborated on JSR-352, it was only a matter of time before they provided an implementation based on Spring Batch. The latest major version of Spring Batch (version 3) now supports JSR-352.
I have been a Spring Batch user for many years and I've always enjoyed that the technology has an interesting set of built-in readers and writers. These allow you to perform the most common operations required by batch processing. Do you need to read data from a database? You could use JdbcCursorItemReader. How about writing data in a fixed format? Use FlatFileItemWriter. And so on.
Unfortunately, JSR-352 implementations do not offer the same number of readers and writers available in Spring Batch. We have to remember that JSR-352 is very recent and hasn't had time to catch up. jBeret, the Wildfly implementation of JSR-352, already provides a few custom readers and writers.
What’s the point?
I was hoping that with the latest release, all the readers and writers from the original Spring Batch would be available as well. This is not the case yet, since it would take a lot of work, but there are plans to make them available in future versions. This would allow us to migrate native Spring Batch applications to JSR-352. We would still have the issue of implementation vendor lock-in, but it may be interesting in some cases.
Motivation
I'm one of the main test contributors to the Java EE Samples for the JSR-352 specification. I wanted to find out if the tests I've implemented behave the same way when using the Spring Batch implementation. How can we do that?
Code
I think this exercise is interesting not only because of the original motivation, but also because it's useful for learning about modules and class loading on Wildfly. First we need to decide how we are going to deploy the needed Spring Batch dependencies. We could deploy them directly with the application, or use a Wildfly module. Modules have the advantage of being bundled directly into the application server and can be reused by all deployed applications.
Adding Wildfly Module with Maven
With a bit of work it's possible to add the module automatically with the Wildfly Maven Plugin and the CLI (Command Line Interface). Let's start by creating two files containing the CLI commands needed to create and remove the module:
wildfly-add-spring-batch.cli
```
# Connect to Wildfly instance
connect

# Create Spring Batch Module
# If the module already exists, Wildfly will output a message saying so and the script exits.
module add \
    --name=org.springframework.batch \
    --dependencies=javax.api,javaee.api \
    --resources=${wildfly.module.classpath}
```
The module --name is important; we're going to need it to reference the module in our application. The --resources argument is a pain, since you need to indicate the full classpath of all the required module dependencies, but we're generating those paths in the next few steps.
wildfly-remove-spring-batch.cli
```
# Connect to Wildfly instance
connect

# Remove Spring Batch Module
module remove --name=org.springframework.batch
```
Note the wildfly.module.classpath property. It will hold the complete classpath for the required Spring Batch dependencies. We can generate it with the Maven Dependency Plugin:
```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <version>${version.plugin.dependency}</version>
    <executions>
        <execution>
            <phase>generate-sources</phase>
            <goals>
                <goal>build-classpath</goal>
            </goals>
            <configuration>
                <outputProperty>wildfly.module.classpath</outputProperty>
                <pathSeparator>:</pathSeparator>
                <excludeGroupIds>javax</excludeGroupIds>
                <excludeScope>test</excludeScope>
                <includeScope>provided</includeScope>
            </configuration>
        </execution>
    </executions>
</plugin>
```
This is going to pick up all dependencies (including transitive ones), exclude the javax group ids (since those are already present in Wildfly) and exclude test-scoped dependencies. We need the following dependencies for Spring Batch:
```xml
<!-- Needed for the Wildfly module -->
<dependency>
    <groupId>org.springframework.batch</groupId>
    <artifactId>spring-batch-core</artifactId>
    <version>3.0.0.RELEASE</version>
    <scope>provided</scope>
</dependency>

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-jdbc</artifactId>
    <version>4.0.5.RELEASE</version>
    <scope>provided</scope>
</dependency>

<dependency>
    <groupId>commons-dbcp</groupId>
    <artifactId>commons-dbcp</artifactId>
    <version>1.4</version>
    <scope>provided</scope>
</dependency>

<dependency>
    <groupId>org.hsqldb</groupId>
    <artifactId>hsqldb</artifactId>
    <version>2.3.2</version>
    <scope>provided</scope>
</dependency>
```
Now, we need to replace the property in the file. Let’s use Maven Resources plugin:
```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-resources-plugin</artifactId>
    <version>${version.plugin.resources}</version>
    <executions>
        <execution>
            <id>copy-resources</id>
            <phase>process-resources</phase>
            <goals>
                <goal>copy-resources</goal>
            </goals>
            <configuration>
                <outputDirectory>${basedir}/target/scripts</outputDirectory>
                <resources>
                    <resource>
                        <directory>src/main/resources/scripts</directory>
                        <filtering>true</filtering>
                    </resource>
                </resources>
            </configuration>
        </execution>
    </executions>
</plugin>
```
This will filter the configured files and replace the wildfly.module.classpath property with the value we generated previously: a classpath pointing to the dependencies in your local Maven repository. Now, with the Wildfly Maven Plugin, we can execute the script (you need to have Wildfly running):
```xml
<plugin>
    <groupId>org.wildfly.plugins</groupId>
    <artifactId>wildfly-maven-plugin</artifactId>
    <version>${version.plugin.wildfly}</version>
    <configuration>
        <skip>false</skip>
        <executeCommands>
            <batch>false</batch>
            <scripts>
                <!--suppress MavenModelInspection -->
                <script>target/scripts/${cli.file}</script>
            </scripts>
        </executeCommands>
    </configuration>
</plugin>
```
And these profiles:
```xml
<profiles>
    <profile>
        <id>install-spring-batch</id>
        <properties>
            <cli.file>wildfly-add-spring-batch.cli</cli.file>
        </properties>
    </profile>
    <profile>
        <id>remove-spring-batch</id>
        <properties>
            <cli.file>wildfly-remove-spring-batch.cli</cli.file>
        </properties>
    </profile>
</profiles>
```
(For the full pom.xml contents, check here.)
We can add the module by executing:
mvn process-resources wildfly:execute-commands -P install-spring-batch
Or remove the module by executing:
mvn wildfly:execute-commands -P remove-spring-batch
This strategy works for any module that you want to add to Wildfly. Think about adding a JDBC driver: you usually use a module to add it to the server, but all the documentation I've found about this describes a manual process. This approach works great for CI builds, since you can script everything you need to set up your environment.
Use Spring-Batch
OK, I have my module there, but how can I instruct Wildfly to use it instead of jBeret? We need to add the following file to the META-INF folder of our application:
jboss-deployment-structure.xml
```xml
<?xml version="1.0" encoding="UTF-8"?>
<jboss-deployment-structure>
    <deployment>
        <exclusions>
            <module name="org.wildfly.jberet"/>
            <module name="org.jberet.jberet-core"/>
        </exclusions>
        <dependencies>
            <module name="org.springframework.batch" services="import" meta-inf="import"/>
        </dependencies>
    </deployment>
</jboss-deployment-structure>
```
Since JSR-352 uses a Service Loader to load the implementation, the only possible outcome is to load the service specified in the org.springframework.batch module. Your batch code will now run with the Spring Batch implementation.
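The nice part is that the application code stays on the standard JSR-352 API regardless of which implementation the Service Loader picks up. A minimal sketch of starting a job (the job name myJob is hypothetical):

```java
import java.util.Properties;

import javax.batch.operations.JobOperator;
import javax.batch.runtime.BatchRuntime;

public class JobStarter {

    // Starts the job defined in META-INF/batch-jobs/myJob.xml and returns the
    // execution id, which can be used to query the job status later.
    public long startJob() {
        JobOperator jobOperator = BatchRuntime.getJobOperator();
        return jobOperator.start("myJob", new Properties());
    }
}
```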
Testing
The GitHub repository code has Arquillian sample tests that demonstrate the behaviour. Check the Resources section below.
Resources
You can clone a full working copy from my github repository. You can find instructions there to deploy it.
Wildfly – Spring Batch
Since I may modify the code in the future, you can download the original source of this post from release 1.0. Alternatively, clone the repo and check out the tag for release 1.0 with the following command: git checkout 1.0.
Future
I still need to apply this to the Java EE Samples. It's on my TODO list.
Did you ever have the need to implement your own custom JAAS Principal and LoginModule for your JEE application? There are a couple of reasons to do it. I've done it in the following cases:
- Authenticate the user using different strategies.
- Have additional user information on the Principal object.
- Share user information between applications using the Principal object.
Maybe you have your own specific reason; it doesn't matter. Today's post will guide you on how to do it for Wildfly. There are a few articles on the topic, but each of them deals with a different aspect of the problem. I got motivated to write this post to aggregate all the steps in a single article, including the Arquillian test.
Wildfly uses PicketBox for Java application security, and it already implements some handy classes that take care of user authentication for you. Have a look at UsersRolesLoginModule, DatabaseServerLoginModule, LdapUsersLoginModule, BaseCertLoginModule and so on. Let's start by creating a Maven project with the following dependency:
```xml
<dependency>
    <groupId>org.picketbox</groupId>
    <artifactId>picketbox</artifactId>
    <version>4.0.20.Beta2</version>
</dependency>
```
Next, just create CustomPrincipal and CustomLoginModule classes:
```java
public class CustomPrincipal extends SimplePrincipal {
    private String description;

    public CustomPrincipal(String name, String description) {
        super(name);
        this.description = description;
    }

    public String getDescription() {
        return description;
    }

    public void setDescription(String description) {
        this.description = description;
    }
}
```
Note that we're extending org.jboss.security.SimplePrincipal from PicketBox, but you can also implement java.security.Principal directly instead.
```java
public class CustomLoginModule extends UsersRolesLoginModule {
    private CustomPrincipal principal;

    @Override
    public boolean login() throws LoginException {
        boolean login = super.login();
        if (login) {
            principal = new CustomPrincipal(getUsername(), "An user description!");
        }
        return login;
    }

    @Override
    protected Principal getIdentity() {
        return principal != null ? principal : super.getIdentity();
    }
}
```
Here again we're extending a PicketBox class, org.jboss.security.auth.spi.UsersRolesLoginModule. You can code your own login module by implementing javax.security.auth.spi.LoginModule, but I recommend extending one of the PicketBox classes, since they already provide a lot of the behaviour that you will need. org.jboss.security.auth.spi.UsersRolesLoginModule is a very simple login module that authenticates a user by matching his login and password against a properties file. You shouldn't use it for production applications, but it's very handy for prototypes.
CustomLoginModule also overrides two methods. These are needed to access our CustomPrincipal in a JEE application. The login() method, as the name says, is called when the user performs the login action, so in it we create our CustomPrincipal object. The getIdentity() method, on the other hand, is called to return the Principal that corresponds to the user's primary identity, so we return our own instance if the login was successful.
Ok, great. How do we test it now? We use Arquillian, JUnit and HttpUnit. Start by adding the needed Maven dependencies (these are all the project dependencies):
```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.jboss.arquillian</groupId>
            <artifactId>arquillian-bom</artifactId>
            <version>${arquillian.version}</version>
            <scope>import</scope>
            <type>pom</type>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>javax</groupId>
        <artifactId>javaee-api</artifactId>
        <version>7.0</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.picketbox</groupId>
        <artifactId>picketbox</artifactId>
        <version>4.0.20.Beta2</version>
    </dependency>
    <dependency>
        <groupId>org.wildfly</groupId>
        <artifactId>wildfly-arquillian-container-remote</artifactId>
        <version>${wildfly.version}</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.jboss.arquillian.junit</groupId>
        <artifactId>arquillian-junit-container</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>4.11</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>httpunit</groupId>
        <artifactId>httpunit</artifactId>
        <version>1.7</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>net.sourceforge.htmlunit</groupId>
        <artifactId>htmlunit</artifactId>
        <version>2.13</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>rhino</groupId>
        <artifactId>js</artifactId>
        <version>1.7R1</version>
        <scope>test</scope>
    </dependency>
</dependencies>
```
Note that we also included the javaee-api 7.0 dependency. Next, we create a simple EJB to access our CustomPrincipal and a Servlet to perform the authentication:
```java
@Stateless
public class SampleEJB {

    @Resource
    private EJBContext ejbContext;

    @RolesAllowed("user")
    public String getPrincipalName() {
        return ejbContext.getCallerPrincipal().getName();
    }
}
```
Now the Servlet:
```java
@WebServlet(urlPatterns = {"/LoginServlet"})
public class LoginServlet extends HttpServlet {

    @Inject
    private SampleEJB sampleEJB;

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        processRequest(request, response);
    }

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        processRequest(request, response);
    }

    private void processRequest(HttpServletRequest request, HttpServletResponse response) throws IOException {
        try {
            String username = request.getParameter("username");
            String password = request.getParameter("password");

            if (username != null && password != null) {
                request.login(username, password);
            }

            CustomPrincipal principal = (CustomPrincipal) request.getUserPrincipal();

            response.getWriter().println("principal=" + request.getUserPrincipal().getClass().getSimpleName());
            response.getWriter().println("username=" + sampleEJB.getPrincipalName());
            response.getWriter().println("description=" + principal.getDescription());
        } catch (ServletException e) {
            response.sendError(HttpServletResponse.SC_FORBIDDEN);
        }
    }
}
```
I think the code is self-explanatory, but we still need to wire everything together. The way we configure our login module to be used by Wildfly and our application is with Security Domains. You can add a Security Domain by hand by editing the standalone/configuration/standalone.xml and domain/configuration/domain.xml files in the Wildfly installation folder, but we're going to do something more interesting.
Using the Command Line Interface (CLI), it is really easy to make changes to the server configuration without modifying any XML. This also allows you to set up a test environment and clean up your changes in the end. To achieve that, create the following files:
jboss-add-login-module.cli
```
connect

/subsystem=security/security-domain=CustomSecurityDomain:add(cache-type=default)

reload

/subsystem=security/security-domain=CustomSecurityDomain/authentication=classic: \
    add( \
        login-modules=[{ \
            "code"=>"com.cortez.wildfly.security.CustomLoginModule", \
            "flag"=>"required", \
            "module-options"=>[ \
                ("usersProperties"=>"user.properties"), \
                ("rolesProperties"=>"roles.properties")] \
        }])

reload
```
As you have probably guessed, jboss-add-login-module.cli contains the CLI commands to add our Security Domain to the Wildfly instance. We first add the Security Domain and then assign the login modules to it. Both commands should be executable as a single command, but for some reason I was getting an error, so I had to split them apart. The configuration is not available on the server unless you perform a reload command. For the login module, please note the FQN of our CustomLoginModule in the configuration. That's how we wire the custom login module to the Security Domain. The configuration also references two other files, user.properties and roles.properties, which are used to verify the user credentials and load the user roles. Here are examples for both files:
user.properties
Define all valid usernames and their corresponding passwords.
roles.properties
Define the sets of roles for valid usernames.
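The contents of these files follow the standard UsersRolesLoginModule format: user.properties holds username=password pairs and roles.properties holds username=role pairs. Example contents consistent with the test shown later, which logs in with the user username / password and calls an EJB restricted to the user role, could look like this:

```
# user.properties
username=password

# roles.properties
username=user
```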
We still need the CLI commands to remove the Security Domain during the cleanup phase:
jboss-remove-login-module.cli
```
connect

/subsystem=security/security-domain=CustomSecurityDomain:remove

reload
```
Almost done? Not yet! We still need to associate our Security Domain with our JEE application so that our custom code runs when we perform authentication or execute Principal-related behaviour. We need the following files now:
jboss-web.xml
```xml
<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
    <security-domain>CustomSecurityDomain</security-domain>
</jboss-web>
```
The previous file sets the Security Domain for Servlets.
jboss-ejb3.xml
```xml
<?xml version="1.0" encoding="UTF-8"?>
<jboss:jboss xmlns="http://java.sun.com/xml/ns/javaee"
             xmlns:jboss="http://www.jboss.com/xml/ns/javaee"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xmlns:s="urn:security:1.1"
             version="3.1" impl-version="2.0">
    <assembly-descriptor>
        <s:security>
            <!-- Even wildcard * is supported -->
            <ejb-name>SampleEJB</ejb-name>
            <!-- Name of the security domain which is configured in the EJB3 subsystem -->
            <s:security-domain>CustomSecurityDomain</s:security-domain>
        </s:security>
    </assembly-descriptor>
</jboss:jboss>
```
This file sets the Security Domain for EJBs.
Uff! Now we're finally ready to see some action! It took a bit of setup to get this example working. Here is the test class:
```java
@RunWith(Arquillian.class)
public class CustomLoginModuleTest {

    @ArquillianResource
    private URL deployUrl;

    @AfterClass
    public static void removeSecurityDomain() {
        processCliFile(new File("src/test/resources/jboss-remove-login-module.cli"));
    }

    @Deployment(testable = false)
    public static WebArchive createDeployment() {
        processCliFile(new File("src/test/resources/jboss-add-login-module.cli"));

        WebArchive war = ShrinkWrap.create(WebArchive.class)
                                   .addClass(CustomPrincipal.class)
                                   .addClass(CustomLoginModule.class)
                                   .addClass(SampleEJB.class)
                                   .addClass(LoginServlet.class)
                                   .addAsWebInfResource("jboss-web.xml")
                                   .addAsWebInfResource("jboss-ejb3.xml")
                                   .addAsResource("user.properties")
                                   .addAsResource("roles.properties");
        System.out.println(war.toString(true));
        return war;
    }

    @Test
    public void testLogin() throws IOException, SAXException {
        WebConversation webConversation = new WebConversation();
        GetMethodWebRequest request = new GetMethodWebRequest(deployUrl + "LoginServlet");
        request.setParameter("username", "username");
        request.setParameter("password", "password");
        WebResponse response = webConversation.getResponse(request);

        assertTrue(response.getText().contains("principal=" + CustomPrincipal.class.getSimpleName()));
        assertTrue(response.getText().contains("username=username"));
        assertTrue(response.getText().contains("description=An user description!"));

        System.out.println(response.getText());
    }

    private static void processCliFile(File file) {
        CLI cli = CLI.newInstance();
        cli.connect("localhost", 9990, null, null);
        CommandContext commandContext = cli.getCommandContext();

        BufferedReader reader = null;
        try {
            reader = new BufferedReader(new FileReader(file));
            String line = reader.readLine();
            while (commandContext.getExitCode() == 0 && !commandContext.isTerminated() && line != null) {
                commandContext.handleSafe(line.trim());
                line = reader.readLine();
            }
        } catch (Throwable e) {
            throw new IllegalStateException("Failed to process file '" + file.getAbsolutePath() + "'", e);
        } finally {
            StreamUtils.safeClose(reader);
        }
    }
}
```
That's it! Now your servlet's login method will authenticate using your custom login module, and methods like getUserPrincipal() on the servlet request or getCallerPrincipal() on the EJBContext will return the CustomPrincipal instance.
Fire up a Wildfly instance and run the test using mvn test, or just use your favourite IDE.
A few problems:
- It should be possible to add the Security Domain using a single CLI command, but for some reason I was getting an error. I need to have a better look into this.
- I couldn’t find a way to run code before and after the Arquillian deployment, so the code to add the Security Domain is inside the deployment method. I’ll try to find a way to do it.
If you want additional information, please check the following references:
If you’re too lazy to write the code on your own or just want a working sample, you can download it here. Enjoy!