The Thirteenth Meeting of Coimbra JUG was about databases and how to keep their schema versioned and under control. While not directly related to Java, it's very common for a Java developer to come across a project that needs to use a database. Developers usually worry about versioning the code, but nobody cares about the database. Shouldn't the database be treated the same way as the code, since it's also a part of the application? For that reason, we thought this was a good topic to have at our JUG.
To talk about this very challenging topic, we had the pleasure of hosting Nuno Alves. Nuno is a very experienced DBA, with more than 10 years of experience with Oracle, PostgreSQL, MSSQLServer and DB2. He is the right man to have on your team to deal with the database side of things!
The session itself covered ER modelling, versioning of SQL scripts, deployment and documentation. It also included a demo of Flyway, a tool to version and control script execution across different environments.
Finally, we would like to thank Praxis and Critical Software for sponsoring the event with the venue, food and drinks for everyone. Thank you very much for your support!
For some time now, the main implementations of JPA, like Hibernate, EclipseLink, OpenJPA and DataNucleus, have offered ways to generate database schema objects. These include the generation of tables, primary keys, foreign keys, indexes and other objects. Unfortunately, these mechanisms were never standard across implementations, which is a problem when dealing with multiple environments. Only in the latest JPA 2.1 specification was Schema Generation standardized.
From now on, if you are using Java EE 7, you don't have to worry about the differences between the providers. Just use the new standard properties and you are done. Of course, you might be thinking that these are not needed at all, since production database schemas should not be managed like this. Still, they are very useful for development or testing purposes.
Schema Generation
Properties
If you wish to use the new standard Schema Generation, just add any of the following properties to the properties section of your persistence.xml:
javax.persistence.schema-generation.database.action
Specifies the action to be taken regarding the database schema. The possible values are self-explanatory. If this property is not specified, no actions are performed on the database.
Possible values: none, create, drop-and-create, drop

javax.persistence.schema-generation.create-source
Specifies how the database schema should be created: from the annotation metadata specified in the application entities, by executing a SQL script, or a combination of both (you can also define the order). This property does not need to be specified for schema generation to occur; the default value is metadata. You need to be careful if you use a combination of create sources, since the resulting actions may generate unexpected behaviour in the database schema and lead to failure.
Possible values: metadata, script, metadata-then-script, script-then-metadata

javax.persistence.schema-generation.scripts.create-target / javax.persistence.schema-generation.scripts.drop-target
Specify the target locations where the SQL script files to create or drop the database schema are generated.
Possible values: a string with the file URL to write to
Samples
The following sample drops and creates the database schema objects needed by the JPA application. It relies on the annotation metadata of the entities and also executes an arbitrary SQL file named load.sql.
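Such a persistence.xml could be sketched as follows (the persistence unit name is a placeholder, and load.sql is assumed to sit in META-INF):

```xml
<persistence xmlns="http://xmlns.jcp.org/xml/ns/persistence" version="2.1">
    <persistence-unit name="samplePU">
        <properties>
            <!-- Drop and recreate all schema objects on deployment -->
            <property name="javax.persistence.schema-generation.database.action"
                      value="drop-and-create"/>
            <!-- Build the schema from the entity annotation metadata -->
            <property name="javax.persistence.schema-generation.create-source"
                      value="metadata"/>
            <!-- Run an arbitrary SQL script after schema creation -->
            <property name="javax.persistence.sql-load-script-source"
                      value="META-INF/load.sql"/>
        </properties>
    </persistence-unit>
</persistence>
```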
Both samples can also be combined to drop and create the database objects while generating the corresponding scripts that perform these operations. You can find these and other samples in the Java EE Samples project hosted on GitHub.
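A combined setup could look like the following sketch (the file locations are placeholders you would adjust to your environment):

```xml
<!-- Perform the drop-and-create actions on the database itself -->
<property name="javax.persistence.schema-generation.database.action"
          value="drop-and-create"/>
<!-- Also generate the equivalent SQL scripts -->
<property name="javax.persistence.schema-generation.scripts.action"
          value="drop-and-create"/>
<property name="javax.persistence.schema-generation.scripts.create-target"
          value="file:/tmp/create.sql"/>
<property name="javax.persistence.schema-generation.scripts.drop-target"
          value="file:/tmp/drop.sql"/>
```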
Limitations
As I mentioned before, I recommend that you use these properties for development or testing purposes only. A wrong setting might easily destroy or mess up your production database.
There are no actions to update or just validate the schema. I couldn't find the reason why they didn't make it into the specification, but there is an issue filed with the improvement suggestion.
The database schema actions are only performed on application deployment in a Java EE environment. For development, you might want to perform the actions on server restart as well.
OpenJPA currently does not support these properties, but I've been working on OpenJPA support for standard Schema Generation. If you're curious or want to follow the progress, check my Github repo. This was actually my main motivation to write this post, since I'm a bit involved in the implementation of the feature.
Most Java EE applications use database access in their business logic, so developers are often faced with the need to configure drivers and database connection properties in the application server. In this post, we are going to automate that task for JBoss / Wildfly and a PostgreSQL database using Maven. The work is based on my World of Warcraft Auctions Batch application from the previous post.
Maven Configuration
Let’s start by adding the following to our pom.xml:
Wildfly Maven Plugin
<plugin>
    <groupId>org.wildfly.plugins</groupId>
    <artifactId>wildfly-maven-plugin</artifactId>
    <version>1.0.2.Final</version>
    <configuration>
        <executeCommands>
            <batch>false</batch>
            <scripts>
                <script>target/scripts/${cli.file}</script>
            </scripts>
        </executeCommands>
    </configuration>
    <dependencies>
        <dependency>
            <groupId>org.postgresql</groupId>
            <artifactId>postgresql</artifactId>
            <version>9.3-1102-jdbc41</version>
        </dependency>
    </dependencies>
</plugin>
We are going to use the Wildfly Maven Plugin to execute scripts with commands in the application server. Note that we also added a dependency on the PostgreSQL driver, so Maven downloads it; we are going to need it later to add it to the server. There is also a ${cli.file} property that is going to be set by a profile, indicating which script we want to execute.
With the Resources Maven Plugin we are going to filter the script files contained in src/main/resources/scripts, replacing their placeholders with the properties contained in the ${basedir}/src/main/resources/configuration.properties file.
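A plugin configuration along these lines would do the filtering (the plugin version and output directory are assumptions; the output path matches the target/scripts location used by the Wildfly plugin above):

```xml
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-resources-plugin</artifactId>
    <version>2.6</version>
    <configuration>
        <!-- Filtered scripts land where the Wildfly plugin expects them -->
        <outputDirectory>${basedir}/target/scripts</outputDirectory>
        <resources>
            <resource>
                <directory>src/main/resources/scripts</directory>
                <filtering>true</filtering>
            </resource>
        </resources>
        <filters>
            <filter>${basedir}/src/main/resources/configuration.properties</filter>
        </filters>
    </configuration>
</plugin>
```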
Finally, let's add a few Maven profiles to the pom.xml with the scripts that we want to run:
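The profiles could be sketched like this; the profile ids remove-driver, install-wow-auctions and remove-wow-auctions are used later in this post, while install-driver and the exact script filenames (other than wow-auctions-install.cli) are illustrative:

```xml
<profiles>
    <profile>
        <id>install-driver</id>
        <properties>
            <cli.file>install-driver.cli</cli.file>
        </properties>
    </profile>
    <profile>
        <id>remove-driver</id>
        <properties>
            <cli.file>remove-driver.cli</cli.file>
        </properties>
    </profile>
    <profile>
        <id>install-wow-auctions</id>
        <properties>
            <cli.file>wow-auctions-install.cli</cli.file>
        </properties>
    </profile>
    <profile>
        <id>remove-wow-auctions</id>
        <properties>
            <cli.file>wow-auctions-remove.cli</cli.file>
        </properties>
    </profile>
</profiles>
```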
Database drivers are added to Wildfly as a module. In this way, the driver is widely available to all the applications deployed on the server. With ${settings.localRepository} we are pointing to the database driver jar downloaded into your local Maven repository. Remember the dependency that we added to the Wildfly Maven Plugin? It's there to download the driver when you run the plugin and add it to the server. Now, to run the script we execute (you need to have the application server running):
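An install script along these lines would add the module and register the driver; this is a sketch, and the module name, dependencies and jar path are assumptions that the filtering step fills in:

```
# Add the driver jar as a Wildfly module (path filtered from configuration.properties)
module add --name=org.postgresql --resources=${settings.localRepository}/org/postgresql/postgresql/9.3-1102-jdbc41/postgresql-9.3-1102-jdbc41.jar --dependencies=javax.api,javax.transaction.api

# Register the JDBC driver with the datasources subsystem
/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=org.postgresql,driver-class-name=org.postgresql.Driver)
```

With the server running, this would be invoked with mvn process-resources wildfly:execute-commands -P "install-driver" (the profile name is an assumption).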
The process-resources lifecycle phase is needed to replace the properties in the script file. In my case, ${settings.localRepository} is replaced by /Users/radcortez/.m3/repository/. Check the target/scripts folder. After running the command, you should see the following output in the Maven log:
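A remove script reversing those steps might look like this (a sketch; the names mirror the install commands above):

```
# Unregister the JDBC driver, then drop the module
/subsystem=datasources/jdbc-driver=postgresql:remove
module remove --name=org.postgresql
```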
This script removes the driver from the application server. Execute mvn wildfly:execute-commands -P "remove-driver". You don't need process-resources if you have already executed the command before, unless you change the scripts.
Add Datasource
The script with the commands to add a Datasource, wow-auctions-install.cli:
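A sketch of such a script; the JNDI name comes from this post, while the ${datasource.*} placeholders are assumptions filled in from configuration.properties by the filtering step:

```
# Create the application datasource using the registered driver
data-source add --name=WowAuctionsDS --jndi-name=java:/datasources/WowAuctionsDS --driver-name=postgresql --connection-url=${datasource.connection.url} --user-name=${datasource.user} --password=${datasource.password}

# Point the Java EE 7 default datasource binding at the new datasource
/subsystem=ee/service=default-bindings:write-attribute(name="datasource", value="${datasource.jndi}")
```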
Java EE 7 specifies that the container should provide a default Datasource. Instead of defining a Datasource with the JNDI name java:/datasources/WowAuctionsDS in the application, we are going to point our newly created datasource to the default one with /subsystem=ee/service=default-bindings:write-attribute(name="datasource", value="${datasource.jndi}"). This way, we don't need to change anything in the application. Execute the script with mvn wildfly:execute-commands -P "install-wow-auctions". You should get the following Maven output:
org.jboss.as.cli.impl.CommandContextImpl printLine
INFO: {"outcome" => "success"}
{"outcome" => "success"}
org.jboss.as.cli.impl.CommandContextImpl printLine
INFO: {"outcome" => "success"}
{"outcome" => "success"}
And on the server:
INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread 1-1) JBAS010400: Bound data source [java:/datasources/WowAuctionsDS]
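A removal script could then drop the datasource and restore the default binding; this is a sketch, and the default JNDI name assumes Wildfly's standard ExampleDS:

```
# Remove the application datasource
data-source remove --name=WowAuctionsDS

# Restore the container-provided default datasource binding
/subsystem=ee/service=default-bindings:write-attribute(name="datasource", value="java:jboss/datasources/ExampleDS")
```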
This is the script to remove the Datasource and revert to the Java EE 7 default Datasource. Run it by executing mvn wildfly:execute-commands -P "remove-wow-auctions".
Conclusion
This post demonstrated how to automate adding and removing drivers to Wildfly instances, as well as adding and removing Datasources. This is useful if you want to switch between databases or if you're configuring a server from scratch. Think about CI environments. These scripts are also easily adjustable to other drivers.
A special thanks to RebelLabs for letting me publish my work there and of course to Oliver White for all the support on writing and reviewing the article. You rock!