Results for tag "database"

Thirteenth Coimbra JUG Meeting – Keep your Database Schema Under Control

posted by Roberto Cortez on

The Thirteenth Meeting of Coimbra JUG was about databases and how to keep their schemas versioned and under control. While not directly related to Java, it's very common for a Java developer to come across a project that needs to use a database. Developers usually worry about versioning the code, but nobody cares about the database. Shouldn't the database be treated the same way as the code, since it's also part of the application? For that reason, we thought this was a good topic to have at our JUG.

To talk about this very challenging topic, we had the pleasure to host Nuno Alves. Nuno is a very experienced DBA, with more than 10 years of experience with Oracle, PostgreSQL, MSSQLServer and DB2. He is the right man to have on your team to deal with the database side of things!

Coimbra JUG Meeting 13

The session itself covered ER modelling, versioning of SQL scripts, deployment and documentation. It also had a demo of a tool called Flyway to version and control script execution in different environments, along the lines of the sketch below.
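If you missed the session, the gist of Flyway is simple: migrations are plain SQL files named by version, and the tool records which ones have already run so each is applied exactly once. Something along these lines (the connection settings are illustrative):

```java
// Versioned migration scripts live on the classpath, e.g.:
//   src/main/resources/db/migration/V1__create_tables.sql
//   src/main/resources/db/migration/V2__add_indexes.sql
import org.flywaydb.core.Flyway;

public class MigrateDatabase {
    public static void main(String[] args) {
        Flyway flyway = new Flyway();
        // Illustrative connection settings; point these at your own database.
        flyway.setDataSource("jdbc:postgresql://localhost:5432/sampledb", "sample", "sample");
        // Applies pending migrations in version order and records them
        // in Flyway's metadata table.
        flyway.migrate();
    }
}
```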

Finally, we would like to thank Praxis and Critical Software for sponsoring the event with the venue, food and drinks for everyone! Thank you very much for your support!

Here are the materials for the session:

Presentation

Video (in Portuguese)

JPA Database Schema Generation

posted by Roberto Cortez on

For some time now, the main implementations of JPA, like Hibernate, EclipseLink, OpenJPA or DataNucleus, have offered ways to generate database schema objects. These include tables, primary keys, foreign keys, indexes and other objects. Unfortunately, these mechanisms were not standard across implementations, which is a problem when dealing with multiple environments. Only with the latest JPA 2.1 specification was Schema Generation standardized.

From now on, if you are using Java EE 7, you don’t have to worry about the differences between the providers. Just use the new standard properties and you are done. Of course, you might be thinking that these are not needed at all, since database schemas for environments should not be managed like this. Still, these are very useful for development or testing purposes.

Schema Generation

Properties
If you wish to use the new standard Schema Generation, just add any of the following properties to the properties section of your persistence.xml:

| Property | Description | Values |
| --- | --- | --- |
| javax.persistence.schema-generation.database.action | Specifies the action to be taken regarding the database schema. The possible values are self-explanatory. If this property is not specified, no schema actions are performed on the database. | none, create, drop-and-create, drop |
| javax.persistence.schema-generation.create-source | Specifies how the database schema should be created: from the annotation metadata specified in the application entities, by executing a SQL script, or a combination of both (you can also define the order). This property does not need to be specified for schema generation to occur; the default value is metadata. Be careful if you use a combination of create actions, since the resulting actions may generate unexpected behaviour in the database schema and lead to failure. | metadata, script, metadata-then-script, script-then-metadata |
| javax.persistence.schema-generation.drop-source | Same as javax.persistence.schema-generation.create-source, but for drop actions. | metadata, script, metadata-then-script, script-then-metadata |
| javax.persistence.schema-generation.create-script-source, javax.persistence.schema-generation.drop-script-source | Specifies the location of a SQL script file to execute on create or drop of the database schema. | String with the file URL of the script to execute |
| javax.persistence.sql-load-script-source | Specifies the location of a SQL file to load data into the database. | String with the file URL of the script to execute |

Additionally, it’s also possible to generate SQL scripts with the Schema Generation actions:

| Property | Description | Values |
| --- | --- | --- |
| javax.persistence.schema-generation.scripts.action | Specifies which SQL scripts should be generated. Scripts are only generated if the corresponding generation location targets are specified. | none, create, drop-and-create, drop |
| javax.persistence.schema-generation.scripts.create-target, javax.persistence.schema-generation.scripts.drop-target | Specifies the target location where the SQL script file to create or drop the database schema is generated. | String with the file URL of the script to write |

Samples

The following sample drops and creates the database schema objects needed by the JPA application. It relies on the annotation metadata of the entities and also executes an arbitrary SQL file named load.sql.
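Something along these lines in the persistence.xml (the persistence unit name and the load.sql location are illustrative):

```xml
<persistence version="2.1" xmlns="http://xmlns.jcp.org/xml/ns/persistence">
    <persistence-unit name="samplePU">
        <properties>
            <!-- Drop and recreate the schema objects on deployment. -->
            <property name="javax.persistence.schema-generation.database.action"
                      value="drop-and-create"/>
            <!-- Build the schema from the entity annotation metadata (the default). -->
            <property name="javax.persistence.schema-generation.create-source"
                      value="metadata"/>
            <property name="javax.persistence.schema-generation.drop-source"
                      value="metadata"/>
            <!-- Execute an arbitrary SQL file to load data; the location is illustrative. -->
            <property name="javax.persistence.sql-load-script-source"
                      value="META-INF/load.sql"/>
        </properties>
    </persistence-unit>
</persistence>
```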

Here is another sample that generates the scripts to create and drop the database schema objects in the target locations:
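Something like this, with illustrative target file locations:

```xml
<property name="javax.persistence.schema-generation.scripts.action"
          value="drop-and-create"/>
<!-- Illustrative target locations for the generated scripts. -->
<property name="javax.persistence.schema-generation.scripts.create-target"
          value="file:/tmp/create.sql"/>
<property name="javax.persistence.schema-generation.scripts.drop-target"
          value="file:/tmp/drop.sql"/>
```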

Both samples can also be combined for dropping and creating the database objects and generating the corresponding scripts that perform these operations. You can find these and other samples in the Java EE Samples project hosted on Github.

Limitations

As I mentioned before, I recommend that you use these properties for development or testing purposes only. A wrong setting might easily destroy or mess with your production database.

There are no actions to update or just validate the schema. I couldn’t find the reason why they didn’t make it into the specification, but here is an issue with the improvement suggestion.

The database schema actions are only performed on application deployment in a Java EE environment. For development, you might want to perform the actions on a server restart.

Support

Both Hibernate and EclipseLink, which are bundled with Wildfly and Glassfish respectively, support these properties.

OpenJPA currently does not support these properties, but I've been working on the OpenJPA support for standard Schema Generation. If you're curious or want to follow the progress, check my Github repo, here. This was actually my main motivation to write this post, since I'm a bit involved in the implementation of the feature.

I hope you enjoyed the post 🙂

Configure JBoss / Wildfly Datasource with Maven

posted by Roberto Cortez on

Most Java EE applications use database access in their business logic, so developers are often faced with the need to configure drivers and database connection properties in the application server. In this post, we are going to automate that task for JBoss / Wildfly and a PostgreSQL database using Maven. The work is based on my World of Warcraft Auctions Batch application from the previous post.

Maven Configuration

Let’s start by adding the following to our pom.xml:

We are going to use the Wildfly Maven Plugin to execute scripts with commands in the application server. Note that we also added a dependency on the PostgreSQL driver. This is for Maven to download it, because we are going to need it later to add it to the server. There is also a ${cli.file} property that is going to be set in each profile, to indicate which script we want to execute.

Let’s also add the following to the pom.xml:

With the Resources Maven Plugin we are going to filter the script files contained in src/main/resources/scripts, replacing their placeholders with the properties contained in the ${basedir}/src/main/resources/configuration.properties file.

Finally, let's add a few Maven profiles to the pom.xml, with the scripts that we want to run:
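A sketch of the profiles; each one just picks the CLI script to execute, and the profile ids match the -P flags used below:

```xml
<profiles>
    <profile>
        <id>install-driver</id>
        <properties>
            <cli.file>wildfly-install-postgre-driver.cli</cli.file>
        </properties>
    </profile>
    <profile>
        <id>remove-driver</id>
        <properties>
            <cli.file>wildfly-remove-postgre-driver.cli</cli.file>
        </properties>
    </profile>
    <profile>
        <id>install-wow-auctions</id>
        <properties>
            <cli.file>wow-auctions-install.cli</cli.file>
        </properties>
    </profile>
    <profile>
        <id>remove-wow-auctions</id>
        <properties>
            <cli.file>wow-auctions-remove.cli</cli.file>
        </properties>
    </profile>
</profiles>
```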

Wildfly Script Files

Add Driver

The scripts with the commands to add a Driver:

wildfly-install-postgre-driver.cli
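Something along these lines; the driver jar version in the path is illustrative:

```
# Install the PostgreSQL driver jar as a Wildfly module (version is illustrative).
module add --name=org.postgresql --resources=${settings.localRepository}/org/postgresql/postgresql/9.3-1102-jdbc41/postgresql-9.3-1102-jdbc41.jar --dependencies=javax.api,javax.transaction.api

# Register the module as a JDBC driver in the datasources subsystem.
/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql, driver-module-name=org.postgresql, driver-class-name=org.postgresql.Driver)
```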

Database drivers are added to Wildfly as modules. In this way, the driver is widely available to all the applications deployed in the server. With ${settings.localRepository} we are pointing to the database driver jar downloaded to your local Maven repository. Remember the dependency that we added to the Wildfly Maven Plugin? It's there to download the driver when you run the plugin, so it can be added to the server. Now, to run the script we execute (you need to have the application server running):

mvn process-resources wildfly:execute-commands -P "install-driver"

The process-resources lifecycle phase is needed to replace the properties in the script file. In my case ${settings.localRepository} is replaced by /Users/radcortez/.m3/repository/. Check the target/scripts folder. After running the command, you should see the following output in the Maven log:

And on the server:

wildfly-remove-postgre-driver.cli
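A sketch of the removal script, mirroring the install commands:

```
# Unregister the JDBC driver and remove the module from the server.
/subsystem=datasources/jdbc-driver=postgresql:remove
module remove --name=org.postgresql
```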

This script removes the driver from the application server. Execute mvn wildfly:execute-commands -P "remove-driver". You don't need process-resources if you have already executed the command before, unless you change the scripts.

Add Datasource

The scripts with the commands to add a Datasource:

wow-auctions-install.cli
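Something along these lines; the ${datasource.*} placeholders are filtered in by Maven from configuration.properties, and the exact property names (other than datasource.jndi) are illustrative:

```
# Add the application datasource; connection settings are filtered in by Maven.
data-source add --name=WowAuctionsDS --driver-name=postgresql --jndi-name=${datasource.jndi} --connection-url=${datasource.connection.url} --user-name=${datasource.user} --password=${datasource.password}

# Point the Java EE 7 default datasource binding to the new datasource.
/subsystem=ee/service=default-bindings:write-attribute(name="datasource", value="${datasource.jndi}")
```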

We also need a file to define the properties:

configuration.properties
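Illustrative contents (only the JNDI name comes from the post; the rest are placeholder values):

```
datasource.jndi=java:/datasources/WowAuctionsDS
datasource.connection.url=jdbc:postgresql://localhost:5432/wowauctions
datasource.user=wowauctions
datasource.password=wowauctions
```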

Default Java EE 7 Datasource

Java EE 7 specifies that the container should provide a default Datasource. Instead of defining a Datasource with the JNDI name java:/datasources/WowAuctionsDS in the application, we are going to point the server's default datasource binding to our newly created datasource with /subsystem=ee/service=default-bindings:write-attribute(name="datasource", value="${datasource.jndi}"). This way, we don't need to change anything in the application. Execute the script with mvn wildfly:execute-commands -P "install-wow-auctions". You should get the following Maven output:

And on the server:

wow-auctions-remove.cli
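Something along these lines; java:jboss/datasources/ExampleDS is Wildfly's out-of-the-box default datasource:

```
# Restore the default datasource binding to Wildfly's built-in ExampleDS.
/subsystem=ee/service=default-bindings:write-attribute(name="datasource", value="java:jboss/datasources/ExampleDS")

# Remove the application datasource.
data-source remove --name=WowAuctionsDS
```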

This is the script to remove the Datasource and revert to the Java EE 7 default Datasource. Run it by executing mvn wildfly:execute-commands -P "remove-wow-auctions".

Conclusion

This post demonstrated how to automate adding and removing drivers to Wildfly instances, and also adding and removing Datasources. This is useful if you want to switch between databases or if you're configuring a server from the ground up. Think about CI environments. These scripts are also easily adjustable to other drivers.

You can get the code from the WoW Auctions Github repo, which uses this setup. Enjoy!

How to use JPA

posted by Roberto Cortez on

Today, my third article, "How to use JPA correctly to avoid complaints of a slow application", was published on RebelLabs. In the last few years I have worked a lot with JPA, so it was a perfect opportunity to write down some of the things I have learned and hopefully help other developers.

Database Slow

A special thanks to RebelLabs for letting me publish my work there and of course to Oliver White for all the support on writing and reviewing the article. You rock!