After attending Devoxx, I travelled straight to Sofia, Bulgaria, to attend one of the most popular conferences in the region, Java2Days. Java2Days was started 6 years ago by Iva, Nadia and Yoana (yes, they are all girls). They have been doing a terrific job organizing and growing the conference over the last few years. This edition counted more than 800 attendees and 25% were women! For years, the tech world has been trying to attract more women to a field mostly dominated by men. For instance, Devoxx women’s attendance was only 5%. Maybe the tech world could extract a few lessons from Java2Days on how to attract more women.
This was my first time at Java2Days and I really enjoyed my time there. I was also invited as a speaker, presenting the same sessions I gave at JavaOne: Java EE 7 Batch Processing in the Real World with Ivan Ivanov and The 5 people in your organization that grow legacy code.
Sessions
Java2Days kicked off with a packed room listening to a very good inspirational keynote by John Davies about the Technology Landscape and Innovation. I’ve retained the following words: “If you don’t innovate, others will”. Check the full keynote here.
Most of the speakers were locals (speaking in English) and they delivered great content. Java2Days is a perfect place for lesser-known speakers to show their skills (myself included).
I’ve spent a lot of time hacking, but also attended a few sessions. These are my top 3 sessions (from the ones I have attended):
Unfortunately, the sessions were not recorded, but you can check the slides here.
My Sessions
After practicing the presentations a bit more following JavaOne, I was more comfortable delivering the sessions. I’m very happy with the result. Attendees seemed interested and the rooms were full for both sessions. Thank you to everyone who attended.
Here are the slides:
Community
The community was amazing! Very friendly and happy to have people from outside of Bulgaria. There were also a lot of Macedonians at the conference, who invited me to attend their own event next year. If I’m available, I will gladly attend.
I’ve also noticed that some attendees don’t feel comfortable approaching speakers. This is nothing new, since it also happens at other conferences, but I would like to leave this message: feel free to ALWAYS approach me and engage in conversation with me. I would be very disappointed to learn I missed the opportunity to engage with someone because he or she couldn’t get to me. Please do it next time!
Final Words
Java2Days was a great conference. I was surprised by the atmosphere, which was awesome and friendly. I was treated very well by everyone, and I already have plans to return next year. If you find the time, don’t hesitate to pay Java2Days a visit.
A big thanks to Iva, Nadia and Yoana for having me at Java2Days and for being great hosts to me. Also a special thanks to Ivan Ivanov for convincing me to go there and for his awesome hospitality. Cya next year!
This year, Devoxx Belgium was held from 10 to 14 November at its usual place, the Metropolis Business Center in Antwerp. This was my second time at Devoxx BE and I enjoyed my time there. Unfortunately, none of my submitted sessions were selected for this event. It’s very hard to get in, since there are so many good submissions. Check the program here.
Announcements
Devoxx is going to Poland. The popular 33rd Degree Conference is rebranding to Devoxx.
Parleys has a new look and it’s now possible to enroll in online courses by recognised experts from across the tech sector.
A new knowledge sharing platform was revealed: Voxxed.com. Here you should be able to find the most recent news about Java and JVM technologies. When the platform was being demoed, the server crashed with everyone in the audience trying to access it. A funny moment, but the platform is stable now, and I had no problems accessing it afterwards.
A new brand of smaller conferences was launched as Voxxed Days. These are one day tech events organised by local community groups and supported by the Voxxed team. Check the schedule. Voxxed Days are going to be in Vienna, Ticino, Istanbul and Berlin. Maybe we can bring them to Portugal too!
Sessions
I’ve spent most of my time in the Hackergarten, but I also attended a few sessions. I recommend having a look at these:
All the sessions will be on Parleys. So keep an eye on it.
There were a lot of sessions dedicated or related to Docker. It seems to be the next big thing. It will be interesting to see whether Docker becomes a threat to multi-platform Java and opens the path to other technologies. I don’t believe it will, since Java has evolved way beyond that, but let’s see what happens.
We also got to see the very last podcast of The Java Posse. Thank you for all the great content produced for the Java community since 2005.
Interesting Facts
Attendees of Devoxx vote on certain topics on whiteboards. Check this great post: Devoxx 2014 – Whiteboard votes by Stephen Colebourne with this year’s hot topics.
All results should be looked at carefully, since they represent a very small number of developers at a top tech conference. Still, it’s interesting to see very good adoption of Java 8 and IntelliJ IDEA being the number one IDE these days. What made me wonder was the huge number of web frameworks or techniques (I counted 27!) to build web applications. Diversity is good, but does anyone else feel that something is really wrong here?
Final Words
It was great to attend Devoxx and to hang out in person with the people you usually only interact with online. If you have never been to a conference, you should definitely consider attending Devoxx. It’s probably the reference conference in Europe.
(Please serve better food next year!)
Next I travelled to the Java2Days conference in Sofia, Bulgaria, so expect a post about that one too.
One of the latest features in JPA 2.1 is the ability to specify fetch plans using Entity Graphs. This is useful since it allows you to customize the data that is retrieved with a query or find operation. When working with mid to large size applications, it is common to display data from the same entity in different ways. In other cases, you just want to select the smallest set of information to optimize the performance of your application.
You don’t have many mechanisms to control what is loaded or not in a JPA Entity. You could use EAGER / LAZY fetching, but these definitions are pretty much static. You are unable to change their behaviour when retrieving data, meaning that you are stuck with what was defined in the entity. Changing these mid-development is a nightmare, since it can cause queries to behave unexpectedly. Another way to control loading is to write specific JPQL queries. You usually end up with very similar queries and methods like findEntityWithX, findEntityWithY, findEntityWithXandY, and so on (see the sketch below).
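Here is a minimal sketch of that proliferation, using the Movie entity from the example further down (the class and method names are hypothetical):

import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// Hypothetical repository illustrating the problem: every fetch combination
// ends up with its own, almost identical, JPQL query.
public class MovieQueries {

    @PersistenceContext
    private EntityManager entityManager;

    // Loads movies together with their actors.
    public List<Movie> findMoviesWithActors() {
        return entityManager.createQuery(
            "SELECT DISTINCT m FROM Movie m LEFT JOIN FETCH m.movieActors",
            Movie.class).getResultList();
    }

    // Almost the same query, just a different association.
    public List<Movie> findMoviesWithAwards() {
        return entityManager.createQuery(
            "SELECT DISTINCT m FROM Movie m LEFT JOIN FETCH m.movieAwards",
            Movie.class).getResultList();
    }

    // And yet another copy for the combined case.
    public List<Movie> findMoviesWithActorsAndAwards() {
        return entityManager.createQuery(
            "SELECT DISTINCT m FROM Movie m LEFT JOIN FETCH m.movieActors "
            + "LEFT JOIN FETCH m.movieAwards",
            Movie.class).getResultList();
    }
}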
Before JPA 2.1, the implementations already supported non-standard ways to load data similar to Entity Graphs: Hibernate has Fetch Profiles, OpenJPA has Fetch Groups and EclipseLink has Fetch Groups. It was logical to bring this kind of behaviour into the specification. It gives you much finer and more detailed control over what you need to load, using a standard API.
Example
Consider the following Entity Graph:
(Probably the relationships should be N to N, but let’s keep it simple.)
And the Movie Entity has the following definition:
@Entity
@Table(name = "MOVIE_ENTITY_GRAPH")
@NamedQueries({
    @NamedQuery(name = "Movie.findAll", query = "SELECT m FROM Movie m")
})
@NamedEntityGraphs({
    @NamedEntityGraph(
        name = "movieWithActors",
        attributeNodes = {
            @NamedAttributeNode("movieActors")
        }
    ),
    @NamedEntityGraph(
        name = "movieWithActorsAndAwards",
        attributeNodes = {
            @NamedAttributeNode(value = "movieActors", subgraph = "movieActorsGraph")
        },
        subgraphs = {
            @NamedSubgraph(
                name = "movieActorsGraph",
                attributeNodes = {
                    @NamedAttributeNode("movieActorAwards")
                }
            )
        }
    )
})
public class Movie implements Serializable {
    @Id
    private Integer id;

    @NotNull
    @Size(max = 50)
    private String name;

    @OneToMany
    @JoinColumn(name = "ID")
    private Set<MovieActor> movieActors;

    @OneToMany(fetch = FetchType.EAGER)
    @JoinColumn(name = "ID")
    private Set<MovieDirector> movieDirectors;

    @OneToMany
    @JoinColumn(name = "ID")
    private Set<MovieAward> movieAwards;
}
Looking closer at the entity, we can see that we have three 1 to N relationships and that movieDirectors is set to be eagerly loaded. The other relationships use the default lazy loading strategy. If we want to change this behaviour, we can define different loading models using the @NamedEntityGraph annotation. Just set a name to identify the graph and then use @NamedAttributeNode to specify which attributes of the root entity you want to load. For relationships, set a name for the subgraph and then define it with @NamedSubgraph. In detail:
Annotations
@NamedEntityGraph(
    name = "movieWithActors",
    attributeNodes = {
        @NamedAttributeNode("movieActors")
    }
)
This defines an Entity Graph named movieWithActors, which specifies that the relationship movieActors should be loaded.
@NamedEntityGraph(
    name = "movieWithActorsAndAwards",
    attributeNodes = {
        @NamedAttributeNode(value = "movieActors", subgraph = "movieActorsGraph")
    },
    subgraphs = {
        @NamedSubgraph(
            name = "movieActorsGraph",
            attributeNodes = {
                @NamedAttributeNode("movieActorAwards")
            }
        )
    }
)
This defines an Entity Graph named movieWithActorsAndAwards, which specifies that the relationship movieActors should be loaded. Additionally, it specifies that each loaded movieActors element should also load its movieActorAwards.
Note that we don’t specify the id attribute in the Entity Graph. This is because primary keys are always fetched, regardless of what is specified. This is also true for version attributes.
Hints
To use the Entity Graphs defined in a query, you need to set them as a hint. There are two hint properties, and they also influence the way the data is loaded.
You can use javax.persistence.fetchgraph: this hint treats all the attributes specified in the Entity Graph as FetchType.EAGER. Attributes that are not specified are treated as FetchType.LAZY.
The other hint property is javax.persistence.loadgraph. This one also treats all the attributes specified in the Entity Graph as FetchType.EAGER, but attributes that are not specified keep their specified or default FetchType.
For more information, please refer to the sections 3.7.4.1 – Fetch Graph Semantics and 3.7.4.2 – Load Graph Semantics of the JPA 2.1 specification.
To simplify, and based on our example, here is what happens when applying the Entity Graph movieWithActors:
| Relationship | Default / Specified | javax.persistence.fetchgraph | javax.persistence.loadgraph |
| --- | --- | --- | --- |
| movieActors | LAZY | EAGER | EAGER |
| movieDirectors | EAGER | LAZY | EAGER |
| movieAwards | LAZY | LAZY | LAZY |
In theory, this is how the different relationships should be fetched. In practice, it may not work this way, because the JPA 2.1 specification also states that the JPA provider can always fetch extra state beyond the one specified in the Entity Graph. This is because the provider is allowed to optimize which data to fetch and may end up loading much more. You need to check your provider’s behaviour. For instance, Hibernate always fetches everything that is specified as EAGER, even when using the javax.persistence.fetchgraph hint. Check the issue here.
Query
Performing the query is easy. You do it as you normally would, but call setHint on the Query object:
@PersistenceContext
private EntityManager entityManager;

public List<Movie> listMovies(String hint, String graphName) {
    return entityManager.createNamedQuery("Movie.findAll")
                        .setHint(hint, entityManager.getEntityGraph(graphName))
                        .getResultList();
}
To get the Entity Graph you want to use in your query, call the getEntityGraph method on the EntityManager and pass in the graph name. Then use that reference in the hint. The hint must be either javax.persistence.fetchgraph or javax.persistence.loadgraph.
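For example, assuming the listMovies method above and the graphs defined earlier, the calls would look like this:

// Fetch graph: movieActors becomes EAGER, everything else LAZY.
List<Movie> withActorsOnly =
    listMovies("javax.persistence.fetchgraph", "movieWithActors");

// Load graph: movieActors becomes EAGER, the rest keeps its mapping defaults
// (so movieDirectors stays EAGER).
List<Movie> withActorsAndDefaults =
    listMovies("javax.persistence.loadgraph", "movieWithActors");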
Programmatic
Annotations may become verbose, especially if you have big graphs or many Entity Graphs. Instead of using annotations, you can also define Entity Graphs programmatically. Let’s see how:
Start by adding a static meta model Entity Class:
@StaticMetamodel(Movie.class)
public abstract class Movie_ {
    public static volatile SingularAttribute<Movie, Integer> id;
    public static volatile SetAttribute<Movie, MovieAward> movieAwards;
    public static volatile SingularAttribute<Movie, String> name;
    public static volatile SetAttribute<Movie, MovieActor> movieActors;
    public static volatile SetAttribute<Movie, MovieDirector> movieDirectors;
}
This is not strictly needed, since you can reference the attributes by their string names, but the meta model gives you type safety.
EntityGraph<Movie> fetchAll = entityManager.createEntityGraph(Movie.class);
fetchAll.addSubgraph(Movie_.movieActors);
fetchAll.addSubgraph(Movie_.movieDirectors);
fetchAll.addSubgraph(Movie_.movieAwards);
This Entity Graph specifies that all the relationships of the Entity must be loaded. You can now adjust it to your own use cases.
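For instance, here is a minimal sketch of a programmatic equivalent of the movieWithActorsAndAwards graph defined earlier with annotations (the nested attribute is referenced by its string name, since no meta model for MovieActor was shown):

// Load movieActors and, for each actor, its movieActorAwards.
EntityGraph<Movie> movieWithActorsAndAwards = entityManager.createEntityGraph(Movie.class);
Subgraph<MovieActor> movieActorsGraph = movieWithActorsAndAwards.addSubgraph("movieActors");
movieActorsGraph.addAttributeNodes("movieActorAwards");

// Use it exactly like a named graph, by passing the instance as a hint.
List<Movie> movies = entityManager.createNamedQuery("Movie.findAll")
    .setHint("javax.persistence.fetchgraph", movieWithActorsAndAwards)
    .getResultList();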
Resources
You can find this sample code in the Java EE samples on GitHub. Check it here.
Extra note: there is currently a bug in EclipseLink / GlassFish that prevents the javax.persistence.loadgraph hint from working properly. Check the issue here.
Conclusion
Entity Graphs fill a gap in the JPA specification. They are an extra mechanism that helps you query for exactly what you need, and they can help you improve the performance of your application. But be smart when using them. There might be a better way.
Based on my session idea at JavaOne about things that went terribly wrong in our development careers, I thought about writing up a few of these stories. I’ll start with one of my favourite ones: crashing a customer’s mail server by generating more than 23 million emails! Yes, that’s right, 23 million!
History
A few years ago, I joined a project that had been in development for several months but had not yet had a production release. Actually, the project was scheduled to replace an existing application in the upcoming weeks. My first task on the project was to figure out what was needed to deploy the application in a production environment and replace the old one.
This application had a considerable number of users (around 50 k), but not all of them were active. The new application had a feature to exclude users that hadn’t logged into the application in the last few months. This was implemented as a timer (supposed to execute daily), and an email notification was sent to each excluded user warning them that they had been removed from the application.
The Problem
The release was installed on a Friday (yes, Friday!), and everyone went for a rest. Monday morning, all hell broke loose! The customer’s mail server was down, and nobody had any idea why.
The first reports indicated that the mail server was out of disk space, because it had around 2 million emails pending delivery and a lot more incoming. What the hell happened?
The Cause
Even with the server down, support was able to show us a copy of an email stuck on the server. It was consistent with the email sent when a user was excluded. It didn’t make any sense, because we had counted the number of users to be excluded and there were around 28 k, so only 28 k emails should have been sent. Even if all users were excluded, the number could not be higher than 50 k (the total number of users).
Invalid Email
Looking into the code, we found a bug that caused a user not to be excluded if he had an invalid email address. As a consequence, these users were picked up every time the timer executed. Of the 28 k users to be excluded, around 26 k had invalid emails. From Friday to Monday, that gives 3 executions * 26 k users, so 78 k emails. OK, so now we have an increase, but not close enough to the reported numbers.
Timer Bug
Actually, the timer also had a bug. It was not scheduled to execute daily, but every 8 hours. Let’s adjust the numbers: 3 days * 3 executions a day * 26 k users brings the total to 234 k emails. A considerable increase, but still far from a big number.
Additional Node
The operations team had installed the application on a second node, and the timer executed on both. So, double the number. Let’s update: 2 * 234 k emails brings the total to 468 k emails.
No-reply Address
Since the emails were automated, a no-reply address was set up as the email sender. The problem was that the domain of the no-reply address was invalid. Combined with the users’ invalid emails, this made the mail server enter a loop: each invalid user email generated an error email sent to the no-reply address, which was invalid as well, causing another email to be returned to the server. The loop only ended when the maximum hop count was exceeded. In this case it was 50. Now everything starts to make sense! Let’s update the numbers:
26 k users * 3 days * 3 executions * 2 servers * 50 hops, for a grand total of 23.4 million emails!
Aftermath
The customer lost all their email from Friday to Monday, but it was possible to recover the mail server. The problems were fixed and it never happened again. I remember those days being very stressful, but today all of us involved laugh about it!
Remember: always check the no-reply address!
Last Thursday, 30 October 2014, the sixth meeting of Coimbra JUG was held at the Department of Informatics Engineering of the University of Coimbra, in Portugal. The attendance was good: we had around 25 people listening to my talk about Java EE Batch. This is the same session I presented at JavaOne.
No one in the audience was using Java EE Batch, and only a couple were using Spring Batch. By coincidence, a few old colleagues of mine attended, who work on a project where I introduced the technology. The attendees seemed very curious and interested to learn about it, and the questions were great. A lot of interaction and discussion was generated during the session. A funny thing happened towards the end: there was a power failure and we had to finish the session in the dark! We were lucky, since the session was almost done. Discussions about the topic (and others) carried over to dinner. We had around ten enthusiasts, for our biggest dinner ever!
As always, we had surprises for the attendees: beer and chocolates if you participated in the discussion. IntelliJ sponsored our event by offering a free license to raffle among the attendees. Congratulations to Décio Sousa for winning the license. Develop with pleasure!
Here are the materials for the session:
Enjoy!
A few additional notes:
I would like to welcome Bruno Baptista to the Coimbra JUG organization. He is going to help me run the JUG.
Coimbra JUG is almost 1 year old! Let’s see if we can pull off something interesting to commemorate!