Java logging in development and production

Logging is tricky. Even some major open-source projects don’t do it correctly, so if you use their libraries, you end up with log files you didn’t ask for cluttering your machine.

Current best practice is to use a logging facade like commons-logging or slf4j to avoid these kinds of problems: the facade lets a library conform to whatever logging strategy the application using it has chosen. This means that if your app logs to myapp.log, then a library using slf4j will also log to myapp.log.

Here’s how we use slf4j in our projects:

Our libraries use slf4j-api – here’s the maven dependency you’ll need:
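The slf4j-api coordinates look like this (the version number here is illustrative – substitute whatever current version you’re using):

```xml
<dependency>
	<groupId>org.slf4j</groupId>
	<artifactId>slf4j-api</artifactId>
	<version>1.6.1</version>
</dependency>
```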


Our applications use slf4j-log4j12 – here’s the maven dependency you’ll need:
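The slf4j-log4j12 binding routes slf4j calls to log4j; the coordinates look like this (again, the version is illustrative – use the one matching your slf4j-api):

```xml
<dependency>
	<groupId>org.slf4j</groupId>
	<artifactId>slf4j-log4j12</artifactId>
	<version>1.6.1</version>
</dependency>
```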


The code to log looks like this (it’s the same in libraries and applications):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyClass {
	private static final Logger log = LoggerFactory.getLogger(MyClass.class);

	public MyClass() {
		log.info("some logging info");
	}
}

We want simple console logging during development. To do this we use a small log4j properties file containing only a console appender, as shown below.

log4j.rootLogger=INFO, Stdout
log4j.appender.Stdout=org.apache.log4j.ConsoleAppender
log4j.appender.Stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.Stdout.layout.ConversionPattern=%-5p - %-26.26c{1} - %m%n

We put this file in a Settings folder in our home directory.

During development we need to get log4j to load this file so we get console logging. We do this by defining a system property “log4j.configuration” in the Eclipse Preferences/Installed JREs/Edit/Default VM Arguments (that way it applies for all development projects):
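The VM argument looks something like this (the path and filename are illustrative – point it at wherever you keep your console-appender file in your Settings folder):

```
-Dlog4j.configuration=file:///home/username/Settings/log4j-dev.properties
```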


In production we do the same thing, but this time we pass a log4j configuration with a rolling file appender as shown below:

log4j.rootLogger=INFO, R
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=myapp.log
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%d %p %t %c - %m%n

So the startup in production looks something like this:

java -Dlog4j.configuration=file:///home/administrator/ -jar myapp.jar

Clustering scheduled jobs with Quartz and Terracotta

We’re using Quartz to handle scheduled jobs in our Java applications. Since we cluster our apps for failover using Terracotta, we need to address the question of how to fail over scheduled jobs: each job should execute exactly once, but for fault tolerance it should be scheduled on multiple servers.

Quartz handles this situation with clustered job stores, and Terracotta provides a job store for Quartz. It’s easy to use – just add the quartz-terracotta jar (you’ll also need a Quartz version newer than 1.7 – I’m using 1.8.4 below).
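The Maven coordinates look roughly like this (the quartz-terracotta version and group are from memory – check Maven Central for the ones matching your Quartz version):

```xml
<dependency>
	<groupId>org.quartz-scheduler</groupId>
	<artifactId>quartz</artifactId>
	<version>1.8.4</version>
</dependency>
<dependency>
	<groupId>org.terracotta.quartz</groupId>
	<artifactId>quartz-terracotta</artifactId>
	<version>1.2.2</version>
</dependency>
```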



Then set two system properties. I’m doing it programmatically below because I use a command-line option to enable clustering in applications using embedded Jetty. The “terracotta” command-line option is a Terracotta URL that tells the Terracotta client where its servers are (something like “terracotta:9510,terracotta2:9510” for a fault-tolerant pair of Terracotta servers).

if (cmd.hasOption("terracotta")) {
	System.setProperty("org.quartz.jobStore.class", "org.terracotta.quartz.TerracottaJobStore");
	System.setProperty("org.quartz.jobStore.tcConfigUrl", cmd.getOptionValue("terracotta"));
}

That’s basically it. The only additional thing you have to take care of: while setting up the scheduled job, another member of the cluster may already have set it up, so you should expect an ObjectAlreadyExistsException when scheduling the Quartz job.
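A minimal sketch of that defensive setup, assuming the Quartz 1.8 API (the scheduleOnce helper and its arguments are hypothetical names):

```java
import org.quartz.JobDetail;
import org.quartz.ObjectAlreadyExistsException;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.Trigger;

public class ClusteredJobSetup {
	// Schedule the job, tolerating the case where another cluster member
	// has already registered it in the shared Terracotta job store.
	static void scheduleOnce(Scheduler scheduler, JobDetail job, Trigger trigger)
			throws SchedulerException {
		try {
			scheduler.scheduleJob(job, trigger);
		} catch (ObjectAlreadyExistsException e) {
			// Another node got there first - the job is already scheduled.
		}
	}
}
```

With a clustered job store this makes the setup idempotent across the cluster, so every node can safely run the same scheduling code at startup.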

Weird racing clock problem on VMware Linux guests

A while back we installed a new Dell server with a low-power Xeon 3GHz L3110 CPU to run some other essential network infrastructure. We chose the specific server configuration because it dissipates less than 30W while running 5 VMware VMs. It runs for hours on a UPS and doesn’t require cooling, so even if our server room air-conditioning were to die, this server should keep our network, firewalls, VPNs, DNS, DHCP and primary Terracotta server and a few other vital services up long enough for us to figure out what’s going on.

The server is running CentOS 5.5 and VMware 1.0.10 – normally a very stable combination. However, we found that the Linux guests on this server were not keeping time properly: their clocks were running 50% too fast. We finally figured out that the CPU’s power saving was causing Linux to think it had a 2GHz processor instead of a 3GHz one, so the guests’ timekeeping was calibrated for 2GHz while the CPU was actually ticking at 3GHz – hence clocks running 50% fast (3GHz / 2GHz = 1.5). We disabled the demand-based power-saving feature of the CPU in the BIOS and the clocks now keep correct time.