Building a modern Java app with Eclipse, Maven, OSGi, and Spring DM (Part 4)

August 5th, 2009  |  Published in development

Today I’m going to talk about how we’re using the Eclipse IDE to develop the product.  Much of what I discuss was worked out by our Eclipse guru, Stephen Evanchik, so credit for it goes to him as well as to the folks who worked on the JDT and PDE features.  Can’t forget to include the Sonatype folks who work on the m2eclipse plugin.

Background

Originally we worked at the command line using Maven, but as the team grew we started doing the majority of the work in Eclipse and relying on the stable development release of the m2eclipse plugin to integrate with Maven.

We import all of our modules into Eclipse as Maven projects and via the magic of the P4WSAD SCM plugin, many developers never need to leave the IDE once they’ve done an initial build at the command line.  We also have custom launchers and target platform definitions that allow developers to run and debug the product from within the IDE.

Typical workflow

Developers start by checking out code via p4 or p4v in order to get a full copy of the codebase.  They then typically perform a full build at the command line using Maven to do some one-time setup (copying files around, creating the Eclipse target platform, etc.) and to install an initial copy of the artifacts into the local Maven repository.  The projects are then imported into Eclipse.

Once the projects are loaded, developers can work on their features or bug fixes and use P4WSAD to interact with Perforce.  It works fairly well, although there are some rough edges (such as having to go through the Team > Share Project dialog for every single project individually instead of sharing them all as a single batch).

For running and debugging the product we’ve assembled a custom target platform that incorporates all of our dependencies, so developers can launch the product and test out their work in the same runtime the final product will use.  Remote debugging has also come in very handy: when QA reports an issue, a developer can connect directly to the service and set breakpoints.

What works well

The target platform support and PDE tools (the manifest editor in particular) have worked well, and hopefully we’ll get everyone migrated from Ganymede to Galileo soon.

What could be better

  1. I already mentioned the rough edges in P4WSAD, but thankfully those are typically only a problem during the initial (large) import into Eclipse.  It can also be difficult to remember which projects you’ve already shared when projects come and go.  One thing that makes shared projects more obvious is to enable the Perforce label decorations in the package explorer (Preferences > General > Appearance > Label Decorations, then check the Perforce option), which shows the Perforce server and file information alongside each project.
  2. We’ve also experienced a lot of pain with m2eclipse because of the time required to calculate dependencies and its tendency to eagerly rebuild projects.  Some of this is due to our running Maven in offline mode and the wonky HTTP proxies we have to deal with, but in some cases it blatantly ignores the settings you’ve given it.
  3. Maven, Eclipse, and OSGi can be tricky enough on their own, but can be downright scary to someone who’s new to Java.  There’s a lot of magic involved for the newbie, so the learning curve can be quite steep.  I would recommend that folks add a single technology at a time instead of trying for that perfect storm all at once.


Building a modern Java app with Eclipse, Maven, OSGi, and Spring DM (Part 3)

August 4th, 2009  |  Published in development

Background

As I mentioned a few weeks ago, we have about 120 modules (bundles) that are part of our regular build process.  Almost all of those bundles use Spring and Spring Dynamic Modules to wire their dependencies and configuration together.

Spring is used to automatically inject configuration settings into components (in conjunction with Apache Commons Configuration) as well as to inject the particular implementation into components where multiple options exist (such as database persistence, caching layers, etc.).
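
For illustration, the wiring looks roughly like this (the com.example class, bean names, and properties file are hypothetical stand-ins, not our actual code): a Commons Configuration object is converted to plain java.util.Properties and handed to Spring’s PropertyPlaceholderConfigurer, so ${…} placeholders resolve against it.

    <!-- Sketch only: expose a Commons Configuration file to Spring's
         placeholder mechanism.  File, bean, and class names under
         com.example are illustrative. -->
    <bean id="commonsConfig"
          class="org.apache.commons.configuration.PropertiesConfiguration">
        <constructor-arg value="myservice.properties"/>
    </bean>

    <!-- ConfigurationConverter.getProperties() is a static factory method -->
    <bean id="configProperties"
          class="org.apache.commons.configuration.ConfigurationConverter"
          factory-method="getProperties">
        <constructor-arg ref="commonsConfig"/>
    </bean>

    <bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="properties" ref="configProperties"/>
    </bean>

    <!-- Settings are then injected as ordinary placeholders -->
    <bean id="cache" class="com.example.cache.SimpleCache">
        <property name="maxEntries" value="${cache.maxEntries}"/>
    </bean>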

Many of the major components in the system are exposed as OSGi services and so we use Spring DM to automatically register and consume those services without coupling our code directly to OSGi.  Spring DM is invaluable at helping to “damp the use of services” (as SpringSource’s Glyn Normington put it at one point), meaning that by giving you a dynamic proxy instead of a reference to the real service, your app can better tolerate the perturbations caused by services coming and going.
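
With Spring DM this boils down to a couple of namespace elements.  A minimal sketch, assuming hypothetical com.example service interfaces:

    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:osgi="http://www.springframework.org/schema/osgi"
           xsi:schemaLocation="http://www.springframework.org/schema/beans
               http://www.springframework.org/schema/beans/spring-beans.xsd
               http://www.springframework.org/schema/osgi
               http://www.springframework.org/schema/osgi/spring-osgi.xsd">

        <!-- Publish a local bean as an OSGi service -->
        <osgi:service ref="inventoryService"
                      interface="com.example.inventory.InventoryService"/>

        <!-- Consume a service from another bundle; what gets injected is a
             dynamic proxy, so consumers tolerate the service coming and going -->
        <osgi:reference id="persistence"
                        interface="com.example.persistence.PersistenceService"/>

    </beans>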

I don’t have much else to describe about the work we’re doing with Spring DM because the reference manual is so comprehensive!

Recommendations

  1. Keep your code decoupled from OSGi by relying on Spring DM as the glue between your app and the OSGi framework.
  2. As suggested by the Spring DM documentation, split your application contexts into two files: one to contain the standard Spring bean definitions, and a second to contain the OSGi specific beans.  This will make it easier to test and substitute mocks in place of real OSGi services.
  3. Create a parallel set of “test” contexts in Maven’s test resource hierarchy (src/test/resources) that mock certain objects or use test-specific configuration.
  4. Create Spring integration tests to verify that your application contexts are correct (in particular, see the Spring TestContext Framework section of the reference; there’s a sketch after this list).  Also use OSGi integration tests to confirm that your services and service references are defined correctly.
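
To make the last recommendation concrete, here’s a minimal sketch of a TestContext-based test (JUnit 4 style; the context path and bean name are hypothetical).  Thanks to the split in recommendation 2, it loads only the plain Spring context and never touches OSGi:

    import static org.junit.Assert.assertNotNull;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.context.ApplicationContext;
    import org.springframework.test.context.ContextConfiguration;
    import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

    @RunWith(SpringJUnit4ClassRunner.class)
    @ContextConfiguration(locations = "classpath:META-INF/spring/module-context.xml")
    public class InventoryModuleContextTest {

        @Autowired
        private ApplicationContext context;  // the loaded (non-OSGi) application context

        @Test
        public void moduleContextDefinesTheService() {
            // Fails fast if the bean definitions are broken or missing
            assertNotNull(context.getBean("inventoryService"));
        }
    }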


Building a modern Java app with Eclipse, Maven, OSGi, and Spring DM (Part 2)

July 9th, 2009  |  Published in development

Continuing from yesterday’s post about Maven, I’m going to briefly discuss our approach to handling OSGi.

Every module that we build, whether it’s a JAR or a WAR, is packaged as an OSGi bundle. Originally we relied on the maven-bundle-plugin from Apache Felix to generate the MANIFEST.MF files every time the package phase was executed. Because the manifest was dynamically generated, we were careful never to check it in: a checked-in copy would cause errors when a developer or our continuous integration system attempted to build the product, since the build would inadvertently clobber the read-only manifest file.

As part of the plug-in configuration we originally hand-coded each and every package import, although recently we’ve been relying more on bnd’s ability to scan bytecode, adding imports by hand only for packages referenced from Spring application contexts or other non-bytecode sources. This has made the process much easier, but there is still a chance that you could end up with a ClassNotFoundException or NoClassDefFoundError if you’re not careful (more on that in a moment).
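
For illustration, the plug-in configuration ends up looking something like the following (the package names are hypothetical examples; the trailing * tells bnd to add whatever else it discovers through bytecode scanning):

    <plugin>
        <groupId>org.apache.felix</groupId>
        <artifactId>maven-bundle-plugin</artifactId>
        <extensions>true</extensions>
        <configuration>
            <instructions>
                <Export-Package>com.example.inventory</Export-Package>
                <!-- Packages referenced only from Spring XML or loaded
                     reflectively must be listed by hand; "*" lets bnd add
                     everything it finds via bytecode scanning. -->
                <Import-Package>org.hibernate.proxy,javassist.util.proxy,*</Import-Package>
            </instructions>
        </configuration>
    </plugin>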

We have also recently switched to generating the manifest file once and then checking it into Perforce. The static manifest means that when we import the modules into Eclipse we can take advantage of the PDE tooling to edit and maintain the manifest. It also allows us to have the modules (bundles) automatically added to the Eclipse target platform, making it easy to run the product and debug it from within the Eclipse IDE.

One downside to these approaches is that there is still duplicated metadata between Maven and the maven-bundle-plugin/OSGi because they do not share a common metadata source. Our practice of relying on bnd to pick up most imports has minimized this to some degree. We expect that future improvements to Maven, PDE, and other OSGi tooling will eliminate the problem entirely.

Testing

OSGi metadata is challenging to get right the first time, so early on we established the practice of creating OSGi integration tests for each bundle we produce. The purpose of the tests is to verify that the correct packages are imported/exported by a bundle, that no implementation classes slip into the export list, and that any services or service references are correctly registered/resolved. One of my colleagues wrote an abstract class that takes away much of the pain of programmatically starting Equinox and automatically loading in our third-party dependencies, so all an individual test author needs to do is essentially reproduce the OSGi metadata/contract in JUnit form (we’re currently using Spring DM’s OSGi testing support).
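
For a flavor of what these look like, here’s a minimal sketch written directly against Spring DM’s testing base class rather than our internal abstract class; the bundle coordinates and service name are hypothetical:

    import org.osgi.framework.ServiceReference;
    import org.springframework.osgi.test.AbstractConfigurableBundleCreatorTests;

    /**
     * Sketch of an OSGi integration test.  The base class starts an OSGi
     * framework, installs the listed bundles plus an on-the-fly bundle
     * built from the test classpath, and injects bundleContext.
     */
    public class InventoryBundleIntegrationTest
            extends AbstractConfigurableBundleCreatorTests {

        @Override
        protected String[] getTestBundlesNames() {
            // "groupId,artifactId,version" triples resolved from the Maven repository
            return new String[] { "com.example,inventory-api,1.0.0" };
        }

        public void testInventoryServiceIsRegistered() throws Exception {
            ServiceReference ref = bundleContext
                    .getServiceReference("com.example.inventory.InventoryService");
            assertNotNull("InventoryService should be registered", ref);
        }
    }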

So far the tests have been rather successful at identifying missing dependencies prior to deployment. The only real downside to this approach is the duplication of metadata in the test cases.

Recommendations

  1. Rely on bnd’s bytecode scanning technique to pick up most package imports instead of explicitly adding them to the maven-bundle-plugin’s configuration.
  2. Explicitly add packages referenced from XML files (Spring, Hibernate, etc.) as well as packages whose classes are loaded dynamically. Hibernate and other frameworks that use cglib/javassist can be particularly difficult to get right if you’re not extremely careful.
  3. Make sure your modules always have a manifest file with OSGi metadata so you can take advantage of the Eclipse PDE tooling.
  4. Run your bundles in an environment as similar as possible to your target platform prior to deploying in production so you can ensure that all the necessary dependencies have been specified and are present in the environment.


Building a modern Java app with Eclipse, Maven, OSGi, and Spring DM (Part 1)

July 8th, 2009  |  Published in development

Last week Michael Nygard tweeted about difficulties with Eclipse, Maven, OSGi, and Spring DM. Given that Michael and others have expressed interest in hearing how we’ve been using all of those technologies when developing VMware’s forthcoming vCloud product, I thought I’d try to go through it over the next week or two in a series of blog posts. Today’s post will provide some of the details about our base Maven setup.

Note: I won’t be talking about the specifics of our product, so if you’d like more details please consult the presentation from my colleague Orran Krieger. I’m also not going to touch much on our deployment work; for that see my other colleague Stephen Evanchik’s blog or the Eclipse Integration for Karaf project he started on FUSE Forge.

Project layout and building the product

The product codebase is almost entirely Java and when we first started writing code last year it seemed to make sense to use a tool that understood Java and was able to help us resolve third-party dependencies. At the time we weren’t really aware of Ant+Ivy, so we opted to go with Maven. It was also nice that Maven follows the convention-over-configuration approach which made it very easy to create new modules and quickly get new developers up to speed. One downside at the time was that the Sonatype folks were still working on their Maven book, so we ended up having to figure out a number of Maven best practices on our own, which resulted in some frustration early on until we got over the learning curve. Today we have about 120 modules that are part of the build, and 1-2 dozen additional modules that are not part of the regular process.

We have a single master POM that defines the project defaults (including artifact versions and plug-in configurations). The rest of the modules are organized into subsystems, and each subsystem has its own POM to allow us to build them in isolation if we wish (in some cases there are inter-subsystem dependencies that prevent us from doing so). For the final deliverable we rely on Maven assemblies to collect the appropriate artifacts and package them into a tarball suitable for distribution.
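
As a rough sketch of the shape (all of the names here are hypothetical), a subsystem POM inherits the master POM and aggregates its own modules, which is what lets us build a subsystem in isolation.  Note the sibling-directory relativePath, which recommendation 1 below explains:

    <project xmlns="http://maven.apache.org/POM/4.0.0">
        <modelVersion>4.0.0</modelVersion>
        <!-- Inherit defaults (versions, plug-in config) from the master POM -->
        <parent>
            <groupId>com.example.product</groupId>
            <artifactId>product-master</artifactId>
            <version>1.0.0</version>
            <relativePath>../product-master/pom.xml</relativePath>
        </parent>
        <artifactId>persistence-subsystem</artifactId>
        <packaging>pom</packaging>
        <!-- The modules that make up this subsystem -->
        <modules>
            <module>persistence-api</module>
            <module>persistence-jdbc</module>
        </modules>
    </project>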

One other important distinction is that we always download artifacts into a local repository that is checked into Perforce (the SCM system we use) and run Maven in offline mode. This allows us to reproduce any build based on the Perforce changelist number and also means the team doesn’t spend all day downloading artifacts just to do a build.
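
The relevant knobs live in Maven’s settings.xml; a minimal sketch, with an illustrative stand-in for wherever the checked-in repository sits in a developer’s workspace:

    <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
        <!-- Never hit the network during a build -->
        <offline>true</offline>
        <!-- Point the local repository at the copy checked into SCM -->
        <localRepository>/path/to/workspace/tools/maven-repository</localRepository>
    </settings>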

Recommendations

  1. Place parent POMs in a sibling directory (e.g. ../foo-parent) instead of in the parent directory (../), as Eclipse/m2eclipse seems to handle the nested projects slightly better. We originally ran into problems where Eclipse would start shifting files and output artifacts around when the parent POM wasn’t in its own directory.
  2. Define variables in the master POM to capture artifact versions. This will allow you to update the value in one place and have the change automatically propagate throughout the system. There is nothing more frustrating than having to search and replace version strings through 120 modules and in different scopes. Multiply that by the number of artifacts that comprise Spring or Spring DM and you’ll soon be begging for a drink.
  3. Take advantage of the dependencyManagement and pluginManagement elements so artifact versions don’t need to be specified in child POMs (there’s a sketch of this and the previous recommendation after this list).
  4. Utilize profiles to pull out processes that don’t need to be executed all of the time. We originally generated some JAXB and WSDL stubs on every build until we eventually moved those goals into deactivated profiles, since they only needed to run the few times those sources actually changed. We also started to do this for the MANIFEST.MF files, which I’ll touch on more in a future post.
  5. Don’t use snapshot versions of artifacts unless absolutely necessary. They make it extremely difficult (if not impossible) to produce repeatable builds. We got in the habit of disabling snapshots in our repository definitions to make sure they didn’t slip in.
  6. Don’t use version ranges for dependencies if possible; you want to be able to recreate a build exactly without having to guess what version of an artifact was pulled from the repository at the time of the original build. If you’re using an offline repository this is slightly easier, because you have a static snapshot of the repository and can sync to a particular changelist number (referring to Perforce in particular).
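
To make recommendations 2 and 3 concrete, here’s a master-POM fragment (the artifact versions are examples only).  Child POMs can then declare these dependencies without any version element at all:

    <!-- Capture versions once, in one place -->
    <properties>
        <spring.version>2.5.6</spring.version>
        <spring.osgi.version>1.2.0</spring.osgi.version>
    </properties>

    <!-- Child POMs inherit these versions automatically -->
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-context</artifactId>
                <version>${spring.version}</version>
            </dependency>
            <dependency>
                <groupId>org.springframework.osgi</groupId>
                <artifactId>spring-osgi-core</artifactId>
                <version>${spring.osgi.version}</version>
            </dependency>
        </dependencies>
    </dependencyManagement>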
