March 30th, 2011 |
I’ll be straight with you. I’ve given up on blogging. You’ve probably guessed as much from the lack of posts over the last few years. Social media did it in: if I’ve got something to say, I end up saying it on Twitter or Google Buzz/Reader.
To end the sham, I’ve disabled comments and made my blog read-only. Perhaps at some point in the future when I’ve got something worthwhile to say I’ll make it read/write again, but until then you can follow me on Twitter if you want to know what I’m up to.
August 26th, 2009 |
project-management | 2 Comments
A few months ago I read 97 Things Every Project Manager Should Know and one point that really stood out for me was Adrian Wible’s recommendation to use a wiki for maintaining project information. There are a number of positive aspects to using a wiki, but Wible doesn’t even allude to the downsides: page sprawl, the difficulty of finding information, and the effort of keeping it all up to date. In my view those downsides outweigh the benefits.
At VMware we’ve had wikis for engineering teams to post project information for a number of years now but it’s become more of a dumping ground than a useful reference. In the last few years our use of the wiki has exploded, but engineers are busy so they tend to “fire and forget”: content gets posted but no one bothers to go back and update it. And because wikis set the bar extremely low, people just throw in content without thinking about findability or how to properly take advantage of hypermedia.
The result is that it’s extremely difficult to find the page or content you’re looking for. Our Google Search Appliance doesn’t even help because there aren’t enough quality links for the PageRank algorithm to produce useful results (that’s my guess anyway). So instead of finding a page with details on how our virtual machine monitor works, you’re more likely to get a page full of daily status log entries.
Perhaps the best solution is to do what Wikipedia does: establish standards for content, linking, and organizing the information (it also helps to have a volunteer army of curators/librarians to maintain it). Of course, this is easier said than done. Companies like VMware make money by shipping products, not pruning wiki pages, so it’s difficult to sell the idea of having all employees act as part-time curators unless you can quantify the ROI. Sadly, I don’t have an answer for that (yet). Some inexpensive alternatives that come to mind are to hire a librarian to prune and organize, or to have the company’s Intranet team help establish best practices (assuming they’re good at information architecture and not just throwing up Web pages).
While all of these options focus primarily on Web content, the overall problem of capturing organizational knowledge is much larger. Hopefully more on this soon…
Tags: 97-things, information-architecture, wiki
August 5th, 2009 |
Today I’m going to talk about how we’re using the Eclipse IDE to develop the product. Much of what I discuss was worked out by our Eclipse guru, Stephen Evanchik, so credit for it goes to him as well as to the folks that worked on the JDT and PDE features. Can’t forget to include the Sonatype folks who work on the m2eclipse plugin.
Originally we worked at the command line using Maven, but as the team grew we started doing the majority of the work in Eclipse and relying on the stable development release of the m2eclipse plugin to integrate with Maven.
We import all of our modules into Eclipse as Maven projects and via the magic of the P4WSAD SCM plugin, many developers never need to leave the IDE once they’ve done an initial build at the command line. We also have custom launchers and target platform definitions that allow developers to run and debug the product from within the IDE.
Developers start by checking out code via p4 or p4v in order to get a full copy of the codebase. They then typically perform a full build at the command line using Maven in order to do some one-time setup (copying files around, creating the Eclipse target platform, etc.) as well as installing an initial copy of the artifacts into the local Maven repository. The projects are then imported into Eclipse.
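The initial setup boils down to a couple of commands; a rough sketch (the depot path here is made up for illustration, and `-o` reflects the offline mode we run Maven in):

```shell
# Sync a full copy of the codebase from Perforce
p4 sync //depot/product/main/...

# One-time full build: copies files around, assembles the Eclipse
# target platform, and installs artifacts into the local Maven repo
cd main
mvn -o clean install
```

From that point on the modules can be imported into Eclipse as Maven projects and most work happens inside the IDE.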
Once the projects are loaded they can work on their features or bug fixes and use P4WSAD to interact with Perforce. It works fairly well, although there are some rough edges (such as having to select all the projects and go through the Team > Share Project dialog for every single project individually instead of doing it as a single batch).
For running and debugging the product we’ve assembled a custom target platform that incorporates all of our dependencies, so developers can launch the product and test out their work in the same runtime that the final product will use. Remote debugging can also be used which has come in very handy when QA reports issues and developers can connect directly to the service and set breakpoints.
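For the remote debugging case, the service JVM just needs to be launched with the standard JDWP agent; a sketch (the port and jar name are placeholders):

```shell
# Launch the service with the debug agent listening on port 8000
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8000 \
     -jar service.jar
```

A Remote Java Application debug configuration in Eclipse pointed at that host and port then lets a developer set breakpoints in the live service.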
What works well
The target platform support and PDE tools (the manifest editor in particular) have worked well, and hopefully we’ll get everyone migrated from Ganymede to Galileo soon.
What could be better
- I already mentioned the rough edges on P4WSAD, but thankfully that’s typically only a problem when doing the initial (large) import into Eclipse. It can also be difficult to remember which projects you’ve already shared if you’ve got projects coming and going. One thing that can make it more obvious which projects you’ve shared is to enable the Perforce annotations in the package explorer (Preferences > General > Appearance > Label Decorations and check the Perforce option). This will show the Perforce server and file information in the package explorer.
- We’ve also experienced a lot of pain with m2eclipse because of the time required to calculate dependencies and its tendency to eagerly rebuild projects. Some of this is due to our running Maven in offline mode and the wonky HTTP proxies we have to deal with, but in some cases it blatantly ignores the settings you’ve given it.
- Maven, Eclipse, and OSGi can be tricky enough on their own, but they can be downright scary to someone who’s new to Java. There’s a lot of magic involved, so the learning curve for a newbie can be quite steep. I’d recommend adding one technology at a time instead of going for the perfect storm all at once.
Tags: eclipse, m2eclipse, maven, osgi, p4wsad, perforce
August 4th, 2009 |
As I mentioned a few weeks ago, we have about 120 modules (bundles) that are part of our regular build process. Almost all of those bundles use Spring and Spring Dynamic Modules to wire their dependencies and configuration together.
Spring is used to automatically inject configuration settings into components (in conjunction with Apache Commons Configuration) as well as to inject the particular implementation into components where multiple options exist (such as database persistence, caching layers, etc.).
Many of the major components in the system are exposed as OSGi services and so we use Spring DM to automatically register and consume those services without coupling our code directly to OSGi. Spring DM is invaluable at helping to “damp the use of services” (as SpringSource’s Glyn Normington put it at one point), meaning that by giving you a dynamic proxy instead of a reference to the real service, your app can better tolerate the perturbations caused by services coming and going.
I don’t have much else to describe about unique work we’re doing with Spring DM because the reference manual is so comprehensive!
- Keep your code decoupled from OSGi by relying on Spring DM as the glue between your app and the OSGi framework.
- As suggested by the Spring DM documentation, split your application contexts into two files: one to contain the standard Spring bean definitions, and a second to contain the OSGi specific beans. This will make it easier to test and substitute mocks in place of real OSGi services.
- Create a parallel set of “test” contexts in Maven’s test resource hierarchy (src/test/resources) that mock certain objects or services.
- Create Spring integration tests to verify that your application contexts are correct (in particular, see the Spring TestContext Framework section of the reference). Utilize OSGi integration tests in addition to confirm that your services and service references are defined correctly.
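As a rough sketch of the two-file split described above (the bean and package names are invented for illustration), the plain Spring context holds only ordinary bean definitions:

```xml
<!-- META-INF/spring/module-context.xml: plain Spring beans, no OSGi -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <bean id="inventoryService"
          class="com.example.inventory.InventoryServiceImpl"/>
</beans>
```

while the companion context contains only the OSGi-specific pieces:

```xml
<!-- META-INF/spring/module-osgi-context.xml: OSGi-specific beans only -->
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:osgi="http://www.springframework.org/schema/osgi"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/osgi
           http://www.springframework.org/schema/osgi/spring-osgi.xsd">

    <!-- publish the plain bean above as an OSGi service -->
    <osgi:service ref="inventoryService"
                  interface="com.example.inventory.InventoryService"/>

    <!-- consume another bundle's service via a dynamic proxy -->
    <osgi:reference id="auditService"
                    interface="com.example.audit.AuditService"/>
</beans>
```

In a unit test you load only the first file and supply a mock for `auditService`, without an OSGi framework anywhere in sight.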
Tags: osgi, spring, spring-dm
July 26th, 2009 |
In my first post on using Maven, OSGi, and Spring to create enterprise apps I listed several recommendations for how to structure the projects. One reader called me on the fact that I didn’t post a sample, so I’ve put together a two-module sample that illustrates the second and third recommendations: http://infinitechaos.com/files/sample-maven-projects.zip
The combination of the help:effective-pom and dependency:tree goals as well as the -X flag should let you see how the settings from the parent module are being inherited and used by the child.
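Concretely, run these from the child module’s directory:

```shell
# Show the fully merged POM the child actually builds with
mvn help:effective-pom

# Show the resolved dependency graph
mvn dependency:tree

# Add -X to either goal for debug output on how settings were resolved
mvn -X help:effective-pom
```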
July 9th, 2009 |
development | 4 Comments
Continuing from yesterday’s post about Maven, I’m going to briefly discuss our approach to handling OSGi.
Every module that we build, whether it’s a JAR or a WAR, is packaged as an OSGi bundle. Originally we relied on the maven-bundle-plugin from Apache Felix to generate the MANIFEST.MF file every time the package phase was executed. Because the manifest was dynamically generated, we were careful never to check it in: a checked-in copy is read-only under Perforce, so a developer or our continuous integration system building the product would hit errors when the build tried to clobber the read-only manifest file.
As part of the plug-in configuration we originally hand-coded each and every package import, although recently we’ve been relying more on bnd’s ability to scan bytecode and only adding imports for packages referenced from Spring application contexts or other non-bytecode sources. This has made the process much easier, but there is still a chance that you could end up with a ClassNotFoundException or NoClassDefFoundError if you’re not careful (more on that in a moment).
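A sketch of what that plug-in configuration looks like (the package names are placeholders): bnd’s `*` wildcard picks up everything it can find in the bytecode, and the explicit entries cover packages referenced only from Spring XML:

```xml
<plugin>
    <groupId>org.apache.felix</groupId>
    <artifactId>maven-bundle-plugin</artifactId>
    <extensions>true</extensions>
    <configuration>
        <instructions>
            <Export-Package>com.example.inventory</Export-Package>
            <Import-Package>
                <!-- referenced only from a Spring context, so bnd
                     can't discover it by scanning bytecode -->
                com.example.persistence.jdbc,
                <!-- let bnd infer the rest from the bytecode -->
                *
            </Import-Package>
        </instructions>
    </configuration>
</plugin>
```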
We have also recently switched to generating the manifest file once and then checking it into Perforce. The static manifest means that when we import the modules into Eclipse we can take advantage of the PDE tooling to edit and maintain the manifest. It also allows us to have the modules (bundles) automatically added to the Eclipse target platform, making it easy to run the product and debug it from within the Eclipse IDE.
One downside to these approaches is that there is still duplicated metadata between Maven and the maven-bundle-plugin/OSGi because they do not share a common metadata source. Our practice of relying on bnd to pick up most imports has minimized this to some degree. We expect that future improvements to Maven, PDE, and other OSGi tooling will eliminate the problem entirely.
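For reference, the checked-in static manifest for a module ends up looking roughly like this (names invented), and this is exactly the file the PDE manifest editor operates on:

```
Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.inventory
Bundle-Version: 1.0.0
Export-Package: com.example.inventory
Import-Package: com.example.audit,
 com.example.persistence.jdbc,
 org.osgi.framework
```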
OSGi metadata is challenging to get right the first time, so early on we established the practice of creating OSGi integration tests for each bundle that was produced. The purpose of the tests is to verify that the correct packages are being imported/exported from a bundle, that no implementation classes slip into the export list, and that any services or service references are correctly registered/resolved. One of my colleagues wrote an abstract class that takes away much of the pain of programmatically starting Equinox and automatically loading in our third-party dependencies, so all an individual test author needs to do is essentially reproduce the OSGi metadata/contract in JUnit form (we’re currently using Spring DM’s OSGi test support).
So far the tests have been rather successful at identifying missing dependencies prior to deployment. The only real downside to this approach is the duplication of metadata in the test cases.
- Rely on bnd’s bytecode scanning technique to pick up most package imports instead of explicitly adding them to the maven-bundle-plugin’s configuration.
- Explicitly add packages referenced from XML files (Spring, Hibernate, etc.) and for classes that are dynamically loaded. Hibernate or other frameworks that use cglib/javassist can be particularly difficult to get right if you’re not extremely careful.
- Make sure your modules always have a manifest file with OSGi metadata so you can take advantage of the Eclipse PDE tooling.
- Run your bundles in an environment as similar as possible to your target platform prior to deploying in production so you can ensure that all the necessary dependencies have been specified and are present in the environment.
Tags: eclipse, java, maven, osgi
July 8th, 2009 |
development | 5 Comments
Last week Michael Nygard tweeted about difficulties with Eclipse, Maven, OSGi, and Spring DM. Given that Michael and others have expressed interest in hearing how we’ve been using all of those technologies when developing VMware’s forthcoming vCloud product, I thought I’d try to go through it over the next week or two in a series of blog posts. Today’s post will provide some of the details about our base Maven setup.
Note: I won’t be talking about the specifics of our product, so if you’d like more details please consult the presentation from my colleague Orran Krieger. I’m also not going to touch much on our deployment work; for that see my other colleague Stephen Evanchik’s blog or the Eclipse Integration for Karaf project he started on FUSE Forge.
Project layout and building the product
The product codebase is almost entirely Java and when we first started writing code last year it seemed to make sense to use a tool that understood Java and was able to help us resolve third-party dependencies. At the time we weren’t really aware of Ant+Ivy, so we opted to go with Maven. It was also nice that Maven follows the convention-over-configuration approach which made it very easy to create new modules and quickly get new developers up to speed. One downside at the time was that the Sonatype folks were still working on their Maven book, so we ended up having to figure out a number of Maven best practices on our own, which resulted in some frustration early on until we got over the learning curve. Today we have about 120 modules that are part of the build, and 1-2 dozen additional modules that are not part of the regular process.
We have a single master POM that defines the project defaults (including artifact versions and plug-in configurations). The rest of the modules are organized into subsystems, and each subsystem has its own POM to allow us to build them in isolation if we wish (in some cases there are inter-subsystem dependencies that prevent us from doing so). For the final deliverable we rely on Maven assemblies to collect the appropriate artifacts and package them into a tarball suitable for distribution.
One other important distinction is that we always download artifacts into a local repository that is checked into Perforce (the SCM system we use) and run Maven in offline mode. This allows us to reproduce any build based on the Perforce changelist number and also means the team doesn’t spend all day downloading artifacts just to do a build.
- Place parent POMs in a sibling directory (e.g. ../foo-parent) instead of in the parent directory (../) as Eclipse/m2eclipse seems to handle the nested projects slightly better. We originally ran into problems where Eclipse would start shifting files and output artifacts around when the parent POM wasn’t in its own directory.
- Define variables in the master POM to capture artifact versions. This will allow you to update the value in one place and have the change automatically propagate throughout the system. There is nothing more frustrating than having to search and replace version strings through 120 modules and in different scopes. Multiply that by the number of artifacts that comprise Spring or Spring DM and you’ll soon be begging for a drink.
- Take advantage of the dependencyManagement and pluginManagement elements so artifact versions don’t need to be specified in child POMs.
- Utilize profiles to pull out processes that don’t need to be executed all of the time. We originally generated some JAXB and WSDL stubs every time until we eventually moved those goals into deactivated profiles for the few times they actually needed to be changed. We also started to do this for the MANIFEST.MF files, which I’ll touch on more in a future post.
- Don’t use snapshot versions of artifacts unless absolutely necessary. They make it extremely difficult (if not impossible) to produce repeatable builds. We got in the habit of disabling snapshots in our repository definitions to make sure they didn’t slip in.
- Don’t use version ranges for dependencies if possible; you want to be able to recreate a build exactly without having to guess what version of an artifact was pulled from the repository at the time of the original build. If you’re using an offline repository this is slightly easier, because you have a static snapshot of the repository and can sync to a particular changelist number (referring to Perforce in particular).
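Pulling several of these recommendations together, the master POM might contain something along these lines (the IDs and versions are illustrative, not our actual values):

```xml
<project>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.example</groupId>
    <artifactId>example-parent</artifactId>
    <version>1.0.0</version>
    <packaging>pom</packaging>

    <!-- single place to bump an artifact version -->
    <properties>
        <spring.version>2.5.6</spring.version>
    </properties>

    <!-- children inherit versions without restating them -->
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-core</artifactId>
                <version>${spring.version}</version>
            </dependency>
        </dependencies>
    </dependencyManagement>

    <!-- expensive, rarely-needed steps live in a deactivated profile,
         activated only with: mvn -P generate-stubs ... -->
    <profiles>
        <profile>
            <id>generate-stubs</id>
        </profile>
    </profiles>
</project>
```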
Tags: eclipse, java, maven, osgi, spring-dm
April 11th, 2009 |
In my experience helping with recruiting over the last few years I have come across a couple of tips that many companies haven’t picked up on yet, so I thought I’d share them.
Key point: Campus recruiting is about establishing relationships
If you want your candidate pipeline to be full of talent, you need to establish lasting relationships with the schools that you recruit from. This doesn’t mean conducting information sessions every semester and calling it good. You need to get to know both the students and the faculty so the students know to attend and apply and the faculty know to promote your events and to recommend promising students to you.
A few related tips:
- Turn your interns into evangelists. Get them to talk about their experience with other students and help you connect with potential candidates.
- Establish a campus ambassador program. Either formalize the role of former interns or allow passionate students to get involved. They can help coordinate and promote events on campus, promote the company and its products, and can also provide valuable feedback about how you’re doing.
- Give stuff away. Get students using your company’s products. As the tobacco industry used to say, hook ’em while they’re young. Once they go off into industry they’ll bring their experience with your products with them.
Also, giving away free food during finals week doesn’t hurt either.
March 3rd, 2009 |
good news! | 1 Comment
This past week VMworld Europe took place in Cannes, France. While I didn’t get to attend, it was still pretty exciting to see the product I’ve been working on for the last year and a half featured in both Paul Maritz’s (video) and Steve Herrod’s (video) keynotes. Much of my time in February was spent assisting the two ISVs that demonstrated their use of the vCloud API – IT Structures and EngineYard.
Joe Arnold, the director of engineering for EY, has posted a blog entry about what went into making the demo bulletproof, and both he and Andy Delcambre (also of EY) have posted a couple of sets on Flickr (1, 2).
I had quite a bit of fun helping both partners as they exercised our API, and now you too can sign up to get access to the vCloud API beta when it becomes available (additional info from Mike D).
Update (3/3): As Ophir mentioned in the comments, he’s got some pictures from Cannes on his blog as well.
Tags: cloud, vcloud, vmware, vmworld
March 1st, 2009 |
If you find yourself working in Eclipse and a file you’re working on gets deleted inadvertently, there may be some hope of recovery.
Right-click on the project the file was in and select Restore from Local History. You’ll be presented with a dialog that lists recent file revisions in the project and be given an option to restore them.
Thanks to Stephen for the tip. And yes, I probably do owe him brownies by now.
Note: I scheduled this post on Feb 2 but apparently it never went live. So Stephen did get brownies at one point.