Archive for January, 2009

Git versus Mercurial

Tuesday, January 27th, 2009

A few months ago I adopted Mercurial for a project on which I was working. At that time I was faced with a choice between Mercurial or Git, and Mercurial seemed the right choice because it behaved similarly to Subversion, with which nearly everyone is familiar. Git, however, seemed to be developed by-hackers-for-hackers. Although this often yields the best feature set it does, in my view, tend to lead to confusion (and occasionally architectural omissions).

(That said, I was tempted by Git; I’d closely followed the discussion which led to its development on the Linux kernel mailing list years ago, so I felt ahead of the game!)

Only four months later, and it looks to me like I backed the wrong horse. The list of open source projects using Mercurial is impressive – OpenJDK, Mozilla, OpenSolaris, Symbian OS – but they tend to be corporate efforts, and therefore aren’t likely to be trend-setters. All the cool kids seem to be hanging around the Git block – Linux, Perl, Qt, Ruby on Rails, Wine, Samba, GHC, Scratchbox, VLC – not to mention Android. That sort of momentum among grass-roots-led open source projects is, I reckon, bound to push Git ahead irrespective of the merits of the two.

So what are the merits of the two? From a few hours’ research, most people seem to agree on these facts:

  • Mercurial behaves more like Subversion, in terms of command-line syntax. Some see this as good, some as bad.
  • Git has squillions of commands. Likewise.
  • Git is rubbish on Windows. Mercurial has the excellent TortoiseHg. Efforts to produce an equivalent for Git seem to have stalled.
  • Git has no built-in knowledge of file renames. Most people seem to think this is a disadvantage, but the more enlightened opinions note that it’s actually an advantage: Git automatically notices portions of files that are similar, and can therefore keep track of changes even if a file is split into two. Allegedly. Cool!
  • Git has freakishly good Subversion integration. It appears you can almost ask it to treat a Subversion branch on a server as just another Git branch.
  • The physical structure of Mercurial is possibly more suited to integration with other tools/IDEs/etc., in that there are clear libraries. Tools would need to launch processes to integrate with Git.
  • Both are a bit rubbish with projects composed of many smaller projects, though it sounds like Git’s submodule support is more ‘core’ than the Mercurial ‘forest’ extension designed to support this. I may be wrong.

I’ve also heard two other things, but I can’t verify them: one is that Git has a more coherent branching strategy than Mercurial (there’s no reason why you need to create a new repository for a new branch), and the other is that Git is rubbish with UTF-16 text.

Behind the scenes, the concepts are very similar. The only real difference is that Mercurial works on changesets, which can only build on the relevant prior changeset, whilst Git works in terms of entire trees of files (which, behind the scenes, may be stored as deltas).
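To make that tree-of-files idea concrete: Git addresses every file (a “blob”) by a hash of a small header plus the file’s raw contents, so identical content always gets the same identity regardless of its name. The sketch below (my own illustration, not from either tool’s source) computes the blob ID Git would assign, using nothing beyond the standard JDK.

```java
import java.security.MessageDigest;

public class GitBlobHash {
  // Git hashes "blob <length>\0" followed by the raw bytes; the SHA-1 of
  // that is the object's name. Renames don't change it, hence Git's
  // content-based (rather than rename-based) tracking.
  static String blobHash(byte[] data) throws Exception {
    byte[] header = ("blob " + data.length + "\0").getBytes("UTF-8");
    MessageDigest sha1 = MessageDigest.getInstance("SHA-1");
    sha1.update(header);
    sha1.update(data);
    StringBuilder hex = new StringBuilder();
    for (byte b : sha1.digest()) {
      hex.append(String.format("%02x", b & 0xff));
    }
    return hex.toString();
  }

  public static void main(String[] args) throws Exception {
    // Same value `git hash-object` prints for a file containing "hello\n".
    System.out.println(blobHash("hello\n".getBytes("UTF-8")));
  }
}
```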

So, when I eventually switch from Subversion to a distributed system, I’ll probably be switching to Git. I hope there’s a viable path for progress on the Windows version by then.

Calling Eclipse/OSGi Plug-in APIs from outside OSGi – part three

Tuesday, January 27th, 2009

Previously I tried to explain how it’s possible to call Eclipse APIs from outside Eclipse. It was all a bit painful.

I decided to automate the process as much as possible. I’ve now got a “Run in OSGi” API which enables you to write code like this:

package com.macrobug.testOutside;

import com.macrobug.osgiWrapper.OsgiWrapper;
import com.macrobug.testinside.IResults;
import com.macrobug.testinside.MyAPI;

public class Main {

  public static String resultsHolder;

  // Runs inside OSGi: calls the plug-in's API and records the result.
  public static class MyRunnable implements Runnable {
    public void run() {
      MyAPI myapi = new MyAPI();
      IResults results = myapi.doFirstApi();
      resultsHolder = results.getResults();
    }
  }

  public static void main(String[] args) {
    try {
      // Hand the runnable to the wrapper, which executes it inside OSGi.
      // (The exact OsgiWrapper entry point shown here is illustrative.)
      new OsgiWrapper().run(new MyRunnable());
      System.out.println("Results holder is " + resultsHolder);
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}

Note that MyAPI is an API exposed by a normal Eclipse plug-in, whilst this code is running in a normal Java VM outside of Eclipse.

The code works by creating two new class loaders. Firstly, it fabricates an entire new Eclipse plug-in which depends upon the packages you want. It steals the class loader from that plug-in; this class loader has access to the APIs in those packages. It then creates a new class loader which delegates some requests to that class loader, but also allows access to whatever classes can be loaded via the normal means.

Finally it asks that new class loader to load the runnable, and then runs it.
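The second class loader in that description can be sketched roughly as follows. The names here (DelegatingLoader and so on) are hypothetical, not Macrobug’s actual code; it just shows the shape of the delegation: ordinary classes load via the normal parent loader, and anything the parent can’t find is routed to the plug-in’s loader.

```java
// Hypothetical sketch of the delegating class loader described above.
public class DelegatingLoader extends ClassLoader {
  private final ClassLoader pluginLoader; // stolen from the fabricated plug-in

  public DelegatingLoader(ClassLoader pluginLoader, ClassLoader ordinary) {
    super(ordinary); // parent-first: normal classes resolve the usual way
    this.pluginLoader = pluginLoader;
  }

  @Override
  protected Class<?> findClass(String name) throws ClassNotFoundException {
    // Reached only when the ordinary loader fails, i.e. for classes that
    // live inside Eclipse plug-ins; delegate those to the plug-in's loader.
    return pluginLoader.loadClass(name);
  }
}
```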

The code is here if you’re interested.

But I’m not sure I’ll ever use it, or advocate its use to my customers. It’s still just too horrendous. The point of this whole exercise is to allow my customers, who may have arbitrarily complex Java systems involving their own wacky class loaders (e.g. in a Java web serving environment) to embed my code and use my APIs, even if I use OSGi internally. But this solution won’t work for that: it will only work where the existing system is simple.

So I am still not sure what to do. I really, really, want to use OSGi inside my projects but that makes it virtually impossible to embed them into an existing complex Java world. It’s daft.

Paternity leave and perspective

Monday, January 19th, 2009

Probably the greatest thing about being self-employed (or being a director of one’s own company, no matter how big it is) is the flexibility to take time off when it suits you, as opposed to following the rules put in place by your organisation. So, I’ve taken the whole of January off as paternity leave. It’s great. I feel very sorry for my friends who are also becoming Dads, and have had to go back to work after just two weeks. It can’t be good for them, or their kids, or probably their work either – I know how sleepy I’d be if I had to stick to 9-5 hours.

Anyway, this month off is also the longest period I’ve had away from my Macrobug work – ever. Unsurprisingly, that gives a good sense of perspective about the endeavour.

When I started off, I actually had a fairly solid game-plan. It was more detailed than I ever wrote in this blog, and ultimately would have led to Macrobug being the domain specialist for a particular type of tool, across all platforms on all types of devices. Starting on Symbian OS was a good stepping-stone because the tool in question was more feasible on Symbian OS, and in fact would have been more useful there than on other platforms.

However, problems reared their ugly heads. Serious commercial development on Symbian OS is expensive, ultimately. It involves paying Symbian heaps of money to get a version of Symbian OS with source code (or, in my case, spending equivalent heaps of time trying to negotiate a free version with various conditions attached), as well as paying a lot of money for the compiler etc.

It would be nice to think that with Nokia’s purchase of Symbian, and the resulting spirit of openness, many of these problems would disappear. I do believe they will. But this has been accompanied by a marked reduction in my customer base. Previously I was aiming at Symbian, UIQ, Motorola and Sony Ericsson, and ultimately I hoped to market to Nokia once I’d had experience. Of that list, only Nokia and Sony Ericsson are left (and most of the bits of Sony Ericsson I know have disappeared too). I have no suitable contacts in the other Symbian OS licensees which are sure to pop up. Symbian’s merger with Nokia has also changed parts of their tools roadmap on which I was relying.

The other thing that became clear about my original tools plans was that their benefits are not easy to quantify financially. I can’t definitively say “this tool will save you £100,000 per year”. That’s a serious problem, because even though the tools are really useful and would save heaps of time (and money) for their purchasers, it’s hard to prove that.

So developing the tools that I had originally in mind on Symbian OS looks tricky. In the mean time, then, I’ve been doing other things. I’ve developed a completely different series of tools. They’re very Symbian-specific, which is a shame. But the benefits they produce are financially quantifiable, and I have successfully sold them. In addition, of course, I’ve been doing lots of tools-related contracts and trying to establish Macrobug as a place to come if you have a need for a Symbian-related tool. This has been moderately successful, and actually very enjoyable. (That’s why I’m only now getting this sense of perspective when I take time off). But as everyone knows, doing endless contracts does not equal a career.

The perspective I’ve gained from my month off, then, suggests that I’m at a crossroads. Do I:

  1. Continue doing contracts a lot of the time.
  2. Develop tools which are Symbian-specific but whose benefits are financially quantifiable. Successfully sell those tools – maybe.
  3. Develop tools which correspond to my original plan, and could enable Macrobug to grow to be a cross-platform vendor, dominant in its own small field.
  4. Give up and do something else, probably a full-time job.

I don’t know the answer. I fear that I might have to go for the third option if I have any ambition!

But having kids when self-employed also has another impact: it makes it very appealing to go for the financially safer options – which are any of the others!

Parallels bug fixes

Monday, January 12th, 2009

Just a quick note to say that Parallels has fixed both of the bugs that meant the Symbian emulator wouldn’t run from a shared Mac drive. The first was a problem with file attributes; the second was excessive case-sensitivity in the file system. Top marks to Parallels for fixing them! (Fixed in build 3810.)