News from Macrobugland

September 22nd, 2009

Some of you may have heard that I have taken a full-time job at another company, so Macrobug is proceeding as a background activity. But that’s not all the news!

  • The stray event scanner has morphed (finally!) into a generic static analysis tool. As well as checking for all kinds of active object problems, it can find problems related to Symbian OS kernel handles… and in fact, anything else – it’s fully extensible! (For the techies: all you need to do is define an Eclipse plug-in which does two things. First, you define certain ‘actions’ taken by different function calls; second, you define a ‘device state model’ which is fed every possible sequence of those actions called by the code you’re analysing.)
  • Although it’s terrific, this generic static analysis tool probably isn’t as terrific as some of the existing tools out there. So Macrobug is also hoping to convert its existing analysis engines – for active objects and kernel handles – into checkers for an existing product. Perhaps a commercial one, or perhaps something open-source. If you’re interested, get in touch!
  • Macrobug (the company) might be starting a small extra website on the side soon… something more leisure oriented. Watch this space!

If you’ve been reading since the days I first started the company, thanks! Can I recommend that you take a look at the blog of Transmission Begins, a company run by my good friends John and Morgan?

Hints for posting patches to Android

March 11th, 2009

I’ve had a few (very minor) patches accepted by the Android community. Although they’re fairly explicit about what needs doing, here are a few findings which might grease the wheels for others.

  • When uploading using repo, you do not get an opportunity to input any message explaining the patch. Instead, your git commit message will be used, so get it looking nice! (You can use git commit --amend to tweak it).
  • It appears to be undocumented what happens if you have multiple commits per topic branch. Are they all uploaded? Separately or together? Which commit message is used? (At least, I can’t find it documented anywhere). You can squash git commits together using git rebase -i (amongst other commands). Gerrit appears to have functionality for patches depending on other patches; perhaps that’s what happens? But I’m not brave enough to try.
  • Likewise, I can’t find anywhere explaining what happens if you’ve used repo download to fetch a patch before uploading a new one. Is the downloaded patch marked as a dependency?
  • gerrit appears to assume that your commit messages follow the conventional (largely unwritten) Git format: a short summary line, a blank line, then the detail.
  • I can’t find documented how reviewers for patches get allocated. It seems to be a bit of a black art. I’ve had a few patches sail through, but my very first one – an obvious typo fix in a comment – is still sitting there, presumably awaiting somebody. Who? Who knows.

On the whole, though, the system worked exactly like it said on the tin! I can’t wait to see how easy or hard it is, in practice, to get changes submitted to the Symbian Foundation…

Git versus Mercurial

January 27th, 2009

A few months ago I adopted Mercurial for a project on which I was working. At that time I was faced with a choice between Mercurial or Git, and Mercurial seemed the right choice because it behaved similarly to Subversion, with which nearly everyone is familiar. Git, however, seemed to be developed by-hackers-for-hackers. Although this often yields the best feature set it does, in my view, tend to lead to confusion (and occasionally architectural omissions).

(That said, I was tempted by Git; I’d closely followed the discussion which led to its development on the Linux kernel mailing list years ago, so I felt ahead of the game!)

Only four months later, and it looks to me like I backed the wrong horse. The list of open source projects using Mercurial is impressive – OpenJDK, Mozilla, OpenSolaris, Symbian OS – but they tend to be corporate efforts, and therefore aren’t likely to be trend-setters. All the cool kids seem to be hanging around the Git block – Linux, Perl, Qt, Ruby on Rails, Wine, Samba, GHC, Scratchbox and VLC, not to mention Android. That sort of momentum among grass-roots open source projects is, I reckon, bound to push Git ahead irrespective of the merits of the two.

So what are the merits of the two? From a few hours’ research, most people seem to agree on these facts:

  • Mercurial behaves more like Subversion in terms of command-line syntax. Some see this as good, some as bad.
  • Git has squillions of commands. Likewise.
  • Git is rubbish on Windows. Mercurial has the excellent TortoiseHg; efforts to produce an equivalent for Git seem to have stopped.
  • Git has no knowledge of file renames. Most people seem to think this is a disadvantage, but the more enlightened opinions note that this is actually an advantage, because Git will automatically notice portions of files that are similar, and can therefore keep track of changes even if a file is split into two. Allegedly. Cool!
  • Git has freakishly good Subversion integration. It appears you can almost ask it to treat a Subversion branch on a server as just another Git branch.
  • The physical structure of Mercurial is possibly more suited to integration with other tools/IDEs/etc., in that there are clear libraries. Tools would need to launch processes to integrate with Git.
  • Both are a bit rubbish with projects composed of many smaller projects, though it sounds like Git’s submodule support is more ‘core’ than the Mercurial ‘forest’ extension designed to support this. I may be wrong.
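The tool-integration point above is worth making concrete: because Git exposes no library, a tool or IDE has to launch a git process and parse its output. Here is a minimal Java sketch of what that looks like (class and method names are mine, and it assumes a git binary on the PATH):

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.InputStreamReader;

public class GitIntegration {

  // Launch a git subcommand in the given directory and capture its output --
  // the kind of process-spawning a tool must do to integrate with Git.
  public static String runGit(File workingDir, String... gitArgs) throws Exception {
    String[] cmd = new String[gitArgs.length + 1];
    cmd[0] = "git";
    System.arraycopy(gitArgs, 0, cmd, 1, gitArgs.length);
    Process p = new ProcessBuilder(cmd)
        .directory(workingDir)
        .redirectErrorStream(true) // fold stderr into the same stream
        .start();
    StringBuilder out = new StringBuilder();
    try (BufferedReader r = new BufferedReader(
        new InputStreamReader(p.getInputStream()))) {
      String line;
      while ((line = r.readLine()) != null) {
        out.append(line).append('\n');
      }
    }
    p.waitFor();
    return out.toString();
  }

  public static void main(String[] args) throws Exception {
    System.out.print(runGit(new File("."), "--version"));
  }
}
```

A Mercurial-based tool, by contrast, can import the hg libraries directly into the same Python process rather than shelling out like this.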

I’ve also heard two other claims, though I can’t verify them: one is that Git has a more coherent branching model than Mercurial (there’s no need to create a new repository for each new branch), and the other is that Git is rubbish with UTF-16 text.

Behind the scenes, the concepts are very similar. The only real difference is that Mercurial works on changesets, which can only build on the relevant prior changeset, whilst Git works in terms of entire trees of files (which, behind the scenes, may be stored as deltas).

So, when I eventually switch from Subversion to a distributed system, I’ll probably be switching to Git. I hope there’s a viable path for progress on the Windows version by then.

Calling Eclipse/OSGi Plug-in APIs from outside OSGi – part three

January 27th, 2009

Previously I have tried to explain how it’s possible to call Eclipse APIs from outside Eclipse. It was all a bit painful.

I decided to automate the process as much as possible. I’ve now got a “Run in OSGI” API which enables you to write code like this:

package com.macrobug.testOutside;

import com.macrobug.osgiWrapper.OsgiWrapper;
import com.macrobug.testinside.IResults;
import com.macrobug.testinside.MyAPI;

public class Main {

  public static String resultsHolder;

  public static void main(String[] args) {
    try {
      // Hand the runnable to the wrapper, which runs it inside OSGi.
      // (The exact OsgiWrapper call is abbreviated in this snippet.)
      OsgiWrapper.run(new MyRunnable());
      System.out.println("Results holder is " + resultsHolder);
    } catch (Exception e) {
      e.printStackTrace();
    }
  }

  public static class MyRunnable implements Runnable {
    public void run() {
      MyAPI myapi = new MyAPI();
      IResults results = myapi.doFirstApi();
      resultsHolder = results.getResults();
    }
  }
}

Note that MyAPI is an API exposed by a normal Eclipse plug-in, whilst this code is running in a normal Java VM outside of Eclipse.

The code works by creating two new class loaders. Firstly, it fabricates an entire new Eclipse plug-in which depends upon the packages you want. It steals the class loader from that plug-in; this class loader has access to the APIs in those packages. It then creates a new class loader which delegates some requests to that class loader, but also allows access to whatever classes can be loaded via the normal means.

Finally it asks that new class loader to load the runnable, and then runs it.
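The delegation step described above can be sketched as a small class loader. The names here are invented for illustration – the real wrapper code is more involved – but the shape is this: requests for the plug-in’s packages go to the ‘donor’ loader stolen from the fabricated plug-in, and everything else falls through to normal loading.

```java
// Illustrative sketch of the delegating class loader described above.
// 'donor' is the loader stolen from the fabricated Eclipse plug-in.
public class DelegatingLoader extends ClassLoader {
  private final ClassLoader donor;
  private final String delegatedPrefix;

  public DelegatingLoader(ClassLoader donor, String delegatedPrefix) {
    super(DelegatingLoader.class.getClassLoader()); // normal fallback loader
    this.donor = donor;
    this.delegatedPrefix = delegatedPrefix;
  }

  @Override
  public Class<?> loadClass(String name) throws ClassNotFoundException {
    // Classes in the plug-in's packages come from the donor loader;
    // everything else is loaded via the normal means.
    if (name.startsWith(delegatedPrefix)) {
      return donor.loadClass(name);
    }
    return super.loadClass(name);
  }
}
```

Constructed with a plug-in’s loader as the donor, an instance can load both the plug-in’s API classes and everything on the ordinary class path.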

The code is here if you’re interested.

But I’m not sure I’ll ever use it, or advocate its use to my customers. It’s still just too horrendous. The point of this whole exercise is to allow my customers, who may have arbitrarily complex Java systems involving their own wacky class loaders (e.g. in a Java web serving environment) to embed my code and use my APIs, even if I use OSGi internally. But this solution won’t work for that: it will only work where the existing system is simple.

So I am still not sure what to do. I really, really, want to use OSGi inside my projects but that makes it virtually impossible to embed them into an existing complex Java world. It’s daft.

Paternity leave and perspective

January 19th, 2009

Probably the greatest thing about being self-employed (or being a director of one’s own company, no matter how big it is) is the flexibility to take time off when it suits you, as opposed to the rules put in place by your organisation. So, I’ve taken the whole of January off as paternity leave. It’s great. I feel very sorry for my friends who are also becoming Dads, and have had to go back to work after just two weeks. It can’t be good for them, or their kids, or probably their work either – I know how sleepy I’d be if I had to stick to 9-5 hours.

Anyway, this month off is also the longest period I’ve had away from my Macrobug work – ever. Unsurprisingly, that gives a good sense of perspective about the endeavour.

When I started off, I actually had a fairly solid game-plan. It was more detailed than I ever wrote in this blog, and ultimately would have led to Macrobug being the domain specialist for a particular type of tool, across all platforms on all types of devices. Starting on Symbian OS was a good stepping-stone because the tool in question was more easily achievable on Symbian OS, and in fact would have been most useful there compared with other platforms.

However, problems reared their ugly heads. Serious commercial development on Symbian OS is expensive, ultimately. It involves paying Symbian heaps of money to get a version of Symbian OS with source code (or, in my case, spending equivalent heaps of time trying to negotiate a free version with various conditions attached), as well as paying a lot of money for the compiler etc.

It would be nice to think that with Nokia’s purchase of Symbian, and the resulting spirit of openness, many of these problems would disappear. I do believe they will. But this has been accompanied by a marked reduction in my customer base. Previously I was aiming at Symbian, UIQ, Motorola, Sony Ericsson and ultimately I hoped to market to Nokia once I’d had experience. Of that list, only Nokia and Sony Ericsson are left (and most of the bits of Sony Ericsson I know have disappeared too). I have no suitable contacts in the other Symbian OS licensees which are sure to pop up. Symbian’s merge with Nokia has also changed parts of their tools roadmap on which I was relying.

The other thing that became clear about my original tools plans was that their benefits are not easy to quantify financially. I can’t definitively say “this tool will save you £100,000 per year”. That’s a serious problem, because even though the tools are really useful and would save heaps of time (and money) for their purchasers, it’s hard to prove that.

So developing the tools that I had originally in mind on Symbian OS looks tricky. In the mean time, then, I’ve been doing other things. I’ve developed a completely different series of tools. They’re very Symbian-specific, which is a shame. But the benefits they produce are financially quantifiable, and I have successfully sold them. In addition, of course, I’ve been doing lots of tools-related contracts and trying to establish Macrobug as a place to come if you have a need for a Symbian-related tool. This has been moderately successful, and actually very enjoyable. (That’s why I’m only now getting this sense of perspective when I take time off). But as everyone knows, doing endless contracts does not equal a career.

The perspective I’ve gained from my month off, then, suggests that I’m at a crossroads. Do I:

  1. Continue doing contracts a lot of the time.
  2. Develop tools which are Symbian-specific but whose benefits are financially quantifiable. Successfully sell those tools – maybe.
  3. Develop tools which correspond to my original plan, and could enable Macrobug to grow to be a cross-platform vendor, dominant in its own small field.
  4. Give up and do something else, probably a full-time job.

I don’t know the answer. I fear that I might have to go for the third option if I have any ambition!

But having kids when self-employed also has another impact: it makes it very appealing to go for the financially safer options – which are any of the others!

Parallels bug fixes

January 12th, 2009

Just a quick note to say that Parallels has fixed both of the bugs that meant the Symbian emulator wouldn’t run from a shared Mac drive. The first was a problem with file attributes; the second was excessive case-sensitivity of the file system. Top marks to Parallels for fixing them! (Fixed in build 3810).


December 11th, 2008

The smartphone software market has been busy lately. Android comes along; Nokia open-sources Symbian OS; the iPhone becomes wildly popular.

Ten years ago, Symbian was formed. It was the “open” alternative. Now, people look at Symbian and – until the Foundation announcement – ask what on earth is open about Symbian? Surely it’s more closed than Android etc.? I get asked these questions all the time.

Ten years ago, things were easy. The alternatives were closed platforms built by the phone manufacturers themselves: NOS, OSE etc. They had no APIs for developers (well, Java on phones was just beginning to take off) and they were entirely closed. Symbian was open in two key ways: firstly, it had APIs which third parties could use to add to the device software; and secondly, all the source code was available to the licensees, and the development costs could therefore be shared. That was radical ten years ago in the mobile marketplace!

Today, the industry has moved on. “Open” today doesn’t mean just sharing source code between a few large companies – it means publishing it on the web and letting anyone change it. But we’re going to see the same battle fought on exactly the same things: the openness of the APIs, and the degree to which the code is open source. Again, the Symbian Foundation is pushing the envelope. On both counts, it is in principle more ‘open’ than any of the alternatives.

Allow me to explain by way of a diagram.

Ade’s openness graph

Some caveats for nit-pickers.

  1. Symbian (pre-foundation) had different API classifications – available to all, partners only, or nobody. Even if you were using just the ‘available to all’ category, it still had more APIs than Android (where you can’t run native code on real devices) or the iPhone (where you can’t even run a background task). When the foundation becomes active, presumably all the ‘partners only’ APIs will become available to everyone, which will wipe the floor with the competitors.
  2. No, you really can’t run native code on Android devices. Yet.
  3. No, you really can’t run background processes on an iPhone.
  4. I have no idea about LiMo and friends. I’m assuming they fit the mould of Motorola’s Linux Java platform, where the kernel is GPLed but the only APIs available are Java. This may be a gross disservice to LiMo, as I think it is intended to have native APIs. But to be honest, I don’t count it as one of the major players at the moment.

What does this tell us?

For one thing, Android is substantially less open than Symbian OS on both counts. This may change, of course. But right now, there’s nothing to stop handset makers taking the Android code, and altering it willy-nilly to create purely proprietary software. Furthermore, you, as a third party developer, can’t run native code. So you can’t port your existing software. You can’t talk to hardware, beyond the ways that Google gives you. You certainly can’t run emulators or different execution environments such as Python.

Even before the Foundation move, it could be argued that the Symbian APIs were more open than the Android ones. Symbian and Nokia have jumped through a lot of hoops to produce a reasonable POSIX compatibility layer which enables lots of existing software to work on Symbian devices, relatively unchanged.

So: Symbian currently looks like the most open platform. Sadly, Symbian has some major practical issues to sort out which prevent the platform from appearing open. In addition, the APIs are certainly hard to develop against (but since Symbian is the only platform that allows different runtime environments such as Python, it could be argued that doesn’t matter in the long term).

Finally, credit where it’s due: Android and Google really forced Nokia to make this change. Full credit to Nokia, though, for such a bold step.

(All trademarks and logos are owned by their respective owners).

Running OSGI/Eclipse plug-ins from within a normal Java application – Part 2 of 2

November 24th, 2008

In Part One I explained how to start up a whole Eclipse/OSGi plug-in system from a normal Java application. This worked fine, except there was no way to exchange data between the two.

I’ve finally figured out a way. (It was more difficult than I expected!) So here is Part Two, where I describe how.

First of all, let’s talk about what is not possible.

  • It’s not possible to allow external code (the ‘testExternal’ project from the last article) to access classes within any existing plug-in in the embedded Eclipse. At least, not without using reflection. (See this bug).
  • It’s not possible to allow the plug-ins to load code from JARs and libraries outside of Eclipse. (Exporting the Eclipse product fails).

So what are we left with?

Fortunately, there’s a way of creating a Third Kind of thing. This is called a ‘framework extension bundle’ (I think). This is a type of Eclipse plug-in which is an extension to the OSGi framework. The various class loaders running in the Eclipse world always check with any such extensions to see if a class can be loaded through them. So, the classes in such plug-ins are globally available to all other Eclipse plug-ins. Better still, Eclipse doesn’t attempt to load the classes in this plug-in itself: it assumes the bundle will have been loaded as part of the process of loading Eclipse. That means you can load it using the classpath mechanism, which means it’s accessible to your code outside Eclipse as well as your code inside it.

Specifically, here’s what you need to do.

  1. As before, organise your code into internal code which will run inside Eclipse, and external code which will run outside. But this time, you need a third thing: crossover code which will provide the data structures that need to be accessible from the internal and external code.
  2. Create a new Eclipse Plug-in Project for this crossover code.
  3. Create classes, etc. inside that plug-in.
  4. Now the clever bit: in the MANIFEST.MF for that plug-in, add the following line:
    Fragment-Host: system.bundle; extension:=framework
    That means that the classes in this plug-in are regarded as part of the system – just like java.lang.String and similar. They will be made available to all plug-ins within the whole Eclipse system. Furthermore, they will be loaded using the standard Java class loader and the normal class path, which means they’re also accessible outside Eclipse.
  5. In your ‘internal’ plug-in (which as you’ll remember was an Eclipse IApplication) you can now access these data structures, fill in details, and return them. Note that you do not need to specify your ‘crossover’ plug-in as a dependency of your internal plug-in: it’s always available.
  6. Specifically your IApplication might look like this:

    public class MyInternal implements IApplication {

      public Object start(IApplicationContext context) throws Exception {
        IThingy foo = null; // retrieve from other plug-ins etc.
        ResultsHolder rh = new ResultsHolder();
        rh.setThingy(foo); // store the result in the crossover data structure
                           // (field/setter name assumed; the original is garbled here)
        return rh;
      }

      public void stop() {
      }
    }

  7. In your ‘external’ project, add the ‘crossover’ project to the build path.
  8. In your ‘external’ code, take the value returned from EclipseStarter.run() (as described in Part One) and cast it to ResultsHolder. You can now access its fields, call its methods, etc.

Now, when you run your external system, ensure you add the ‘crossover’ plug-in to the class path. Bob’s your uncle.

Obviously it’s a right pain to have to ‘trans-ship’ information from the data structures used within Eclipse into other classes which you can access externally. But right now, that seems unavoidable.

Running OSGI/Eclipse plug-ins from within a normal Java application – Part 1 of 2

November 13th, 2008

Eclipse is a great plug-in system for Java. But superficially, it appears a bit like some allege the GPL to be – a virus! It appears that, if any of your code is an Eclipse plug-in, the whole Java system has to be.

Not true, it turns out! You can embed a whole Eclipse system, with all sorts of plug-ins, in an existing Java interpreter process. But frankly, it’s a fiddle. This note explains how. A subsequent posting will explain how you can exchange information between the Eclipse and non-Eclipse parts of the process.

But first of all, why? If you’ve got some code in Eclipse plug-ins that you wish to run from an existing Java system, your options are either to spawn a new process to run it, or to follow this recipe and run the plug-ins inside your existing Java interpreter. Why would you want to do the latter?

  • No overhead of starting a new process
  • No overhead of recompiling the class file ‘hotspots’ to native code each time
  • No overhead of running the ‘static’ code to construct static data members
  • Easier exchange of information – freely pass Java classes back and forth

Obviously, there’s not usually any point in spawning a whole Eclipse GUI from some external Java code. But there are many libraries and whole systems written as Eclipse plug-ins, making use of the powerful OSGi plug-in system at its core. Using those is often something that an external Java program would want to do.

Enough of the ‘why’. What about the how?

  1. Work out exactly how you need to interact with your Eclipse system. You’re going to be running some code inside Eclipse, which will need to perform all the interactions with your external code. Your ‘internal’ code has full access to all the Eclipse plug-in APIs. Ideally, the information exchange between the ‘internal’ and ‘external’ code should be minimal, since it’s tricky.
  2. Create a new Eclipse plug-in project. This will house your ‘internal’ code. Let’s assume we’re going to call it ‘testInternal’.
  3. Edit the MANIFEST.MF so that testInternal depends upon the Eclipse plug-in(s) whose APIs you wish to use, and also on org.eclipse.core.runtime.
  4. Under Extensions, add an extension to the extension point org.eclipse.core.runtime.applications. You’ll need to specify a ‘run’ element pointing at a class which implements IApplication. Go ahead and create that class. Meanwhile, the relevant part of your plugin.xml should look something like this (the id and class names are whatever you chose):
      <extension id="testInternal" point="org.eclipse.core.runtime.applications">
         <application>
            <run class="testInternal.MyApplication"/>
         </application>
      </extension>
  5. In your testInternal class that you just created, fill in the code you want to run into your start method. It’s a good idea, for now, to put some statements that will show output indicating whether this code has run. For example:
      public Object start(IApplicationContext context) throws Exception {
        System.out.println("Got into start");
        String[] args = (String[]) context.getArguments().get(
            IApplicationContext.APPLICATION_ARGS);
        // Interact with Eclipse plug-ins as much as you like
        System.out.println("Exited start");
        return EXIT_OK;
      }
  6. Note that in the above code, we’re not really attempting any information exchange with the world outside of Eclipse plug-ins. If you happen to want to exchange Strings, you’re fine: you can read the arguments using the code shown. You can return any type of object at all, but as we’ll see later there are problems related to class loaders, so in practice you will need to return one of the core Java objects – a String, a Long and so on – which are loaded by the system class loader and hence shared.
  7. In this project, create a new Product Configuration. (File – New – Other – Plugin Development – Product Configuration).
  8. This Product Configuration should include all the plug-ins that you need. That includes your testInternal plug-in, but also all those on which it depends. (The “Select Required” button is helpful here). Make sure you specify your newly created ‘application’ extension as the one to launch.
  9. Export this Product Configuration. File – Export – Eclipse Product – Directory.
  10. You’ve now got a standalone Eclipse installation which will run your ‘application’ code and call into the plug-ins it needs to call into. It’s worth testing this. Simply run the Eclipse executable that’s been created, and you should see the results of the print statements in your code.
  11. Now we need to attack the ‘external’ code and get it to run the ‘internal’ code from within the same java process. But first, an interlude: here’s how the normal Eclipse launch process works. It is relevant, trust me!
    • The eclipse.exe launcher is complex, but its net effect when launching Eclipse amounts to approximately: java -jar org.eclipse.osgi.XYZ.jar -configuration configuration/ .... (Here, XYZ is just a version number).
    • The java process looks for, and finds, a ‘main’ function (as specified in the manifest inside that jar) within org.eclipse.core.runtime.adaptor.EclipseStarter.
    • The ‘main’ function initialises OSGi and Eclipse, loads plug-ins according to the configuration file at configuration/config.ini and then passes control to the start function of the relevant IApplication.
  12. We need to do the same in our code. Fortunately, org.eclipse.core.runtime.adaptor.EclipseStarter has some static methods we can call, to mimic the action of its main. Here’s the code you need.
      private static final String INSTALL_DIR = "...";

      public static void launchTestInternalFromOutside() {
        System.setProperty("eclipse.application", "testInternal.testInternal");
        System.setProperty("osgi.configuration.area", INSTALL_DIR + "/configuration");
        Object o;
        try {
          System.out.println("About to start Eclipse");
          o = EclipseStarter.run(new String[] {"myParam"}, null);
          if (o != null) {
            System.out.println("Result was " + o);
          } else {
            System.out.println("o was null");
          }
        } catch (Exception e) {
          e.printStackTrace();
        }
      }
  13. Obviously, you also need to add the org.eclipse.osgi.XYZ.jar to the classpath of this code.
  14. INSTALL_DIR is the directory of the Eclipse that you exported.
  15. The various properties are the parameters that Eclipse and OSGi needs to get started. (I couldn’t find these documented anywhere; a bit of debugging revealed what they needed to be, but you may have less luck – fiddle with them).
  16. And that’s it. Run this code, and you should see the whole Eclipse framework started, plug-ins loaded, and your ‘testInternal’ code run all from the same Java process.

So what are the caveats?

  • Data exchange between your ‘internal’ and ‘external’ code is tricky. The reason is that Eclipse plug-ins use their own class loaders (as explained in this excellent article). So, if your internal code returns a com.frangible.Sprocket, and your external code tries to cast that to a com.frangible.Sprocket, you’ll get a ClassCastException. Huh? That’s because there are two separately loaded versions of the Sprocket class, loaded by different class loaders. As far as Java is concerned, they’re separate classes. I haven’t exactly worked out the best way round this, but Part Two of this article will reveal my findings when I do. If I do.
  • That method is slow. In all fairness, it has to load a lot of code. There are other static methods in the same class which should, hopefully, allow you to ‘start’ Eclipse once but then ‘run’ it many times. Presumably, this means all the heavyweight plug-in loading will occur once, and on subsequent runs you’ll be able simply to run your testInternal code without all that overhead. Worth looking into.
  • Something strange is going on upon exit. Control is always returned to my launchTestInternalFromOutside method, but the process doesn’t necessarily actually exit. Perhaps Eclipse has some sort of shutdown hook. I have yet to look into this.
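Incidentally, the ClassCastException caveat above is easy to reproduce without Eclipse at all. This self-contained sketch (names invented for illustration) defines a second copy of a class through its own loader, exactly as plug-in class loaders do:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class CastDemo {
  public static class Sprocket {}

  // Re-defines Sprocket itself instead of delegating to the parent loader,
  // mimicking the behaviour of an Eclipse plug-in class loader.
  static class IsolatingLoader extends ClassLoader {
    @Override
    public Class<?> loadClass(String name) throws ClassNotFoundException {
      if (name.equals("CastDemo$Sprocket")) {
        try (InputStream in = getResourceAsStream("CastDemo$Sprocket.class")) {
          ByteArrayOutputStream out = new ByteArrayOutputStream();
          byte[] buf = new byte[4096];
          for (int n; (n = in.read(buf)) != -1; ) {
            out.write(buf, 0, n);
          }
          byte[] bytes = out.toByteArray();
          return defineClass(name, bytes, 0, bytes.length);
        } catch (IOException e) {
          throw new ClassNotFoundException(name, e);
        }
      }
      return super.loadClass(name);
    }
  }

  public static void main(String[] args) throws Exception {
    Object other = new IsolatingLoader()
        .loadClass("CastDemo$Sprocket")
        .getDeclaredConstructor()
        .newInstance();
    // Same bytes, different loader: as far as the JVM is concerned,
    // this is a different class, so the instanceof check fails.
    System.out.println(other instanceof Sprocket);
  }
}
```

It prints false: the two Sprocket classes contain identical bytes, but having been loaded by different class loaders they are distinct classes to the JVM.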

So, this is a work in progress but it’s a start. Good luck!

Distributed version control systems

October 31st, 2008

One of the projects I’m working on right now involves two developers in different continents.

For various reasons it would be awkward to set up a proper source code repository on a mutually accessible server, so instead I’m experimenting with a distributed version control system: Mercurial. For those who haven’t come across these before, the idea is that each developer has their own complete repository. Into this repository they check in their own changes. They can also ‘pull’ changes from anyone else’s repository.

There are no restrictions on where you can pull changes from, nor the order in which you take them (apart from the inevitable merging pain if you do something illogical).

So far, it’s looking really good! What’s really surprised me is how similar the command-line syntax is to Subversion. If you’re familiar with Subversion, you might want to try playing with Mercurial, as I think distributed systems are probably going to be the way of the future…

PS it will be interesting to see what the Symbian Foundation does for this sort of thing.