Wednesday 1 December 2010

GPS route correlations

I’ve been running for a while – mostly toward a better 10km this year – and have been playing with a few GPS apps while doing it (Buddy Runner for Android was good, but now I have a Garmin 405 running watch which is working out nicely).

However – one thing the software I’ve used hasn’t seemed to do is correlate routes that I run regularly. What I’d like to do is compare progress on the same route; one way of doing this would be to tag each route as I stored it. Another way would be for the software to recognise that it was likely to be the same route and automatically group it with its peers. Bonus points for asking me whether it really was the same route and suggesting a link, like Android does with your contact list.

A usable algorithm to correlate these routes isn’t hard, if we don’t worry about being too accurate: the GPS route data is XML, and we can build a repository of route data quite easily, especially as some helpful types have already trodden that path. After that, finding matching routes goes like this:

  1. Compare new route with each existing route.
  2. For each point on the new route, determine the closest point in the existing route being compared.
  3. Sum these closest distances over every point in the new route.*
  4. If that total is less than some fraction of the existing route’s length, and the two routes’ total distances match within some tolerance, suggest it as a matching route. Or use it to build a sorted list of matching existing routes.

* The final measure calculated depends on the number of points in the route; we can simplify things by assuming that for any given route there’ll be a similar number of points, which may be reasonable if you use the same piece of equipment every time you run. Alternatively we could average over deltas for each original route point.
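
Jumping ahead a little – the Track, Waypoint and GreatCircleDistance pieces are all defined below – that averaged variant might look something like this rough sketch (the helper and its name are mine):

        // Rough sketch (hypothetical helper): average the per-point minimum distances rather
        // than summing them, so the measure doesn't grow with the number of points recorded
        // on the new route.
        public static double AverageClosestDistance(Track newTrack, Track existingTrack)
        {
            double total = 0.0;
            int count = 0;
            foreach (var newPoint in newTrack)
            {
                double min = Double.MaxValue;
                foreach (var existingPoint in existingTrack)
                {
                    min = Math.Min(min, Track.GreatCircleDistance(newPoint, existingPoint));
                }
                total += min;
                count++;
            }
            return count > 0 ? total / count : 0.0;
        }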

So, first of all we need to parse and load the GPX XML. I worked up a static class that can load GPX and create a list of waypoints. I’m exporting the GPX from SportTracks, which means that for each run I have a single track, with a single track segment, with a set of track points. This means the code is a little simpler, at the expense of portability:

        public static Track Load(string gpxFilePath)
        {
            Track track = new Track();

            XNamespace gpx = XNamespace.Get("http://www.topografix.com/GPX/1/1");

            var doc = XDocument.Load(gpxFilePath);

            var tracks = from trk in doc.Descendants(gpx + "trk")
                         select new
                         {
                             Name = (string)trk.Element(gpx + "name"),
                             Points = (
                                  from trackpoint in trk.Descendants(gpx + "trkpt")
                                  select new
                                  {
                                      Latitude = (string)trackpoint.Attribute("lat"),
                                      Longitude = (string)trackpoint.Attribute("lon"),
                                      Elevation = (string)trackpoint.Element(gpx + "ele"),
                                      Time = (string)trackpoint.Element(gpx + "time")
                                  }
                                )
                         };

            string name = (string)tracks.First().Name;
            foreach (var point in tracks.First().Points)
            {
                track.Add(new Waypoint
                {
                    Longitude = Convert.ToDouble(point.Longitude),
                    Latitude = Convert.ToDouble(point.Latitude),
                    Elevation = Convert.ToDouble(point.Elevation),
                    Time = DateTime.Parse(point.Time)
                });
            }

            return track;
        }

So from this we can build a list of routes. Next we need to implement the algorithm above … however, we have a slight problem in that we have locations in Longitude and Latitude, and we need a way of working out the distance between two points. We can do that by working out the great circle distance between them with a quick calculation – like this, with the Haversine formula:

        /// <summary>
        /// Calculates the great circle distance between two waypoints in km using the
        /// Haversine formula (e.g. http://www.movable-type.co.uk/scripts/gis-faq-5.1.html)
        /// </summary>
        /// <param name="first">First waypoint</param>
        /// <param name="second">Second waypoint</param>
        /// <returns>Distance in km</returns>
        public static double GreatCircleDistance(Waypoint first, Waypoint second)
        {
            double lon1 = toRadians(first.Longitude);
            double lat1 = toRadians(first.Latitude);
            double lon2 = toRadians(second.Longitude);
            double lat2 = toRadians(second.Latitude);

            double R = 6378.1;
            double dlon = lon2 - lon1;
            double dlat = lat2 - lat1;
            double a = Math.Pow(Math.Sin(dlat / 2.0), 2.0)
                + Math.Cos(lat1) * Math.Cos(lat2) * Math.Pow(Math.Sin(dlon / 2.0), 2.0);
            double c = 2.0 * Math.Asin(Math.Min(1, Math.Sqrt(a)));
            return R * c;
        }

        private static double toRadians(double degrees)
        {
            return (degrees * Math.PI / 180.0);
        }

Now we can implement a method for Track that calculates the total track distance:

        /// <summary>
        /// Returns total distance in km.
        /// </summary>
        public double TotalDistance()
        {
            double distance = 0.0;
            for (int i = 1; i < _points.Count; i++)
            {
                distance += GreatCircleDistance(_points[i - 1], _points[i]);
            }
            return distance;
        }

And we can finally implement the main part of the algorithm above, which, for each point on one route, finds the distance to the closest point on the other and sums those minimums. I set up a list of tracks, and treat the first one as the new track:

            foreach (var track in tracks)
            {
                double totalDiff = 0.0;
                foreach (var newPoint in tracks[0])
                {
                    double min = Double.MaxValue;
                    foreach (var existingPoint in track)
                    {
                        double diff = Track.GreatCircleDistance(newPoint, existingPoint); 
                        if (diff < min)
                        {
                            min = diff;
                        }
                    }
                    totalDiff += min;
                }
                Console.WriteLine(totalDiff);
            }

The first thing I spot is that the total differences are a lot larger for similar routes than I assumed they’d be. For example, I tested with 3 routes – two over the same ~4 mile loop, and one ~13 mile out and back. The total diff between the first and each came back as:

Route 1 distance, km: 6.10723439167184
Route 2 distance, km: 6.42205099887688
Route 3 distance, km: 21.1510356703256
Total diff between route 1 and route 1: 0
Total diff between route 1 and route 2: 2.78080312304104
Total diff between route 1 and route 3: 32150.8929484652

Clearly the difference between route 1 and route 1 should be zero, but I didn’t expect the distance between two “identical” routes – 1 and 2 – to be of similar order to the total route distance. However, it just means that the threshold between “similar” and “different” needs adjustment – or that we just provide a list of possible matches in metric-sorted order.
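
For what it’s worth, that metric-sorted list is cheap to build – something like the sketch below, which uses the averaged measure from earlier (or the raw sum from the loop above) and applies the total-distance tolerance from step 4; the 20% figure is just a placeholder:

            // Sketch: rank existing tracks by the closest-point metric, after discarding
            // any whose total length differs from the new route's by more than a
            // (placeholder) 20% tolerance. Requires System.Linq.
            var newTrack = tracks[0];
            var candidates = tracks
                .Skip(1)
                .Where(t => Math.Abs(t.TotalDistance() - newTrack.TotalDistance())
                            < 0.2 * newTrack.TotalDistance())
                .OrderBy(t => AverageClosestDistance(newTrack, t))
                .ToList();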

Next step – take a look at performance, as there are almost certainly some improvements to be made. After that, maybe create a module for something like SportTracks that hooks in when you enter a new route, compares it with existing routes, and prompts you to link it with an existing group, if you like.

Monday 29 November 2010

Conway’s Game of Life (as an Android live wallpaper)

Having found Conway’s Game of Life to be a useful kata in C#, I thought I’d use it to explore some other technology. I’ve had an HTC Desire Android smartphone for a little while, so thought I’d see how hard it was to port something over.

I was hit by a number of gotchas while writing it – mostly down to using the API for the first time, and to not using Eclipse, coding and debugging with notepad++ and jdb instead. jdb gave me a little trouble, but I soon figured it out (on stackoverflow and here), realising that I had to make a socket connection to the emulator, and not the shared memory connection I seemed to be defaulting to.

I worked up a very simple app by just porting the core Life code I’d created in the kata to Java, and then writing a minimal presenter over the top. The presenter comprises an Activity and a View; the Activity was just responsible for spinning up the backend engine – the View. The View is responsible both for keeping the evolution of the cells going and for displaying the current state. Dirty, but it worked.

Spot the glider ...

In this case, and like many of the examples, there’s a message handler pump that keeps the model going. Messages to the renderer are posted delayed by the desired frame rate.

Converting the app to a live wallpaper sees the Activity transformed into a WallpaperService, which means we need to override the method onCreateEngine – in the case of a live wallpaper a specific Engine class actually drives the underlying model, and presents you with a SurfaceHolder on which to render the wallpaper.

As a result, LifeEngine was reworked to inherit from Engine and render using a SurfaceHolder rather than a Canvas. The only real wrinkle there is that the Surface doesn’t get wiped between renders, so you need to wipe it yourself.

I wanted to pop up a dialog or other window when the user tapped the screen – that turned out to be not as straightforward as I’d assumed from a wallpaper. Handling the tap is easy enough, but bringing up a dialog less so.

Currently you can download the wallpaper here on the market, and the project should be up soon.

Tuesday 23 November 2010

Conway's Game of Life

Conway’s Game of Life must be one of the most continuously re-implemented algorithms in the history of software development. In case you’ve not seen it, the Game of Life is played out with cellular automata – cells that are switched on or off by a set of rules. The rule set is applied to all the cells in one step, repeatedly creating a new state, or generation.

A cellular automaton implementation shares much with other problems often solved with stepwise and parallelizable modelling – such as some approaches to quantum electrodynamics, fluid mechanics and other physical models – so I figured it was as good a motif as any to explore some new technologies, and maybe work up some kata along the way.

The first step was to build up some core library code that implements the Game of Life. For a first implementation we need a grid of cells on which to play out the game, and a rule set that transforms the grid from one state to another.

This immediately sounded like a good basis for a relatively quick kata. There are a couple of useful principles involved: practically, it’s immediately amenable to unit testing, as we explicitly define the state we expect from any transformation; and the outline of the algorithm – applying a rule set to a whole state to transform it into a new state – is a useful one to remember.

So, to outline the kata, we need:

  1. A 2d grid that reports whether a cell at x,y is on (alive); allows us to set on/off state at x,y; allows us to clear (all off) and reset (random on/off); and perhaps display a simple representation of the grid to stdout.
  2. A rule set that transforms a 2d grid and returns a new 2d grid with some rule set applied.
  3. Unit tests should express the rules in terms of expected start and end grid states.

The rules – well they’re written up in many places, but to summarise:

  1. Cells with fewer than 2 live neighbours die.
  2. Live cells with 2 live neighbours stay alive.
  3. Cells with 3 live neighbours stay alive if alive, or come to life if dead.
  4. Cells with 4 or more live neighbours die.
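
To make the transformation concrete, here’s a minimal sketch of a rule set applying those rules. BasicRuleSet and Transform appear in the tests below; the Width and Height properties on ILifeGrid and the CountLiveNeighbours helper are my own assumptions, and cells off the edge of the grid are simply treated as dead.

    public class BasicRuleSet
    {
        // Sketch: build the next generation on a fresh grid so the rules are applied
        // to the whole current state in one step.
        public ILifeGrid Transform(ILifeGrid current)
        {
            var next = new StubLifeGrid(current.Width, current.Height);
            for (int x = 0; x < current.Width; x++)
            {
                for (int y = 0; y < current.Height; y++)
                {
                    int neighbours = CountLiveNeighbours(current, x, y);
                    // Survival with 2 or 3 live neighbours, birth with exactly 3;
                    // everything else dies or stays dead.
                    bool alive = current.AliveAt(x, y)
                        ? (neighbours == 2 || neighbours == 3)
                        : (neighbours == 3);
                    next.Set(x, y, alive);
                }
            }
            return next;
        }

        // Hypothetical helper: count the live cells in the (up to) eight surrounding
        // positions, treating anything outside the grid as dead.
        private static int CountLiveNeighbours(ILifeGrid grid, int x, int y)
        {
            int count = 0;
            for (int dx = -1; dx <= 1; dx++)
            {
                for (int dy = -1; dy <= 1; dy++)
                {
                    if (dx == 0 && dy == 0) continue;
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= grid.Width || ny >= grid.Height) continue;
                    if (grid.AliveAt(nx, ny)) count++;
                }
            }
            return count;
        }
    }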

The last time I went through this kata my set of C# NUnit tests looked like this – but I’d say the idea is to start from scratch each time:

    [TestFixture]
    public class CoreTests
    {
        [Test]
        [ExpectedException("System.ArgumentOutOfRangeException")]
        public void StubGridDoesntTakeNegativeWidth()
        {
            var g = new StubLifeGrid(-1, 1);
        }

        [Test]
        [ExpectedException("System.ArgumentOutOfRangeException")]
        public void StubGridDoesntTakeNegativeHeight()
        {
            var g = new StubLifeGrid(1, -1);
        }

        [Test]
        [ExpectedException("System.ArgumentOutOfRangeException")]
        public void StubGridOutOfRangeWidthRequestFails()
        {
            var g = new StubLifeGrid(2, 2);
            g.AliveAt(3, 1);
        }

        [Test]
        [ExpectedException("System.ArgumentOutOfRangeException")]
        public void StubGridOutOfRangeHeightRequestFails()
        {
            var g = new StubLifeGrid(2, 2);
            g.AliveAt(1, 3);
        }

        [Test]
        [ExpectedException("System.ArgumentOutOfRangeException")]
        public void StubGridSetOutOfRangeWidthFails()
        {
            var g = new StubLifeGrid(2, 2);
            g.Set(3, 1, true);
        }

        [Test]
        [ExpectedException("System.ArgumentOutOfRangeException")]
        public void StubGridSetOutOfRangeHeightFails()
        {
            var g = new StubLifeGrid(2, 2);
            g.Set(1, 3, true);
        }

        [Test]
        public void DiesWithNoNeighbours()
        {
            ILifeGrid g = new StubLifeGrid(3, 3);
            // Starts alive
            g.Set(1, 1, true);
            
            var r = new BasicRuleSet();
            g = r.Transform(g);

            Assert.IsFalse(g.AliveAt(1, 1));

            // Starts dead
            g = new StubLifeGrid(3, 3);
            g.Set(1, 1, false);

            g = r.Transform(g);

            Assert.IsFalse(g.AliveAt(1, 1));
        }

        [Test]
        public void DiesWithOneNeighbour()
        {
            ILifeGrid g = new StubLifeGrid(3, 3);
            // Starts alive
            g.Set(1, 1, true);
            g.Set(0, 0, true); // create all possible and test

            var r = new BasicRuleSet(); // static, in fact?
            g = r.Transform(g);

            Assert.IsFalse(g.AliveAt(1, 1));

            // Starts dead
            g = new StubLifeGrid(3, 3);
            g.Set(1, 1, false);
            g.Set(0, 0, true);

            g = r.Transform(g);

            Assert.IsFalse(g.AliveAt(1, 1));
        }

        [Test]
        public void StaysAliveWithTwoNeighbours()
        {
            ILifeGrid g = new StubLifeGrid(3, 3);
            g.Set(1, 1, true);
            g.Set(0, 0, true);
            g.Set(0, 1, true);
            g.Display();

            var r = new BasicRuleSet(); // static, in fact?
            g = r.Transform(g);
            g.Display();

            Assert.IsTrue(g.AliveAt(1, 1));

            // starts dead, stays dead
            g = new StubLifeGrid(3, 3);
            g.Set(1, 1, false);
            g.Set(0, 0, true);
            g.Set(0, 1, true);
            g.Display();

            r = new BasicRuleSet(); // static, in fact?
            g = r.Transform(g);
            g.Display();

            Assert.IsFalse(g.AliveAt(1, 1));
        }

        [Test]
        public void BornWithThreeNeighbours()
        {
            ILifeGrid g = new StubLifeGrid(3, 3);
            g.Set(1, 1, false);
            g.Set(0, 0, true);
            g.Set(0, 1, true);
            g.Set(0, 2, true);
            g.Display();

            var r = new BasicRuleSet(); // static, in fact?
            g = r.Transform(g);
            g.Display();

            Assert.IsTrue(g.AliveAt(1, 1));
        }

        [Test]
        public void StaysAliveWithThreeNeighbours()
        {
            ILifeGrid g = new StubLifeGrid(3, 3);
            g.Set(1, 1, true);
            g.Set(0, 0, true);
            g.Set(0, 1, true);
            g.Set(0, 2, true);

            var r = new BasicRuleSet(); // static, in fact?
            g = r.Transform(g);

            Assert.IsTrue(g.AliveAt(1, 1));
        }
    }

Friday 19 November 2010

jdb on Windows: connecting to the android emulator

I’ve been trying to connect jdb to the android emulator for a little while, and have been met repeatedly with:

jdb -sourcepath ./src -attach localhost:8700

java.io.IOException: shmemBase_attach failed: The system cannot find the file specified
        at com.sun.tools.jdi.SharedMemoryTransportService.attach0(Native Method)
        at com.sun.tools.jdi.SharedMemoryTransportService.attach(SharedMemoryTransportService.java:90)
        at com.sun.tools.jdi.GenericAttachingConnector.attach(GenericAttachingConnector.java:98)
        at com.sun.tools.jdi.SharedMemoryAttachingConnector.attach(SharedMemoryAttachingConnector.java:45)
        at com.sun.tools.example.debug.tty.VMConnection.attachTarget(VMConnection.java:358)
        at com.sun.tools.example.debug.tty.VMConnection.open(VMConnection.java:168)
        at com.sun.tools.example.debug.tty.Env.init(Env.java:64)
        at com.sun.tools.example.debug.tty.TTY.main(TTY.java:1010)

Fatal error:
Unable to attach to target VM.

Not so great. So: it appears that jdb on Windows defaults to a shared memory connection with a remote VM. Can we specify a different connector? Turns out we can:

C:\Users\tim\Documents\Android\Life>jdb -sourcepath .\src -connect com.sun.jdi.SocketAttach:hostname=localhost,port=8700

Set uncaught java.lang.Throwable
Set deferred uncaught java.lang.Throwable
Initializing jdb ...
> 

Beforehand you need to do some setup -- for example, see this set of useful details on setting up a non-eclipse debugger. It includes a good tip for setting your initial breakpoint -- create or edit a jdb.ini file in your home directory, with content like:

stop at com.alembic.life.standalone.Life:14

and they'll get loaded and deferred until connection.

Sunday 3 October 2010

empty XmlNodeList and avoiding a null check

Now, why does XmlNode.SelectNodes(string xpath) return null if it doesn’t find anything?

I have some existing code that reads some parameters from XML, and passes some objects and those parameters into a method that filters those objects into two sets – one that is sensitive in some way to those parameters, and one that’s not.

FilterProducts(
    XmlNodeList processes, 
    XmlNodeList processAttributes, 
    out List<Product> sensitiveFolio,
    out List<Product> nonSensitiveFolio);

I’m drawing the list of processes and processAttributes from XML – and in some cases people don’t define any attributes, wanting to just build a folio sensitive to a list of processes. In this case the XmlNodeList returned is null – and when I pass that to FilterProducts it blows up.

Following the null object pattern, then, I figured what I wanted to do was actually pass an empty XmlNodeList rather than a null. FilterProducts could then just pass over the attributes, and work out what was dependent on the processes only. Then I realised that what I really needed to do was dig through FilterProducts and refactor it, because it was doing too much – at least, the usage pattern was now different to the one that originally shaped it.

However, both of these can happen in sequence. I could deliver the simpler change first – using the existing code and an empty list, which wouldn’t affect any other code – then take a look at refactoring FilterProducts, which might affect other parts of the codebase. So, populate processAttributes with a successfully parsed XmlNodeList, or a new EmptyNodeList if that’s null:

XmlNodeList processAttributes = 
    myConfigNode.SelectNodes("/processAttributes") ?? new EmptyNodeList(); 

and then defined EmptyNodeList by subclassing XmlNodeList:

private class EmptyNodeList : XmlNodeList 
{ 
    public override XmlNode Item(int index) 
    {
        throw new NotImplementedException(); 
    }

    public override IEnumerator GetEnumerator() 
    { 
        return new EmptyNodeListEnumerator(); 
    }

    public class EmptyNodeListEnumerator : IEnumerator 
    { 
        public bool MoveNext() 
        { 
            return false; 
        }

        public void Reset() 
        { 
            throw new NotImplementedException(); 
        }

        public object Current 
        { 
            get 
            { 
                throw new NotImplementedException(); 
            } 
        } 
    }

    public override int Count 
    { 
        get 
        { 
            return 0; 
        } 
    }     
}
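
With that in place the change stays contained: any existing iteration over processAttributes inside FilterProducts – something like the sketch below, which is only the shape of the consuming code, not the real internals – simply never runs when no attributes were defined.

// Sketch only: the consuming side, not the real FilterProducts internals.
foreach (XmlNode attribute in processAttributes)
{
    // attribute-sensitive filtering happens here; skipped entirely for an EmptyNodeList
}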

That works fine, delivering some code, and leaving a look at refactoring some code used in multiple places for later.

Monday 20 September 2010

refactoring an enum

Ran into an interesting detour today – had to consider how to make the behaviour of a certain plugin dynamically configurable. Unfortunately the behaviour of the plugin was controlled by an enum – the principal reason for making the value set dynamically configurable was to enable us to add new behaviours without recoding the enum, rebuilding and redeploying.

To start with, clearly there’s been a design decision that values compile-time checking of this configuration, which we’d weaken if we were to actually refactor like this.

public enum ReportType : int
{
 Simple = 1
}

In principle I could do this by refactoring the enum to a class; the only decisions then are how to ensure backward compatibility if we’re not going to refactor all the client code, and how to get at dynamic values.

To ensure backward compatibility I could add either public consts or public properties – I figured the latter more useful, in case I needed actual logic in order to generate numbers for existing enum values.

public class ReportType
{
 public static int Simple { get { return 1; } }
}

Then, to access dynamically configured values I just added a static constructor to populate a lookup table, and a method for clients to use to do the lookup based on a string. The same method could also return the older, statically configured values.

public static class ReportType
{
 private static Dictionary<string, int> _lookup = new Dictionary<string, int>();

 static ReportType()
 {
     _lookup.Add("Simple", 1);
     _lookup.Add("Complicated", 2);     // or configure from somewhere
 }

 public static int Simple { get { return GetByName("Simple"); } }

 public static int GetByName(string reportType)
 {
     if(_lookup.ContainsKey(reportType))
     {
         return _lookup[reportType];
     }
     else
     {
         throw new ArgumentException("Unknown report type requested: " + reportType, "reportType");
     }
 }
}
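
Client code can then mix the old and new styles – roughly:

// The named value is still available much as before (now an int-valued property)...
int simple = ReportType.Simple;

// ...while new, dynamically configured values are reached by name.
int custom = ReportType.GetByName("Complicated");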

One of the questions then is: how do we start handling these new runtime lookup errors? Will the client be prepared for them?

A friend suggested that I stick with the original enum and work up an extension method to extend the behaviour to include dynamically configured values – similarly to the class refactor above. This is extremely tempting – the extension method could determine what the last value in the enum was at runtime, and dynamically configure itself to avoid already-used indexes.

It’s tempting until you remember that extension methods are applied to instances, not types, so I wouldn’t be able to use it.
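
To spell that out, here’s a hypothetical sketch of the suggestion (the names are mine, and it needs System and System.Collections.Generic) – the best you can do is hang the lookup off an existing value, which rather defeats the point.

// Hypothetical sketch: extend the original enum with a lookup that also knows
// about dynamically configured values.
public static class ReportTypeExtensions
{
    private static readonly Dictionary<string, int> _dynamic =
        new Dictionary<string, int> { { "Complicated", 2 } }; // or configure from somewhere

    public static int GetByName(this ReportType ignored, string name)
    {
        ReportType parsed;
        if (Enum.TryParse(name, out parsed))
        {
            return (int)parsed;   // a value the enum already knows about
        }
        return _dynamic[name];    // otherwise fall back to the dynamic table
    }
}

// The catch: an extension method still needs an instance to hang off, so the call becomes
//     int value = ReportType.Simple.GetByName("Complicated");
// and there's no way to write ReportType.GetByName(...) against the type itself.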