Friday, October 31, 2008

A hack too far

I was just looking at the PackUriHelper class, with a view to implementing it in Mono. One of its many methods converts a regular Uri into a 'pack' Uri. For example,
http://main/page.html?query=val#middle
converts to
pack://http:,,main,page.html?query=val#middle/

So I says to myself, this is easy, it's just combining two Uris. Simple!


Uri original = new Uri ("http://main/page.html?query=val#middle");

Uri complete = new Uri ("pack://" + original.OriginalString); // FAILS
Uri complete = new Uri (new Uri ("pack://"), original); // FAILS
Uri complete = new Uri (new Uri ("pack://"), original.OriginalString); // FAILS

How about escaping the second string...

string escaped = Uri.EscapeDataString (original.OriginalString);
Uri complete = new Uri ("pack://" + escaped); // FAILS

So at this stage I've lost all faith in humanity, so I try a basic test just to make sure I'm not insane. I'll try to create a pack Uri object myself, just to prove that it really is possible to parse them.

Uri complete = new Uri ("pack://http:,,main,page.html?query=val#middle/");


You can't do that. While the framework can *generate* that Uri for you, you can't construct one yourself. Funnily enough, they do register a custom parser for the pack scheme; it's just incapable of parsing pack URIs. Don't ask me why.

After several hours and a lot of wasted time I finally came up with this:

Uri complete = new Uri (new Uri ("pack://"), escaped);

This is the *only* way to create a pack Uri: you have to escape the original uri and then call the constructor overload which takes a Uri and a string.

That is one of the stupidest things I've ever seen.

Saturday, October 25, 2008

MonoTorrent 0.60

MonoTorrent 0.60 is being prepared for release. 0.50 was released a mere 3 weeks ago, so why 0.60 already?

  1. There was a big regression in 0.50 with regards to download speeds. Transfers were a *lot* slower than 0.40. To backport this fix would be very complex as there have been a lot of changes since 0.50 was branched.
  2. DHT support is now good enough for me to activate by default. This was my milestone for releasing 0.60.
  3. There have been a number of other important bugs fixed as well (including a critical one with UdpTrackers) which need to be released.
  4. There have been a number of memory/performance optimisations since 0.50 which would be nice to release ;)

So, if you're using MonoTorrent, I urge you to check out the branch (when it is created).

Until I create the branch, check out revision 117059 of monotorrent from svn. If you have any bugs/issues, let me know and I'll try to get a fix in for 0.60.

Monday, October 20, 2008

Walking can be tricky...

(No, it's not just a knee-high stocking with sticky tape, it's a cast)


Saturday, October 11, 2008

Sparsely populated, just the way I like it

The NTFS filesystem has support for sparse files, but this has to be specially enabled when creating a file. I was originally linked to a blog post on the idea, but unfortunately the license pretty much forbids me from using that code.

So I spent most of yesterday evening and this morning googling API documentation and eventually came up with a fairly basic implementation of sparse file support. The only two operations supported are:
  1. Creating a sparse file
  2. Setting the size of the sparse data.
The benefits of this are seen only on the NTFS filesystem, but what it gives you is the ability to write pieces at arbitrary places in the file without having to preallocate up to that point. There's a bit of overhead, but other than that you only use up the space which you've actually written to, i.e. only the pieces which have been downloaded.
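On Windows, the first operation boils down to a single DeviceIoControl call before any data is written; here's a minimal sketch of that part (class and method names are my own, not the actual MonoTorrent code):

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class SparseFiles
{
    const int FSCTL_SET_SPARSE = 0x000900C4;

    [DllImport ("kernel32.dll", SetLastError = true)]
    static extern bool DeviceIoControl (SafeFileHandle handle, int code,
        IntPtr inBuffer, int inSize, IntPtr outBuffer, int outSize,
        out int bytesReturned, IntPtr overlapped);

    // Create a file, mark it sparse, then set its logical length.
    // On NTFS the unwritten regions consume no disk space.
    public static void Create (string path, long length)
    {
        using (FileStream fs = new FileStream (path, FileMode.Create))
        {
            int returned;
            if (!DeviceIoControl (fs.SafeFileHandle, FSCTL_SET_SPARSE,
                IntPtr.Zero, 0, IntPtr.Zero, 0, out returned, IntPtr.Zero))
                throw new IOException ("Could not mark the file as sparse");

            fs.SetLength (length);
        }
    }
}
```

The FSCTL_SET_SPARSE control code has to be issued before writing; after that, seeking to an arbitrary offset and writing a piece only allocates clusters for the region actually touched.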

Finally, if you don't have an NTFS filesystem supporting sparse files, you will not be affected by this. Those of you on HFS+ can never get this support, and for those of you on ext3, I'm unsure whether sparse files need to be enabled at all. I think they're on by default, but if not, any recommendations on setting this up would be appreciated.

Monday, October 06, 2008

MonoTorrent 0.50 - The Good, The Bad, and the seriously awesome

It's release time! Yes, MonoTorrent 0.50 has hit. There have been a lot of changes since the last release, and this time it's more than just under-the-hood fixes. There are several reasons why the new release is so much better than previous releases; I've listed the more important ones below, but first, the packages!

You can grab a precompiled binary suitable for Windows, Mac OS and Linux.
You can grab the sourcecode here as a .tar.gz archive.
There are also packages available on the OpenSuse build service, and of course the tag for the release can be gotten from svn.

Now, to the features and updates.

WebSeeding Support
There is provisional support for Http Web Seeding. This means that when you're hosting a torrent, you can add standard Http servers as 'seeds'. No extra configuration is needed. This is still an experimental feature and still has some corner cases where it doesn't work; all bug reports on this are welcome!

IP Address Banning
You can now ban individual IP addresses or IP address ranges. Block lists from Emule, PeerGuardian and SafePeer are supported out of the box by the built-in parser, and any custom list can easily be loaded so long as you can parse the list into IPAddress objects. Internally the banlist is stored using the extremely efficient RangeCollection written by Aaron Bockover.
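The idea behind range-based banning can be sketched like this (a hypothetical simplification in my own names; the real code uses RangeCollection):

```csharp
using System;
using System.Collections.Generic;
using System.Net;

// Store each banned range as a (start, end) pair of 32-bit values and
// check incoming addresses against the list.
class BanList
{
    readonly List<KeyValuePair<uint, uint>> ranges = new List<KeyValuePair<uint, uint>> ();

    static uint ToUInt32 (IPAddress address)
    {
        byte[] b = address.GetAddressBytes (); // network (big-endian) order
        return (uint) ((b[0] << 24) | (b[1] << 16) | (b[2] << 8) | b[3]);
    }

    public void Ban (IPAddress start, IPAddress end)
    {
        ranges.Add (new KeyValuePair<uint, uint> (ToUInt32 (start), ToUInt32 (end)));
    }

    public bool IsBanned (IPAddress address)
    {
        uint value = ToUInt32 (address);
        foreach (KeyValuePair<uint, uint> range in ranges)
            if (value >= range.Key && value <= range.Value)
                return true;
        return false;
    }
}
```

A real implementation keeps the ranges sorted and merged so a lookup is a binary search rather than the linear scan above.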

Efficient Torrent Streaming
Thanks to the efforts of Karthik Kailash and David Sanghera, we now have a special downloading mode in MonoTorrent which allows you to efficiently stream audio/video. Pseudo-random piece picking is used to ensure you download pieces from a 'high priority' range before anything else. User code can set this high priority range to be the next X bytes of data. When everything in the high priority range is downloaded, standard rarest-first picking is used.

Peer Exchange
uTorrent style Peer Exchange support is supported thanks to the tireless efforts of Olivier Dufour. This extension allows peer information to be passed across a bittorrent connection. In practice this means that if the tracker only gives you 1 peer, you can discover (potentially) hundreds more via peer exchange.

Enhanced compatibility with broken clients

There are still clients out there which transmit corrupted BEncodedDictionary objects. These guys need to read the spec and ensure that their dictionaries' keys are sorted using a binary comparison. In the cases where the order appears not to matter, I've implemented support for ignoring the error. This should reduce the number of clients which are disconnected due to sending corrupt messages, which means higher performance.
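For reference, 'binary comparison' means sorting the keys as raw byte strings, which in C# is an ordinal comparison rather than a culture-aware one. A small illustration (not MonoTorrent code):

```csharp
using System;
using System.Collections.Generic;

static class BencodeKeys
{
    // The bittorrent spec requires dictionary keys to be sorted as raw
    // byte strings, i.e. an ordinal (binary) comparison.
    public static List<string> SortKeys (List<string> keys)
    {
        keys.Sort (StringComparer.Ordinal);
        return keys;
    }

    static void Main ()
    {
        List<string> keys = new List<string> { "pieces", "Name", "length", "announce" };
        SortKeys (keys);
        // Uppercase letters sort before lowercase in a binary comparison,
        // so this prints: Name, announce, length, pieces
        Console.WriteLine (string.Join (", ", keys.ToArray ()));
    }
}
```

A culture-aware sort (the default for strings) can produce a different order, which is exactly the kind of subtle bug that creates 'corrupt' dictionaries.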

Simplified Threading API
The core of MonoTorrent has undergone a complete rewrite. Previously, all the worker threads interacted with the core by taking out locks and then doing their work. This meant that implementing something as trivial as cancelling a pending asynchronous request was actually pretty hard, and that approach was horrendously prone to deadlocking the engine.

Nowadays all the worker threads add a task to the main thread, and the main thread does all the work. "What about performance?" I hear you ask. Well, it performs exactly the same, but it's so much easier to maintain and add new features to.

It also means the engine should be deadlock free, because there are no locks anymore. Nice.
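The pattern can be sketched roughly like this (hypothetical names, not the actual engine code; in this sketch one lock guards the task queue itself, while the engine state it protects needs none):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Worker threads enqueue delegates; a single main loop executes them,
// so engine state is only ever touched from one thread.
class MainLoop
{
    readonly Queue<ThreadStart> tasks = new Queue<ThreadStart> ();
    readonly object sync = new object ();
    bool running = true;

    public void QueueTask (ThreadStart task)
    {
        lock (sync)
        {
            tasks.Enqueue (task);
            Monitor.Pulse (sync);
        }
    }

    public void Stop ()
    {
        // Stopping is itself just another task on the queue
        QueueTask (delegate { running = false; });
    }

    public void Run ()
    {
        while (running)
        {
            ThreadStart task;
            lock (sync)
            {
                while (tasks.Count == 0)
                    Monitor.Wait (sync);
                task = tasks.Dequeue ();
            }
            task (); // all engine work happens here, on the main thread
        }
    }
}
```

Because every mutation is funnelled through one thread, cancelling a pending request becomes trivial: it's just another task in the queue.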

NUnit Tests
As with all big software projects, regressions are bad. A year ago I had virtually no NUnit tests. Nowadays there are over 130 NUnit tests for the engine. While this doesn't even cover half the code in MonoTorrent, each test adds that little bit more certainty that I won't regress.

There are also a bunch of bugfixes here and there, and more big features in the pipeline. As a taster, DHT support is already active and enabled in SVN, should you wish to test it out.

Friday, October 03, 2008

Who loves lambdas?

I finally started a project where I can use all the new fanciness. How would you process a list of names so that each name is printed out 'i' times?

int i = 5;
List<string> names = new List<string> { "Alan" };
// Times() and Apply() are extension methods, not part of the BCL
names.ForEach (n => i.Times ().Apply (j => Console.WriteLine (n)));

All your lambdas are belong to us!

Thursday, October 02, 2008

Webserver for NUnit tests

What's the best way of setting up a simple HTTP file server to be used purely for NUnit tests? What I'm looking for is a way to set up a server in which I can host some files. My NUnit tests will then initiate HTTP requests to 'download' those files.

Ideally, whatever method I choose won't require me to write temporary files to disk. Currently I host an HttpListener and manually fulfil each request by creating an in-memory byte[] and writing that into the response. I'd like to do something like:

webServer.Add("path/where/file/is", CreateFakeByteArray());

Then the server will fulfill requests from this faked byte[].
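One way to get roughly that API is to wrap HttpListener itself; a rough sketch (the class and method names here are hypothetical, just mirroring the wished-for API above):

```csharp
using System;
using System.Collections.Generic;
using System.Net;

// Serves in-memory byte arrays over HTTP; no temporary files on disk.
class FakeWebServer
{
    readonly HttpListener listener = new HttpListener ();
    readonly Dictionary<string, byte[]> files = new Dictionary<string, byte[]> ();

    public FakeWebServer (string prefix)
    {
        listener.Prefixes.Add (prefix); // e.g. "http://localhost:8099/"
    }

    public void Add (string path, byte[] data)
    {
        files[path] = data;
    }

    public void Start ()
    {
        listener.Start ();
        listener.BeginGetContext (OnContext, null);
    }

    public void Stop ()
    {
        listener.Stop ();
    }

    void OnContext (IAsyncResult result)
    {
        if (!listener.IsListening)
            return;
        HttpListenerContext context = listener.EndGetContext (result);
        listener.BeginGetContext (OnContext, null); // accept the next request

        byte[] data;
        if (files.TryGetValue (context.Request.Url.AbsolutePath.TrimStart ('/'), out data))
        {
            context.Response.ContentLength64 = data.Length;
            context.Response.OutputStream.Write (data, 0, data.Length);
        }
        else
        {
            context.Response.StatusCode = 404;
        }
        context.Response.Close ();
    }
}
```

The NUnit fixture can then register a fake byte[] in SetUp, point the code under test at the prefix, and never touch the disk.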

The reason for this setup is that I need a better way to test my implementation of WebSeeding in MonoTorrent without having to code up my own buggy file server ;)
