Monday, December 22, 2008

A workaround!

So there exists *one* way and one way only to safely set the Range header when using 64-bit indices. This method was found with the help of NotZhila on #mono:

MethodInfo method = typeof (WebHeaderCollection).GetMethod (
    "AddWithoutValidate", BindingFlags.Instance | BindingFlags.NonPublic);

HttpWebRequest request = (HttpWebRequest) WebRequest.Create ("");

long start = int.MaxValue;
long end = (long) int.MaxValue + 100000;

string key = "Range";
string val = string.Format ("bytes={0}-{1}", start, end);

method.Invoke (request.Headers, new object[] { key, val });

You need to call the protected method of the WebHeaderCollection class so you can add the "Range" header without being forced to go through the broken HttpWebRequest.AddRange method. I tried about half a dozen more legitimate approaches, but the API is so locked down that none of them worked.

For example, HttpWebRequest.Headers is a get/set property, but if you set a new collection, it checks to make sure "Range" isn't there, and then just copies the keys from your new collection into its internal one, ignoring the collection you just gave it. What this means is that:

CustomCollection c = new CustomCollection ();
request.Headers = c;
Console.WriteLine (request.Headers == c);

prints false.


Anyway, the hack works. If you need 64-bit indices when downloading files via HttpWebRequest, here's your guaranteed working hack, on both Mono and MS.NET.

Saturday, December 20, 2008

System.Net.WebRequest -> System.Net.WebSuck

I can see that throughout the lifetime of the .NET framework, no one realised that files greater than 2GB exist on the web.

System.Net.HttpWebRequest.AddRange (int startOffset, int endOffset)

Thankfully I don't have to rewrite my own class from scratch, I can cannibalise one from Mono and add in a workaround, though that's likely to be a pain in the ass too :(
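Until then, the reflection trick from the workaround post above can be wrapped in an extension method; AddRangeLong is a name I've made up, not part of the framework:

```csharp
using System;
using System.Net;
using System.Reflection;

static class WebRequestExtensions
{
    // Hypothetical helper, not part of .NET: adds a Range header with 64-bit
    // offsets by invoking the protected WebHeaderCollection.AddWithoutValidate.
    public static void AddRangeLong (this HttpWebRequest request, long start, long end)
    {
        MethodInfo method = typeof (WebHeaderCollection).GetMethod (
            "AddWithoutValidate", BindingFlags.Instance | BindingFlags.NonPublic);
        string val = string.Format ("bytes={0}-{1}", start, end);
        method.Invoke (request.Headers, new object[] { "Range", val });
    }
}
```

Usage is then simply request.AddRangeLong (0, (long) int.MaxValue + 100000);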

Wednesday, December 17, 2008

PiecePicking and runtime method overriding

Deciding which chunk of data to download next is a pretty complex task in the bittorrent world. You have to take into account a number of factors, such as:
1) You may want to download the rarest pieces first; then again, maybe not.
2) You may want to pick a piece randomly, or maybe you want to download from the start of the file and get pieces in order.
3) The user may set a higher priority on a certain file, so you should prefer its pieces.
4) The user may want to ignore certain files and never select their pieces.

Then imagine trying to NUnit test all the combinations, and you'll find that it becomes very difficult to ensure the implementation is correct, and impossible to extend with new behaviour, because the interactions become too complex.

The basic premise I had when redesigning this area of code was this: When I want to change the picking behaviour, I want to override one single method and then just slot the picker into the pipeline. So here's a reduced version of the PiecePicker class (3 methods instead of 10) and an example of how it's implemented.

public abstract class PiecePicker {
    PiecePicker picker;

    protected PiecePicker (PiecePicker picker) {
        this.picker = picker;
    }

    void CheckOverriden () {
        if (picker == null)
            throw new InvalidOperationException ("This method must be overridden");
    }

    public virtual void Initialise (BitField bitfield, TorrentFile[] files, IEnumerable<Piece> requests) {
        CheckOverriden ();
        picker.Initialise (bitfield, files, requests);
    }

    public virtual bool IsInteresting (BitField bitfield) {
        CheckOverriden ();
        return picker.IsInteresting (bitfield);
    }

    public virtual MessageBundle PickPiece (PeerId id, BitField peerBitfield, List<PeerId> otherPeers, int startIndex, int endIndex, int count) {
        CheckOverriden ();
        return picker.PickPiece (id, peerBitfield, otherPeers, startIndex, endIndex, count);
    }
}

public class RandomisedPicker : PiecePicker {
    Random random = new Random ();

    public RandomisedPicker (PiecePicker picker)
        : base (picker)
    {
    }

    public override MessageBundle PickPiece (PeerId id, BitField peerBitfield, List<PeerId> otherPeers, int startIndex, int endIndex, int count) {
        MessageBundle message;
        int midpoint = random.Next (startIndex, endIndex);
        if ((message = base.PickPiece (id, peerBitfield, otherPeers, midpoint, endIndex, count)) != null)
            return message;
        return base.PickPiece (id, peerBitfield, otherPeers, startIndex, midpoint, count);
    }
}

Here's a usage example:
// StandardPicker provides an implementation for every method in PiecePicker
PiecePicker p = new RandomisedPicker (new StandardPicker ());

So what happens is this:
1) p.PickPiece is called which results in RandomisedPicker.PickPiece being called
2) This splits the range between startIndex/endIndex into two and then makes two calls to base.PickPiece.
3) base.PickPiece will call standardPicker.PickPiece (because the 'picker' is non-null)
4) standardPicker.PickPiece will see if there are any pieces to request, and choose the first available one

Suppose I wanted to be able to ignore all the pieces which I already own, well, that's easy, I just use an IgnoringPicker (not a great name I'll admit):

public class IgnoringPicker : PiecePicker {
    BitField bitfield;
    BitField temp;

    public IgnoringPicker (BitField bitfield, PiecePicker picker)
        : base (picker)
    {
        this.bitfield = bitfield;
        this.temp = new BitField (bitfield.Length);
    }

    public override MessageBundle PickPiece (PeerId id, BitField peerBitfield, List<PeerId> otherPeers, int startIndex, int endIndex, int count) {
        // In binary operations: temp = peerBitfield & (~bitfield);
        return base.PickPiece (id, temp, otherPeers, startIndex, endIndex, count);
    }

    public override bool IsInteresting (BitField bitfield) {
        // In binary operations: temp = bitfield & (~this.bitfield);
        return base.IsInteresting (temp);
    }
}

Now I just change my usage to:

PiecePicker picker = new StandardPicker ();
picker = new RandomisedPicker (picker);
picker = new IgnoringPicker (myBitfield, picker);

The biggest benefit is that each individual picking logic unit can be very easily tested. They can then be combined in any order you want to change the picking behaviour. Random->Rarest->Standard would be different to Rarest->Random->Standard. The former will give you the rarest piece in a random set, the latter will give you a random piece from the rarest set.

The implementation of a logic unit is also trivial - you only have to override the behaviour you want to change rather than implementing a massive interface where for 90% of the methods you end up calling basePicker.Method anyway. So I'm quite happy with this change. This code already passes all the NUnit tests from the old picker as well as all the new tests I've written, so it should be going live in the near future. There's still some more work to be done before it's complete.

Saturday, November 08, 2008

MonoTorrent 0.62

MonoTorrent 0.62 has now been released, which addresses a few major and minor issues with the 0.60 release. Here are the details:

* Fixed regression in message handling which resulted in 0.60 not transferring properly. Caused by not running the right NUnit tests before tagging.
* Performance optimisation for sending/receiving messages
* Fixed bug creating torrents with 2gb+ files with TorrentCreator
* Fixed issue with udp sockets in the Dht code which could cause dht to stop sending/receiving messages

I'd highly recommend upgrading from 0.60 to 0.62 as soon as possible.

Binary -
Source -

Thursday, November 06, 2008

Are you mapping those dlls?

One of the issues with writing cross platform applications is that if you P/Invoke into a native library, the name of that library changes depending on the OS. Mono has built-in support for selecting the right file at runtime: dllmap entries in the assembly's config file. The problem is that it's hard to ensure that you've correctly mapped every library you P/Invoke into.
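For reference, dllmap entries live in the assembly's .config file; a typical mapping (the library names here are purely illustrative) looks like:

```xml
<configuration>
    <!-- On Linux, redirect P/Invokes against "sqlite3.dll" to the native .so -->
    <dllmap dll="sqlite3.dll" target="libsqlite3.so.0" os="linux" />
    <!-- On Mac OS, redirect to the dylib instead -->
    <dllmap dll="sqlite3.dll" target="libsqlite3.0.dylib" os="osx" />
</configuration>
```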

So I wrote a little tool to do that for you. It was inspired by an attempt by Aaron Bockover which parsed the raw cs files. I figured it'd be much better to just parse the compiled assembly ;)

using System;
using System.Collections.Generic;
using System.Xml;
using Mono.Cecil;
using System.IO;

namespace DllImportVerifier {
    class MainClass {
        static void Main (string [] args) {
            if (args.Length == 0)
                args = new string [] { Environment.CurrentDirectory };

            if (args.Length != 1) {
                Console.WriteLine ("You must pass the path to the assemblies to be processed as the argument");
                return;
            }

            List<string> assemblies = new List<string> ();

            if (Directory.Exists (args [0])) {
                assemblies.AddRange (Directory.GetFiles (args [0], "*.dll"));
                assemblies.AddRange (Directory.GetFiles (args [0], "*.exe"));
            } else if (File.Exists (args [0])) {
                assemblies.Add (args [0]);
            } else {
                Console.WriteLine ("{0} is not a valid file or directory", args [0]);
                return;
            }

            foreach (string assembly in assemblies)
                ProcessConfig (assembly);
        }

        static void ProcessConfig (string assemblyName) {
            AssemblyDefinition assembly = null;
            try {
                assembly = AssemblyFactory.GetAssembly (assemblyName);
            } catch {
                Console.WriteLine ("{0} is not a valid .NET assembly", Path.GetFileName (assemblyName));
                return;
            }

            List<string> dlls = new List<string> ();
            try {
                XmlTextReader doc = new XmlTextReader (assemblyName + ".config");
                while (doc.Read ()) {
                    if (doc.Name != "dllmap")
                        continue;

                    string dll = doc.GetAttribute ("dll");
                    if (!dlls.Contains (dll))
                        dlls.Add (dll);
                }
            } catch {
                // Ignore malformed or invalid config files. If the config is malformed,
                // drop any successfully parsed dll maps
                dlls.Clear ();
            }

            List<string> unreferenced = VerifyReferences (assembly, dlls);
            foreach (string dll in unreferenced)
                Console.WriteLine ("Assembly: '{0}' references '{1}' without mapping it", Path.GetFileName (assemblyName), dll);
        }

        static List<string> VerifyReferences (AssemblyDefinition assembly, List<string> dlls) {
            List<string> unreferenced = new List<string> ();
            foreach (TypeDefinition type in assembly.MainModule.Types) {
                foreach (MethodDefinition method in type.Methods) {
                    if (!method.IsPInvokeImpl)
                        continue;

                    string dll = method.PInvokeInfo.Module.Name;
                    if (!dlls.Contains (dll) && !unreferenced.Contains (dll))
                        unreferenced.Add (dll);
                }
            }
            return unreferenced;
        }
    }
}

Wednesday, November 05, 2008


Did you ever have a case where you have a big dataset which can be processed in parallel, so long as all threads finish Step 1 before any thread starts Step 2? Thing is, there's no built-in class to handle this case.

AutoResetEvent won't do it because it only signals *one* of the waiting threads, not *all* of them. What you need is a manual reset event and some complex handling.

I give you BarrierHandle:

using System;
using System.Threading;

class SpectralNorm {
    public class BarrierHandle : System.Threading.WaitHandle {
        int current;
        int threads;
        ManualResetEvent handle = new ManualResetEvent (false);

        public BarrierHandle (int threads) {
            this.current = threads;
            this.threads = threads;
        }

        public override bool WaitOne () {
            ManualResetEvent h = handle;
            if (Interlocked.Decrement (ref current) > 0) {
                h.WaitOne ();
            } else {
                // Last thread in: reset the barrier, then release everyone else
                handle = new ManualResetEvent (false);
                Interlocked.Exchange (ref current, threads);
                h.Set ();
                h.Close ();
            }
            return true;
        }
    }

    public static void Main (String[] args) {
        int threadCount = 5; // Environment.ProcessorCount;

        // Let's assume that there's 20 units of data for each thread
        int[] dataset = new int [threadCount * 20];

        // This is the handle we use to ensure all threads complete the current step
        // before continuing to the next step
        BarrierHandle barrier = new BarrierHandle (threadCount);

        // Fire up the threads
        for (int i = 0; i < threadCount; i++) {
            int jjj = i; // capture the loop variable before the delegate uses it
            ThreadPool.QueueUserWorkItem (delegate {
                try {
                    Step1 (dataset, jjj * 20, 20);
                    barrier.WaitOne ();
                    Step2 (dataset, jjj * 20, 20);
                    barrier.WaitOne ();
                    Step3 (dataset, jjj * 20, 20);
                } catch (Exception ex) {
                    Console.WriteLine (ex);
                }
            });
        }

        System.Threading.Thread.Sleep (3000);
    }

    private static void Step1 (int[] array, int offset, int count) {
        Console.WriteLine ("Step1: {0} - {1}", offset, offset + count);
        Thread.Sleep (500);
    }

    private static void Step2 (int[] array, int offset, int count) {
        Console.WriteLine ("Step2: {0} - {1}", offset, offset + count);
        Thread.Sleep (500);
    }

    private static void Step3 (int[] array, int offset, int count) {
        Console.WriteLine ("Step3: {0} - {1}", offset, offset + count);
        Thread.Sleep (500);
    }
}

Monday, November 03, 2008

MonoTorrent 0.60

So here are the bugfixes and features for 0.60 as compared to 0.50:

Bug fixes:
* Fixed critical regression in 0.50 whereby transfers would be incredibly slow.
* Fixed issue where announce/scrape requests to offline trackers may never time out
* Added a few memory optimisations when reading piece data from disk.
* Added the ability to report an alternate IP/Port combo to the tracker
* Optimised the encrypted handshake - it's now a bit faster
* Fixed bug where wrong message ID was sent for extension messages
* Fixed bug where pieces which do not exist would be requested from webseeds.
* Fixed several bugs in TorrentCreator class where relative paths were used instead of absolute paths.
* Disabled UdpTracker support as the current implementation locked up until a response was received.
* Fixed a possible issue with multi-tier torrents.
* Added the ability to tell whether peers come from PeerExchange, DHT or the tracker.
* Some big performance boosts for webseeds by using a better method of picking pieces.
* Fixed a corner case whereby certain torrents may not get their last piece requested.
* When hosting a tracker, "/announce" is appended when you use the constructor overload which takes an IPEndPoint.

New Features:
* Implemented DHT support.
* Implemented sparse file support on the NTFS file system.

All in all, an awesome release. I have to give a shout out to Matthew Raymer who has been tirelessly testing MonoTorrent for the last few weeks. Without his constant bug reports, some of those bugs would never have been found, such as the one with UdpTrackers. Great stuff.


There are also packages for openSUSE in the openSUSE Build Service for those of you interested in those.

Friday, October 31, 2008

A hack too far

I was just looking at the PackUriHelper class, with a view to implementing it in Mono. One of its many methods converts a regular Uri into a 'pack' Uri. For example:
converts to

So I says to myself, this is easy, it's just combining two Uris. Simple!


Uri original = new Uri ("");

Uri complete = new Uri ("pack://" + original.OriginalString); // FAILS
Uri complete = new Uri (new Uri ("pack://", original)); //FAILS
Uri complete = new Uri (new Uri ("pack://", original.OriginalString)); // FAILS

How about escaping the second string...
string escaped = Uri.HexEscape (original.OriginalString);
Uri complete = new Uri ("pack://" + escaped); // FAILS

So at this stage I've lost all faith in humanity, so I try a basic test just to make sure I'm not insane. I'll try to create a pack Uri object myself, just to prove that it really is possible to parse them.

Uri complete = new Uri ("pack://http:,,main,page.html?query=val#middle/");


You can't do that. While they can *generate* that Uri for you, you can't generate one for yourself. Funnily enough, they do register a custom parser for the pack scheme, it's just incapable of parsing pack URIs. Don't ask me why.

After several hours and a lot of wasted time I finally came up with this:

Uri complete = new Uri (new Uri ("pack://"), escaped);

This is the *only* way to create a pack URI. You have to hex-escape the original Uri and then call the constructor overload which takes a Uri and a string.

That is one of the stupidest things I've ever seen.

Saturday, October 25, 2008

MonoTorrent 0.60

MonoTorrent 0.60 is being prepared for release. 0.50 was released a mere 3 weeks ago, so why 0.60 already?

  1. There was a big regression in 0.50 with regards to download speeds. Transfers were a *lot* slower than 0.40. To backport this fix would be very complex as there have been a lot of changes since 0.50 was branched.
  2. DHT support is now good enough for me to activate by default. This was my milestone for releasing 0.60.
  3. There have been a number of other important bugs fixed as well (including a critical one with UdpTrackers) which need to be released.
  4. There have been a number of memory/performance optimisations since 0.50 which would be nice to release ;)

So, if you're using MonoTorrent, I urge you to check out the branch (when it is created) from

Until I create the branch, check out revision 117059 of monotorrent from svn. If you have any bugs/issues, let me know and I'll try to get a fix in for 0.60.

Monday, October 20, 2008

Walking can be tricky...

(No, it's not just a knee-high stocking with sticky tape, it's a cast)

Tuesday, October 14, 2008

Saturday, October 11, 2008

Sparsely populated, just the way I like it

The NTFS filesystem has support for sparse files, but this has to be specially enabled when creating a file. I was originally linked to a blog post on the idea, but unfortunately the license pretty much forbids me from using that code.

So I spent most of yesterday evening and this morning googling API documentation and eventually came up with a fairly basic implementation of sparse file support. The only two operations supported are:
  1. Creating a sparse file
  2. Setting the size of the sparse data.
The benefits of this are seen only on the NTFS filesystem, but what it gives you is the ability to write pieces at arbitrary places in the file without having to preallocate up to that point. There's a bit of overhead, but other than that you only use up the space which you've actually written to, i.e. only the pieces which have been downloaded.
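For reference, a sketch of how those two operations can be done on Windows via P/Invoke (the control code and signature come from the Win32 API; error handling is omitted, and this is not necessarily how MonoTorrent implements it):

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

static class SparseFiles
{
    // Win32 control code for FSCTL_SET_SPARSE
    const uint FSCTL_SET_SPARSE = 0x000900C4;

    [DllImport ("kernel32.dll", SetLastError = true)]
    static extern bool DeviceIoControl (SafeFileHandle device, uint ioControlCode,
        IntPtr inBuffer, int inBufferSize, IntPtr outBuffer, int outBufferSize,
        out int bytesReturned, IntPtr overlapped);

    // 1. Create a file and flag it as sparse. On non-NTFS filesystems the
    //    DeviceIoControl call simply fails and the file behaves normally.
    public static FileStream CreateSparse (string path)
    {
        FileStream stream = new FileStream (path, FileMode.Create, FileAccess.ReadWrite);
        int returned;
        DeviceIoControl (stream.SafeFileHandle, FSCTL_SET_SPARSE,
            IntPtr.Zero, 0, IntPtr.Zero, 0, out returned, IntPtr.Zero);
        return stream;
    }

    // 2. Set the size of the sparse data; unwritten regions consume no disk space
    //    on a sparse file, so this does not preallocate anything.
    public static void SetSparseLength (FileStream stream, long length)
    {
        stream.SetLength (length);
    }
}
```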

Finally, if you don't have an NTFS filesystem supporting sparse files, you will not be affected by this. Those of you on HFS+ can never get this support, and those of you on ext3, I'm unsure how to enable sparse files. I think it's enabled by default, but if not, any recommendations on setting this up would be appreciated.

Monday, October 06, 2008

MonoTorrent 0.50 - The Good, The Bad, and the seriously awesome

It's release time! Yes, MonoTorrent 0.50 has hit. There have been a lot of changes since the last release, and this time, it's more than just under the hood fixes. There are several reasons why the new release is so much better than previous releases, I've listed the more important ones below, but first, the packages!

You can grab a precompiled binary suitable for Windows, Mac OS and Linux.
You can grab the sourcecode here as a .tar.gz archive.
There are also packages available on the OpenSuse build service, and of course the tag for the release can be gotten from svn.

Now, to the features and updates.

WebSeeding Support
There is provisional support for Http Web Seeding. This means when you're hosting a torrent, you can add standard Http servers as 'seeds'. No extra configuration is needed. This is still an experimental feature, and still has some corner cases where it doesn't work, all bug reports on this are welcome!

IP Address Banning
You can now ban individual IP addresses or IP address ranges. Block lists from Emule, PeerGuardian and SafePeer are supported out of the box by the built-in parser, and any custom list can easily be loaded so long as you can parse the list into IPAddress objects. Internally the banlist is stored using the extremely efficient RangeCollection written by Aaron Bockover.

Efficient Torrent Streaming
Thanks to the efforts of Karthik Kailash and David Sanghera, we now have a special downloading mode in MonoTorrent which allows you to efficiently stream audio/video. Pseudo-random piece picking is used to ensure you download pieces from a 'high priority' range before anything else. User code can set this 'high priority' range to be the next X bytes of data. When everything in the high priority range is downloaded, standard rarest-first picking is used.

Peer Exchange
uTorrent style Peer Exchange support is supported thanks to the tireless efforts of Olivier Dufour. This extension allows peer information to be passed across a bittorrent connection. In practice this means that if the tracker only gives you 1 peer, you can discover (potentially) hundreds more via peer exchange.

Enhanced compatibility with broken clients

There are still clients out there which transmit corrupted BEncodedDictionary objects. These guys need to read the spec and ensure that their dictionaries' keys are sorted using a binary comparison. In the cases where the order appears to not matter, I've implemented support for ignoring the error. This should reduce the number of clients which are disconnected due to sending corrupt messages - which means higher performance.

Simplified Threading API
The core of MonoTorrent has undergone a complete rewrite. Previously, all the worker threads interacted with the core by taking out locks, then doing their work. This meant that implementing something as trivial as cancelling a pending asynchronous request was actually pretty hard. That method was actually horrendously prone to deadlocking the engine.

Nowadays all the worker threads add a task to the main thread, and the main thread does all the work. "What about the performance?" I hear you ask. Well, it performs exactly the same, but it's so much easier to maintain and add new features to.

It also means the engine should be deadlock free, because there are no locks anymore. Nice.

NUnit Tests
As with all big software projects, regressions are bad. A year ago I had virtually no NUnit tests. Nowadays there are over 130 NUnit tests for the engine. While this doesn't even test 1/2 the code in MonoTorrent, each test adds that little bit more certainty that I don't regress.

There are also a bunch of bugfixes here and there, and more big features in the pipeline. As a taster, DHT support is already active and enabled in SVN should you wish to test it out.

Friday, October 03, 2008

Who loves lambdas?

I finally started a project where I can use all the new fanciness. How would you process a list of names so that each name is printed out 'i' times?

int i = 5;
IEnumerable<string> names = new List<string> { "Alan" };
names.ForEach (n => i.Times ().Apply (j => Console.WriteLine (n)));
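Times and Apply aren't part of the BCL; presumably they're extension methods along these lines (this is my reconstruction, not the original source):

```csharp
using System;
using System.Collections.Generic;

static class FancyExtensions
{
    // Yields 0..count-1, Ruby style
    public static IEnumerable<int> Times (this int count)
    {
        for (int i = 0; i < count; i++)
            yield return i;
    }

    // Runs an action over every element of a sequence
    public static void Apply<T> (this IEnumerable<T> source, Action<T> action)
    {
        foreach (T item in source)
            action (item);
    }

    // ForEach for any IEnumerable<T>, not just List<T>
    public static void ForEach<T> (this IEnumerable<T> source, Action<T> action)
    {
        Apply (source, action);
    }
}
```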

All your lambdas are belong to us!

Thursday, October 02, 2008

Webserver for NUnit tests

What's the best way of setting up a simple HTTP file server to be used purely for NUnit tests? What I'm looking for is a way to set up a server that I can host some files in. My NUnit tests will then initiate HTTP requests to 'download' those files.

Ideally, whatever method I choose won't require me to write temporary files to disk. Currently I host an HttpListener and manually fulfill the requested data by creating an in-memory byte[] and writing that into the response. I'd like to do something like:

webServer.Add("path/where/file/is", CreateFakeByteArray());

Then the server will fulfill requests from this faked byte[].
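Lacking a ready-made library, that API can be sketched on top of HttpListener; FakeWebServer and its members are names I've invented for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Net;

// Hypothetical in-memory file server for NUnit tests: register a byte[]
// against a path, and requests for that path are answered from memory.
class FakeWebServer : IDisposable
{
    HttpListener listener;
    Dictionary<string, byte[]> files = new Dictionary<string, byte[]> ();

    public FakeWebServer (int port)
    {
        listener = new HttpListener ();
        listener.Prefixes.Add (string.Format ("http://localhost:{0}/", port));
        listener.Start ();
        listener.BeginGetContext (OnContext, null);
    }

    public void Add (string path, byte[] data)
    {
        files ["/" + path.TrimStart ('/')] = data;
    }

    void OnContext (IAsyncResult result)
    {
        HttpListenerContext context;
        try {
            context = listener.EndGetContext (result);
        } catch {
            return; // listener was stopped
        }
        listener.BeginGetContext (OnContext, null);

        byte[] data;
        if (files.TryGetValue (context.Request.Url.AbsolutePath, out data)) {
            context.Response.ContentLength64 = data.Length;
            context.Response.OutputStream.Write (data, 0, data.Length);
        } else {
            context.Response.StatusCode = 404;
        }
        context.Response.Close ();
    }

    public void Dispose ()
    {
        listener.Stop ();
    }
}
```

A test can then do server.Add ("test.bin", CreateFakeByteArray ()); and point an ordinary HTTP request at http://localhost:port/test.bin.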

The reason for this setup is that I need a better way to test my implementation of WebSeeding in MonoTorrent without having to code up my own buggy file server ;)

Monday, September 29, 2008

tee-sharp a.k.a. CopyStream

How many times do you have one stream, but you actually want to write the data to multiple places simultaneously[0]? Well, now you can[1]!

I took an hour and spun up this awesome asynchronous beast of a stream splitter. There is an optimisation that could be applied to it: Reads can be performed at the same time as writes. I figured that for a 1.0 implementation, this was good enough. If anyone wants to try their hand at making the read perform in parallel with the writes, feel free. Patches are welcome ;)


EDIT: There's also a 'deliberate' bug there. 10 points to the first person to spot it and bonus 10 points if you can fix it with less than 5 lines of extra code.

Wednesday, September 24, 2008

Captchas, a bridge too far?

I was just trying to open a new Gmail account when I encountered the unsolvable captcha. I couldn't for the life of me figure out what was written in that horribly mangled image of what I can only assume were standard roman-alphabet letters. I can only assume that I am in fact illiterate, and that would explain why I couldn't read it.

Thankfully, Google have a second option. They have a little link which reads out the captcha for you. It turns out that I can't understand English either. I couldn't make out a word of what was said. So, if anyone out there can speak English, could you please decipher this for me:

Sunday, September 21, 2008

It lives!

Revision 113625 - Enable DHT by default in MonoTorrent.


Saturday, September 20, 2008

How verbose is too verbose?

A common thing to do in code is to perform an action on each element in an array of objects. In C# there are two main ways to write this:
// Lets just assume this list of strings has been populated with lots of strings
List<string> allStrings = GetLotsOfStrings ();

// Method 1: The for loop
for (int i = 0; i < allStrings.Count; i++)
DoStuff (allStrings[i]);

// Method 2: The foreach loop
foreach (string s in allStrings)
DoStuff (s);
However, both of those methods are far too verbose. There is another way, which is much much nicer!
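Presumably the method in question is List<T>.ForEach, which takes the action as a delegate and makes the loop disappear entirely:

```csharp
using System;
using System.Collections.Generic;

class ForEachExample
{
    static void DoStuff (string s)
    {
        Console.WriteLine (s);
    }

    static void Main ()
    {
        List<string> allStrings = new List<string> { "a", "b", "c" };

        // Method 3: pass the method group straight to List<T>.ForEach
        allStrings.ForEach (DoStuff);
    }
}
```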
How awesome is that, eh?

Wednesday, September 03, 2008

MonoTorrent is teh awesome

How many lines to host a torrent server?

Any ideas?


Check it out yourself

So what happened in SoC 2008?

So, after ~3 months of hacking while travelling up the east coast of Australia, what exactly have I managed to accomplish in this year's SoC? Well, quite a lot :) Here's a list of new stuff and upcoming stuff in no real order:

DHT
DHT support is available in SVN. It's pretty much complete, but lacks some real world testing. There are currently about 35 NUnit tests covering all important modules in the code. I need to give this a week or two of solid testing and then I'll be enabling it by default. A few updates will need to be applied to MonoTorrent so that the 'private' flag will be obeyed now that we have DHT support.
IP Address Banning
Awesome support for this has been added to SVN using a combination of my own code and code written by The Great Bocky. It uses an extremely efficient way of storing IP Address ranges so that they can be represented by two integers plus a little overhead. There is a parser which supports all the main ban lists and users can parse other formats manually and add the addresses in that way.
Extended Messaging Protocol
Support for the LibTorrent extension protocol is complete. This allows custom messages to be sent to remote peers over the standard bittorrent protocol. So if you require the ability to send arbitrary data to a remote peer and have them react in a special way, you can!
Http Seeding (Web Seeding)
Support for the getright style Http Seeding is complete. This (better) specification allows a standard HTTP server to act as a seed with no special software required. If MonoTorrent decides that there aren't enough peers available in the swarm to allow the torrent to complete, it will automatically start downloading the necessary chunks from the server.
Peer Exchange
Support for this has been completed by Olivier Dufour. This pretty cool idea allows peers you are connected to to send you details about other peers which are active in the swarm. This way you gain information about more peers even if the tracker goes offline.
Faster SHA1
Due to both an algorithm change and architecture changes, hashing performance has nearly doubled. This means that hashing a file takes less than 1/2 the time it used to *and* that CPU usage while downloading is reduced.
Better Encryption
There was a bug in header-only encryption that prevented it from working correctly before. As MonoTorrent always defaulted to Full Encryption, this wasn't such a huge issue before. Along with this bugfix, encryption is now more performant than before - using less CPU and less memory. The code also shrunk considerably in size and is much more maintainable than before.
Deadlock Free
It's now impossible to deadlock the library. This isn't so important for the end-user, but for anyone programming with MonoTorrent it's great news. If you are extending the library to add extra functionality internally, it's now easy to ensure that you do everything in a thread-safe and non-deadlocking manner.
Abort long connection attempts
Sometimes an operating system might wait an incredibly long time before aborting a connection attempt. This meant that if MonoTorrent tried to connect to a peer that was no longer available, sometimes the OS would take up to 150 seconds to abort the attempt. Worst case scenario is that the first 5 peers you connect to all take 150 seconds to abort and it looks like MonoTorrent is doing nothing. Now MonoTorrent hard-aborts a connection attempt if it takes more than 10 seconds.
Streaming torrents ahoy!
Two guys, Karthik Kailash and David Sanghera, have created a new way to download a torrent with MonoTorrent. Generally speaking, the rarest piece of a torrent is downloaded first, then the second rarest and so on. This new code allows you to specify a range of bytes which is High, Medium or Low priority. Then, within these ranges the rarest-first algorithm is active. For example, if you are a video player and you want to start playback from byte 1000, you can tell MonoTorrent that the range from 1000 -> 5000 is important so those bytes are downloaded first, which allows you to start playback as soon as enough of that high-priority data has arrived.
Unit Tests
The number of tests covering MonoTorrent has doubled over the summer, from 55 up to 111. Every test makes the likelihood of accidentally introducing a bug less and less. I like tests ;)
Banshee Plugin
Finally, while not quite related to MonoTorrent itself - a plugin for Banshee has been created which allows the downloading of torrent based podcasts. It's still a work in progress, but hopefully that can be cleaned up and completed pretty soon.
So with all these changes and features, I'm hoping to push out the next release of MonoTorrent by the end of September. This release is unlikely to include DHT, but I hope to have a second release shortly afterwards which will include DHT.
Anyway, I'm off to pack my bags now so I'm ready to head to Ha Long City at 7am in the morning. I'm enjoying my last week in Viet Nam at the moment. It's been a blast, though sometimes I wonder if they deliberately decide to not understand what I say just because I'm mispronouncing it slightly. 'Ho Chi Minh' isn't *that* hard to understand, is it?

Thursday, July 17, 2008

Socks proxies in .NET

I'm just wondering if anyone out there knows a library which has a nice MIT/X11 (or similar) license which will allow me to use a socks based proxy. Alternatively, 10 points to the first person who writes the wrapper to do it ;)

I had found one before, but I seem to have misplaced the link and google is failing me now.

EDIT: Ideally it'd be available in source form so I don't have to bundle a binary. I *really* don't want to bundle a library if I can avoid it.

Friday, June 20, 2008

MonoTorrent 0.4 and Monsoon 0.15

MonoTorrent 0.40 has been released. There weren't many changes feature-wise, but there's been quite a lot of under the hood changes. Details can be found on

Also, Monsoon 0.15 has been released. The release notes are available and your packages can be gotten from here.

Fun times, eh?

MonoTorrent 0.50 is slated for a few weeks time (don't hold me to this). There are a bucket load of features in the works which will definitely kick some ass. I've been getting some great patches recently from Olivier Dufour, which he has detailed in his post. These should all make 0.50. I've also been getting some awesome work from Karthik Kailash and his friend David (whose second name I can't find now): a fancy debugging GUI which exposes all the internals to make it easy for me to detect bugs/issues; Ono support, which helps get faster transfers; BitTyrant-like unchoking, which prioritises peers who reciprocate data, resulting in faster transfers; and a new piece picking algorithm which allows you to stream a media file via torrent efficiently.

I'm unsure how many of those features will hit 0.50; it depends on when they hit SVN and how much testing I can get in. But hopefully a few of them will get there.

Wednesday, June 11, 2008

MonoTorrent - Expanding your universe

As I'm sure everyone has heard at this stage, Banshee 1.0 has been released. It's a huge step up from the old 0.13.x releases, and well worth checking out!

So, now that banshee has some kickass podcast support, along with video support, wouldn't it be nice if you could download video podcasts which have a .torrent payload?

Wouldn't it be awesome if there was a .NET-based torrent library, exposed via a DBus service, that could be integrated with Banshee with just a few lines of code? Of course, once you've integrated the actual torrent downloading, how do you make Banshee realise that .torrent files need to be handled specially? Well, write another few lines of code.

So all in all, because of Banshee's awesome extension framework, I wrote less than 200 lines of Banshee code to enable it to download torrents. I was surprised by how easy everything was. I was up and running with the new extension within about 10 minutes. So, if you're interested in this extension, attach yourself to the bug report and you'll be able to keep up-to-date with the latest happenings.

After all this, what exactly does it look like when you download a torrent podcast? Well, it looks exactly like it does for a regular podcast download. You don't have to do anything special, it's all just MAGIC! Check out the screencast.

Sunday, May 25, 2008


Why the hell did my blogspot account have titles disabled by default? Did I stupidly manage to turn them off the day I opened my blogspot account?

Anyway, I have titles... finally!

Thursday, May 15, 2008

The Summer of Code is starting soon, and I'm a student once again. So what are the big plans this year? Well, hopefully a lot! Here's a brief outlook on what to expect during this summer. Note, these are in no particular order.

1) DHT support in MonoTorrent. This is probably the most time-consuming feature that is planned.

2) A DBus-based daemon for MonoTorrent. The idea behind this is to allow applications to consume torrent files without worrying about interop-ing with .NET. This would provide a system-wide torrent service which any application, or many simultaneous applications, could take advantage of.

This daemon will expose a simplified API as compared to the monotorrent library itself.

3) HTTP/Socks proxy support.

4) A proper NUnit testing framework. All the essentials are now implemented in MonoTorrent to allow me to test stuff deep inside the library relatively easily. Now I just need to implement my test harness and then some tests (work on that actually started yesterday :) ).

5) Implement support for both the Azureus and Libtorrent messaging protocols. Support is mostly there for the Libtorrent protocol. This will also allow user-defined messages to be sent via the torrent connection.

6) As per usual, I'll be spending time with a profiler seeing where performance can be improved. Don't expect too much from this though, things are already pretty good ;)

Now that we have a sweet GUI for MonoTorrent, which is going to be available in Suse 11.0 (other distros also have packages these days), we need to keep improving it. Once again, these are in no particular order:

1) Get Monsoon portable to both MacOS and Windows. 95% of this work has been completed already, so it's nearly done. Once this has been completed, I need to look into packaging installers for these platforms (or someone can offer to do that for me ;) ).

2) Next we get cracking on the buglist and try and resolve those. Quite a number of cosmetic things need to be looked at and there are also a few bigger issues, such as that memory issue (which is still proving to be quite elusive!).

3) General nicing up of everything. There isn't really a firm plan for Monsoon yet. A more detailed timeline and suchlike will be created as soon as I can get together with buchan after my exams and see where we want to take this.

All in all, a busy summer.

Friday, May 09, 2008

It's funny. I always thought that DRM should be used as a way to make life difficult for people pirating software/music. I never thought it would be used as a way to convince people that pirating software is the better way:

Seriously, what are these people thinking?

Wednesday, April 30, 2008

I haven't done a Mono-specific post in a while, so I'm glad I can change that with some good news! A few days ago, Scott Peterson (who took part in the Summer of Code last year to port Banshee to Windows) was complaining that Monsoon took an hour to hash a 60GB torrent. I was a bit surprised at this, because it shouldn't be that slow! I did know that SHA1 hashing in Mono was a tad slow, but no way should it be *that* slow.

Anyway, he decided to fire up a profiler and get cracking on optimising the hell out of the SHA1 class in Mono, and I decided that I'd skip some study and do the same. We had two main aims in this:
  1. To make the fastest possible implementation using no unsafe code
  2. To make the fastest possible managed SHA1 implementation
Aim 1:
In order to do this, we couldn't use unsafe code; everything had to be done without pointers. We also had a constraint that we couldn't make the class significantly longer. In fact, *reducing* the lines of code was also an aim, because the more lines of code there are, the slower it is for the JIT to process.

We did a number of iterations on the code, testing different things out. We fully unrolled the three main loops, we partially unrolled some of them, and we benchmarked everything in between. The things we learned in the end were:

1) In this particular algorithm, there was a great benefit to using local variables rather than fields. That gave a biggish boost. EDIT: This is because mono doesn't perform array out of bounds check removal (abcrem) on fields. It only performs it on local vars. As array access is *very* frequent, removing these checks is a huge benefit.

2) Common subexpressions should be computed once. This code:

buff[i] = ((buff[i-3] ^ buff[i-8] ^ buff[i-14] ^ buff[i-16]) << 1)
        | ((buff[i-3] ^ buff[i-8] ^ buff[i-14] ^ buff[i-16]) >> 31);

actually performed significantly slower than:

uint temp = buff[i-3] ^ buff[i-8] ^ buff[i-14] ^ buff[i-16];
buff[i] = (temp << 1) | (temp >> 31);

3) Re-rolling some of the loops did not affect performance significantly. The performance delta between having all three loops fully unrolled and having two of them only partially unrolled was less than 6% on my system, but the IL was reduced by a fairly massive proportion.

4) Massive methods perform slower. We found that by splitting the 'ProcessBlock' method up, performance increased noticeably. Note that when we originally tested this, we had all three big loops fully unrolled and they were all in the same method. Still, it's worth bearing in mind.
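To illustrate point 1, here's a toy version of the SHA1 message-schedule expansion loop written both ways. This is a hand-made example rather than the actual Mono source; both methods compute identical results, but under Mono's JIT only the second lets the bounds-check removal pass do its job.

```csharp
using System;

class ExpandDemo
{
    public uint[] Buff = new uint[80];

    // Field version: every access to Buff[...] goes through the field,
    // so Mono can't eliminate the array bounds checks.
    public void ExpandViaField ()
    {
        for (int i = 16; i < 80; i++) {
            uint temp = Buff[i - 3] ^ Buff[i - 8] ^ Buff[i - 14] ^ Buff[i - 16];
            Buff[i] = (temp << 1) | (temp >> 31);
        }
    }

    // Local version: hoisting the field into a local up front allows the
    // bounds checks to be removed, which matters when array access dominates.
    public void ExpandViaLocal ()
    {
        uint[] buff = Buff;
        for (int i = 16; i < 80; i++) {
            uint temp = buff[i - 3] ^ buff[i - 8] ^ buff[i - 14] ^ buff[i - 16];
            buff[i] = (temp << 1) | (temp >> 31);
        }
    }
}
```

The transformation is purely mechanical, which is what makes it such a cheap win in a hot loop.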

So, what was the final performance delta with all these changes?

Core 2 Duo T7400 @ 2.16GHz
2.54x faster

Athlon 64 3200+
3.10x faster

Pentium 4 @ 2.80GHz
1.36x faster

Xeon X5355 @ 2.66GHz
1.38x faster

64bit PPC:
1.60x faster

EDIT: It seems I mixed up my numbers when I was recording them yesterday. Some of the numbers compare against SVN head and some compare against the version in Mono 1.9. I think the first two results are against the version in 1.9 and the last three are against SVN head.

Not too shabby.

Aim 2:
More to come on this later; we're still in the optimisation process, but I think it's fair to say that the unsafe version is quite a fair bit faster than the safe version so far.

Monday, April 21, 2008

So openSUSE 11.0 beta 1 has been released. I decided I'd be adventurous and give it a whirl, so I fired up my Mac and downloaded the ISO while I was in college. When I got home I realised that I had never actually burned an ISO on my Mac before, and there didn't seem to be any built-in software to do it.

So, I fired up Google, expecting to have to download trial software to burn the image and suffer all sorts of hassle and annoyance. I was pleasantly surprised to find that it was freakin' simple to burn an ISO:

$ hdiutil burn image.iso

It's now 5 minutes later, and I'm just about to reboot into the Live CD environment. Very nice!

Saturday, April 19, 2008

MonoTorrent 0.30 has been tagged and released. All I have to do now is update the website. Here's the short changelog.

Highlights include:
* Amazing extensibility - Active connections can be injected from any source you want. So if your application already has an active connection to a peer, that connection can be passed to MonoTorrent and it will be used.
* Udp Tracker support implemented - Can now use the low bandwidth udp protocol if the tracker supports it.
* Initial support for the libtorrent messaging protocol
* Implemented semi-intelligent memory buffer to reduce disk reads/writes.
* Fixed several race conditions when Stopping/Unregistering torrent managers.
* Fixed issue whereby monotorrent would stop connecting to new peers
* Can now handle torrents with thousands of files gracefully.
* Implemented IPV6 support.
* 15% faster hash checking
* Enhanced the accuracy of the ratelimiting code when a global limit is applied
* Per-file progress correctly updated when FastResume data is loaded
* If a file is set to 'Do Not Download', it now will definitely not be downloaded.
* Abort a connection attempt if it takes more than 10 seconds to complete. Some operating systems default to 3 minute timeouts which kills performance.
* Some speed and memory enhancements, as always.

* Correctly removes zombie peers (peers who crash before telling the tracker they're stopping)
* Added ability to compare peers based on an arbitrary key rather than only based on IP.
* Minor speed and memory enhancements

A precompiled binary can be found here.
A tarball can be found here.

Coinciding nicely with this is the release of Monsoon 0.11.3. The changelog looks something like this:
* Adding/Removing/Renaming of labels is completely context-menu driven now
* Can drag and drop torrents to add and remove them from a label.
* Fixed several issues persisting state across application restarts
* Global rate limits can be set by clicking on the labels which display the global download/upload rates
* Now takes advantage of the new FastResume API.
* Correctly invoking libgtk and libX11.
* Now supports nat-pmp through the use of a newer mono.nat
* Multi-select enabled in the file view
* When creating a torrent, hashchecking is skipped if you choose to seed it immediately
* You will always be prompted if you choose to remove/delete a torrent
* Made Monsoon fully translatable
* Added tooltips to the main items

Monsoon can be gotten via 1-Click install or the GNOME:Community repository:

Tuesday, April 08, 2008

So, some quick news on the MonoTorrent front. As I said a week or two ago, Monsoon is going to be part of the Suse 11 distribution (woo!). Since then a few things have happened.

1) Monsoon has hit feature freeze and has been branched. This is the version that will be included with Suse. If you want to check out the code and put it through its paces, use this url:
If you find any bugs, please use the Novell bugzilla to file a report. Some time early next week, this will be officially tagged and released, so file any bug reports early to increase the chance that they'll be fixed.

2) MonoTorrent itself has also been branched for its 0.3 release. If you're a developer using MonoTorrent, check the code out from:
Bug reports are welcome (as always), same place as above (except use the MonoTorrent module, not the Monsoon module). I'll have full release notes available when the release is made. Some pretty cool stuff has been done along with the usual bug quashing ;)

3) Finally, mono-curses has been updated again to run against MonoTorrent 0.30. So if you want a slick cool ncurses GUI for MonoTorrent, check the code out from:
And finally, I just want to add: All Your Torrent Are Belong To Us - Use Monsoon! ;)

Friday, April 04, 2008

There was a meeting yesterday for openSUSE Gnome, one of the important decisions for the day (in my eyes) was the decision about which BitTorrent client was going to be bundled with suse.

Torrent default app decision:
Monsoon seems the more dynamic app, good response from its maintainer
Transmission seems to have a better UI, actively developed, although a bit mac centric
BitTorrent-gtk very basic but should work for basic needs
AI: add both monsoon and transmission, monsoon as default
AI: suseROCKS to run tests with both apps
AI: FunkyPenguin to package Transmission today
AI: vuntz to move Transmission and monsoon to autobuild and drop gnome-btdownload
We're in :)

Monsoon had an open bug report on making it translatable, after the above decision was made, the priority on translations became pretty critical ;) Meebey volunteered to go and get translations all set up and spent a few hours yesterday getting that all sorted and creating the first translation (German). Olivier Dufour, who created a Winforms based GUI for MonoTorrent has also volunteered to do a French translation.

So, if anyone out there wants to translate Monsoon into their native language (or at least one they're good at ;) ), please join us on our new irc channel, #monsoon on We'll get you sorted out.

Saturday, March 29, 2008

I got some amazing news just a few minutes ago. I was chatting away in #mono about how I was pushing a new release out soon, when anf6 (who is also called Alan) piped up with this:
|anf6| alan: I've been working on a WebUI for MonoTorrent, most of the functionality works ^_^
|alan| anf6: are you serious?
|anf6| yes
|alan| :o
It turns out that anf6 had been chatting to the developer of the uTorrent WebUI (Directrix) about the possibility of creating a MonoTorrent backend for the UI. Directrix was all for that! So, without further ado, here's the obligatory screenshot:

All I can say is: WOW! I can't wait for this to hit release!

UPDATE: I just found out that less than 300 lines of code were needed for the MonoTorrent -> uTorrent WebUI integration. How slick is that? That includes hosting MonoTorrent and a mini host to serve the pages.

Friday, March 28, 2008

A few days ago, I heard the Suse team were looking for a new default torrent client for Suse 11.0. So of course I jumped in saying "Well, why not Monsoon?" and fired off a response with a ton of links suggesting that Monsoon was the most amazing torrent client ever and there was no need to review any other clients. Of course that didn't work, but Monsoon has been taken into consideration as a viable candidate.

Andrew Wafaa took it upon himself to do a little review of his own, and Monsoon fared pretty well. I was happy enough with that. Some of the points mentioned on the thread include:

1) Applications which need wizards are bad. For a default client, you want something rough and ready.
2) Something minimalistic but packed with features is good. No massive complicated apps like Azureus.

Of course, 2 is a bit of a contradiction; it's quite hard to get both. In a GUI, exposing a feature or a statistic requires a new GUI element. So there's a fine balancing act between keeping the GUI clean and small while still exposing all the nice features in an easy-to-use way.

Here's a few screenshots of Monsoon anyway, which I hope show the full diversity of what it can do.

The first time you start Monsoon, you aren't greeted by a wizard, everything is set up nicely for you. Everything is shown to you by default.

When you load torrents, they automatically start. If the files already exist (like the Kubuntu one did), Monsoon checks the file to see how much of it is valid data and then resumes the download using it. Standard stuff.

The ramp-up time is fairly fast. After 30 seconds Monsoon can connect to 50 people (the default limit for a torrent). Note, it is smart in how it connects to people. It doesn't have more than 5 pending connection attempts at any one time. This prevents overloading of cheaper routers.

For the people out there who want a really minimalistic client, you can hide everything. The GUI can be reduced down to something even slimmer than Transmission should you so wish. There are also plans to implement a mini-mode, which will consist of just a single progress bar per torrent.

So what if you want to set the global upload/download rate? Why, it's simple! Just click on the global upload/download speed indicators and away you go.

If you want to organise your torrents, it's just a simple context menu. You can add/remove/rename the labels with a right click, or double click. Slick, eh? There is also a patch in the works which will allow you to drag'n'drop torrents from the main view into a label and drag'n'drop them between labels. This will replace the existing way of doing it which involves ticking checkboxes in the 'Options' menu.

Wednesday, March 26, 2008

There's now a proper issue tracker for Monsoon and MonoTorrent. It's being hosted on the novell bugzilla (thanks guys!) so file your bugs there if at all possible. It makes it easier for me to monitor and record what exactly goes on.

The issue tracker can be found in the 'Monsoon' section of Mono:Tools. If you have a bugzilla account already, this link should bring you right to the submission form:

Sunday, March 09, 2008

This is just a shout out to any budding Artistes, (especially that guy who did all those cool looking MonoDevelop icons): I'm looking for a nice logo for Monsoon. I have no ideas in mind, so any suggestions are welcome. Drop a comment here or send me an email if you have anything to show me :)
This is just a quick follow-up to my last post. I decided to do a few benchmarks to see how Transmission compares to Monsoon (the GTK# GUI for the MonoTorrent library). I'd heard of Transmission before, but I'd never used it. I was pretty shocked by the results. This is a quick summary of what I found:

1) Memory usage.
Transmission used less memory than MonoTorrent. That came as no surprise to me. No matter how efficiently I code, I can never get as efficient as an app written in C/C++, mostly because in a .NET-based app the .NET runtime/JIT must be loaded, which consumes a few MB of memory. Percentage-wise, yes, the difference is huge, but if you look at overall memory, it's not massive.

MonoTorrent: 35 Res / 20 Shared
Transmission: 20 Res / 13 Shared

These figures were taken after extensively using the GUI, opening menus and flicking around pages.

2) Hashing performance
Another important metric: how long does it take to hash a complete file?
MonoTorrent: 95 seconds
Transmission: 85 seconds

This difference is fairly negligible, and a full hash will rarely be performed, but I suppose it was worth measuring. One optimisation I could make in MonoTorrent which would reduce that gap slightly would be to read the next chunk of data off disk asynchronously while hashing the current chunk. At the moment it's all sequential, but I suppose it could easily enough be made parallel. It shouldn't make a huge difference though.
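That overlap idea would look something like the sketch below. This is hand-rolled for this post, not MonoTorrent code: while the current chunk is being hashed, the next read is already in flight.

```csharp
using System;
using System.IO;
using System.Security.Cryptography;

static class OverlappedHasher
{
    // Hashes a file chunk by chunk, starting the next disk read before
    // hashing the chunk that's already in memory.
    public static byte[] Hash (string path, int chunkSize)
    {
        using (FileStream stream = File.OpenRead (path))
        using (SHA1 sha = SHA1.Create ()) {
            byte[] current = new byte[chunkSize];
            byte[] next = new byte[chunkSize];

            int read = stream.Read (current, 0, current.Length);
            while (read > 0) {
                // Kick off the next read, then hash the current chunk.
                IAsyncResult ar = stream.BeginRead (next, 0, next.Length, null, null);
                sha.TransformBlock (current, 0, read, current, 0);
                read = stream.EndRead (ar);

                byte[] tmp = current; current = next; next = tmp; // swap buffers
            }
            sha.TransformFinalBlock (current, 0, 0);
            return sha.Hash;
        }
    }
}
```

Since hashing is CPU-bound and reading is IO-bound, the two overlap nicely; the win is bounded by whichever of the two is slower.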

3) Download speeds
This would be the most important metric. This is where the biggest surprise came.
MonoTorrent: 15 seconds - 400kB/sec, 30 seconds - stabilised at 550-600kB/sec (maxed my connection) and connected to 50 people, the maximum allowed by my settings.
Transmission: 15 minutes: still at less than 50kB/sec and still only connected to 6 peers.

What am I doing wrong with Transmission to make it so slow? It's not NAT (even though Transmission's UPnP support cannot detect my UPnP-enabled router) because I manually forwarded the port in the end. I was using SVN head of both MonoTorrent (r97353) and Transmission (r5227) when I ran this quick test.

EDIT: Just as I finished this, Transmission managed to connect to 3 additional people, and one of them had a massive upload capacity which let Transmission reach ~480kB/sec. Still, why did that take so long? These results were consistent every time I started/stopped both Transmission and Monsoon. Monsoon consistently maxed out my connection quickly whereas Transmission consistently took forever to even break 40 kB/sec.

UPDATE: I just want to add that I tested using the ubuntu-7.10-desktop-i386.iso torrent on Suse 10.2.

Tuesday, March 04, 2008

So, a thought struck me. Ubuntu currently has a horrible default bittorrent client. It's about to be replaced by Transmission which, while definitely a step up in the world, still lacks features (in my opinion) compared to other available clients, most especially the great work done on the GTK GUI for MonoTorrent.

I propose that the MonoTorrent GUI should be bundled with Ubuntu. MonoTorrent supports everything Transmission does (except for Peer Exchange[1] and scheduling[2]) along with these extra cool features:

* Fast Peer Extensions - allow you to start a torrent faster
* Udp Tracker Protocol - allows you to use the bandwidth saving UDP protocol for tracker communications
* LibTorrent Extension Protocol - Ok, this is only partially supported. While it's implemented in SVN, it's not fully tested or enabled by default yet.
* RSS Feeds - Automagic downloading from RSS feeds. (see video)
* Organise downloads into tags (see video)
* Multi-Tracker Protocol support - If the main tracker is down, no worries! You can use the backup ones!
* Retarget Files before/while they are downloading i.e. you can rename a file as you download!
* Can monitor a folder and automatically load and download new torrents
* Minimizes to the gnome notification area.

The GUI including all supporting libs is a 384 kB package, and I think in the region of 600kB when installed[3]. What we're looking at is 600kB for a fully fledged, feature-rich, uber-snazzy bittorrent client. The best thing is [b]all[/b] the dependencies are already on the live CD.

So, if there are any Ubuntu devs out there, would you like to consider MonoTorrent as the default bittorrent app? If not, why not? I'll gladly do my best to fix any issues you have. Leave your comments below and let's get some discussion going!

[1] It'd take a few hours to complete support for this with tests.

[2] I did actually receive a patch to implement this a long while ago, but in the end, it fell by the wayside. I am a horrible person :(

[3] The package can be reduced a fair bit by splitting out the Tracker component of MonoTorrent using the Linker and removing bundled libraries which are available as packages nowadays. I'd guesstimate at least 100kB can be saved from the installed size from these optimisations. 70kB by removing the bundled DBus stuff and at the very least, 30kB by shrinking MonoTorrent.dll.

UPDATE: I'd just like to point out that an official name has been chosen for the GUI: [b]Monsoon[/b]. The next release will have everything rebadged to this name.

Wednesday, February 27, 2008

It's that time of year when all us Europeans gather together for a big competition to decide who has the most friends. It's called the Eurovision. Last year, we did spectacularly: we were within inches of achieving our goal of finishing with zero points until those Albanian b******s decided to give us 5 points. I swear, that was our plan all along!

Anyway, we learned our lesson last year. This time we're going to put in an entry that simply cannot fail! Ladies and gents, I present to you Dustin The Turkey, singing our Eurovision entry. Save us all.

Friday, February 22, 2008

After my rather lengthy post about all the cool stuff I was doing with MonoTorrent, I was asked this question:
I'm very curious about what are the real possibilities of MonoTorrent library. I mean, I always thought Bittorrent was just a protocol for P2P file exchange, but when I see so much modularity on MonoTorrent... The question is, how can MonoTorrent help me as a programmer?
Well, that's a pretty good question.

I'm going to talk briefly about the three important new features that are (or will soon) be available:

1) Custom Peer Connections
Aim: To allow the user to route all peer traffic over whatever medium they want.

Use case 1: You are writing an application which requires that all connections are encrypted end-to-end. This means you have complicated routines to set up the connections. You want to add the ability to transfer files.

Use case 2: You want to route bittorrent traffic over a network such as Tor.

Use case 3: Restrictive firewall. Suppose only certain kinds of traffic are allowed through a particular firewall you have. You can push the bittorrent traffic inside a different protocol to allow it to pass through the firewall.

In cases 1 and 2, we assume that normally it is impossible for MonoTorrent to create the connections itself. So what you do is create the necessary connections manually, then wrap them in the IConnection interface and pass them directly into MonoTorrent. Everything else is automatic.

For case 3, it might be possible to implement a HttpPeerConnection, whereby you push the bittorrent messages inside a HttpWebRequest, send that to the remote peer, and he decodes them back into their original form for processing. Note: I don't recommend doing this to get around a firewall at work. Also, if you do implement something like that, only other clients which implemented the same feature would be able to communicate with you, but it is an interesting application.
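As a concrete illustration of cases 1 and 2, wrapping an existing connection looks roughly like this. Note that the IConnection shape below is a simplified stand-in I've written for this post, not the exact MonoTorrent interface:

```csharp
using System;
using System.Net.Sockets;

// Simplified stand-in for MonoTorrent's connection interface.
interface IConnection : IDisposable
{
    IAsyncResult BeginReceive (byte[] buffer, int offset, int count, AsyncCallback callback, object state);
    int EndReceive (IAsyncResult result);
    IAsyncResult BeginSend (byte[] buffer, int offset, int count, AsyncCallback callback, object state);
    int EndSend (IAsyncResult result);
}

// Wraps a socket your application has already connected (and perhaps
// encrypted or tunnelled) so the engine can use it like any other peer.
class SocketConnection : IConnection
{
    readonly Socket socket;

    public SocketConnection (Socket socket)
    {
        this.socket = socket;
    }

    public IAsyncResult BeginReceive (byte[] buffer, int offset, int count, AsyncCallback callback, object state)
    {
        return socket.BeginReceive (buffer, offset, count, SocketFlags.None, callback, state);
    }

    public int EndReceive (IAsyncResult result)
    {
        return socket.EndReceive (result);
    }

    public IAsyncResult BeginSend (byte[] buffer, int offset, int count, AsyncCallback callback, object state)
    {
        return socket.BeginSend (buffer, offset, count, SocketFlags.None, callback, state);
    }

    public int EndSend (IAsyncResult result)
    {
        return socket.EndSend (result);
    }

    public void Dispose ()
    {
        socket.Close ();
    }
}
```

The engine never knows or cares how the socket was established; a Tor SOCKS tunnel or an encrypted channel is indistinguishable from a plain connection.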

2) Custom Trackers
Aim: To allow someone to add peer data into the engine manually.

Use case 1: You want to type in your friends ip:port so you can directly connect to him

Use case 2: In your application, you keep a list of people you can/should connect to, and you know that MonoTorrent can connect to them directly.

Use case 3: You want to implement an alternative peer source like the bittorrent DHT protocol

In all these cases, you just have to implement two simple methods to allow MonoTorrent to get data from your source. For case 1, whenever you enter the ip:port combination, you just raise an event with that IP:Port stored in the event args.

Case 1 is a manual event, and so MonoTorrent would never query that 'Tracker' for peer details, but for case 2 and 3, it's more fun. All you do is wait for MonoTorrent to call the 'Announce' method on your tracker, then you go off and find the peers and raise the AnnounceComplete event when you're ready. It's as easy as pie.
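Here's a sketch of the case 1 "manual" tracker. The base class and event are simplified stand-ins for the real MonoTorrent.Client.Tracker API, so treat all the names as illustrative:

```csharp
using System;
using System.Collections.Generic;
using System.Net;

class PeersFoundArgs : EventArgs
{
    public readonly List<IPEndPoint> Peers;
    public PeersFoundArgs (List<IPEndPoint> peers) { Peers = peers; }
}

// Simplified stand-in for the real tracker base class.
abstract class Tracker
{
    public event EventHandler<PeersFoundArgs> AnnounceComplete;

    public abstract void Announce ();   // the engine calls this periodically
    public abstract void Scrape ();

    protected void RaiseAnnounceComplete (List<IPEndPoint> peers)
    {
        EventHandler<PeersFoundArgs> handler = AnnounceComplete;
        if (handler != null)
            handler (this, new PeersFoundArgs (peers));
    }
}

// Use case 1: manually feed in a friend's ip:port whenever you like.
class ManualTracker : Tracker
{
    public override void Announce () { }   // never finds peers by itself
    public override void Scrape () { }

    public void AddPeer (string ip, int port)
    {
        List<IPEndPoint> peers = new List<IPEndPoint> ();
        peers.Add (new IPEndPoint (IPAddress.Parse (ip), port));
        RaiseAnnounceComplete (peers);      // hand the peer to the engine
    }
}
```

A DHT-style tracker would do real work inside Announce and raise the same event when its lookup finishes; the engine can't tell the difference.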

3) Custom Messages

Use case 1: You're lazy and don't want to define your own messaging protocol or message format and want things to 'Just Work (tm)', so just piggy back all your messages over the MonoTorrent system.

Use case 2: You want/need to send a few pieces of information between torrent clients but don't want to set up a separate communication channel to do so.

Use case 3: You want to make Bittorrent 2.0.

You need to implement a slightly more complex interface than in the previous two examples, but it's nothing amazingly difficult. The helper methods for reading/writing the basic types from a byte[] are all there to make creating a message that much easier, so you're left with implementing a mere 4 abstract methods. Whilst this feature isn't finished yet, it will be as simple as defining your custom message, then making a call like:

byte messageId = 9;
Message.Register (messageId, delegate { return new MyCustomMessage (); });

or, if the message needs data from its TorrentManager:

Message.Register (messageId, delegate (TorrentManager m) {
    return new MyCustomMessage (m.SomeData, m.OtherData);
});

Once that's done, you can queue up your message to be sent and react to its arrival with ease. What you might want to send, I don't know. But the ability is there for you to send it.

So in brief, MonoTorrent can be used as a drop-in replacement for '1 to 1', '1 to many', 'many to 1' or even 'many to many' transfers. I'm not saying it *should* be used for all those cases (it may be the heavyweight alternative to a simple Socket.Send), but the possibility is there. You can send customised messages to the other clients so that they can perform custom logic, and you can insert peer details easily from any source.

Monday, February 18, 2008

It was on the 28th of January that I released MonoTorrent 0.20. Here we are, less than a month later, and already I have some cool new stuff to blog about. This is a lot better than the time gap between the last two MonoTorrent releases.

So, what's new? LOTS! Let me give a brief overview of what's possible nowadays.

1) Custom Peer Connections
It's now possible to plug MonoTorrent directly into another application and have MonoTorrent use connections created by that application. The advantage of this is that you can tunnel MonoTorrent through anything now. You can create an encrypted connection whatever way you want and then just pass it straight into MonoTorrent and it'll be used. This also makes it easy to push traffic through networks like Tor (or whatever).

2) Custom Trackers
First, I'm not talking about the server aspect of MonoTorrent; I'm talking about the client-side class used to deal with announcing/scraping to a server. It's now possible to (very easily) implement your own tracker class. What use is this, you say? Well, quite a bit actually! If you want to be able to add peer data into MonoTorrent (but not active connections, as I described above) you can subclass the MonoTorrent.Client.Tracker class and do it from there.

Suppose you want to implement a new protocol for client-tracker communications. If you want MonoTorrent to support it, just inherit from MonoTorrent.Client.Tracker, register that implementation with the TrackerFactory and you're done! You don't have to modify one letter of MonoTorrent source code to add in support for your new protocol.
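The factory side of that might look something like this sketch. Again, these are illustrative stand-ins I've written for the post, not the real MonoTorrent API:

```csharp
using System;
using System.Collections.Generic;

// Minimal stand-ins for the real tracker classes.
abstract class Tracker
{
    public readonly Uri AnnounceUrl;
    protected Tracker (Uri announceUrl) { AnnounceUrl = announceUrl; }
}

class UdpTracker : Tracker
{
    public UdpTracker (Uri announceUrl) : base (announceUrl) { }
}

delegate Tracker TrackerCreator (Uri announceUrl);

// The engine asks the factory for a tracker; whichever creator was
// registered for the announce url's scheme wins.
static class TrackerFactory
{
    static readonly Dictionary<string, TrackerCreator> creators =
        new Dictionary<string, TrackerCreator> ();

    public static void Register (string scheme, TrackerCreator creator)
    {
        creators[scheme] = creator;
    }

    public static Tracker Create (Uri announceUrl)
    {
        TrackerCreator creator;
        if (!creators.TryGetValue (announceUrl.Scheme, out creator))
            throw new NotSupportedException ("No tracker for scheme: " + announceUrl.Scheme);
        return creator (announceUrl);
    }
}
```

Registering a brand new protocol is then one line, e.g. `TrackerFactory.Register ("udp", delegate (Uri u) { return new UdpTracker (u); });`, with no changes to the engine itself.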

3) Custom Messages
Ok, so technically speaking this still isn't possible yet; however, all the architecture to enable it is there, I just need to expose it publicly in a nice way. Anyway, when I implemented support for the libtorrent extension protocol, I realised that the way I handled peer messages just wouldn't work. Basically, I used a giant switch statement to select the correct message to decode. This meant that all messages had to be defined within MonoTorrent and had to exist at compile time. There was no way for the user to define their own message type.

So, instead I updated the API to use some .NET 2.0 loveliness to allow custom messages to be defined outside of MonoTorrent and still be decoded and handled correctly.

4) Custom Writers
If I were to pick a favourite change out of the list, it would be the custom messages, but this comes a close second ;) You can now define your own 'PieceWriter' class. What this allows you to do is redirect disk read/write requests wherever you want them to go. Of course, the logical first implementation is a memory buffer.

Yes, MonoTorrent will now cache reads/writes in memory where prudent. For example, each 'piece' you download consists of multiple 'blocks'. Previously each block was written straight to disk as it arrived. When all the blocks had arrived, they were all read straight back off the disk so that a SHA1 hash of the piece could be generated. This was very inefficient.
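A stripped-down sketch of the idea follows. The interface here is my simplification (the real PieceWriter API has more to it), and a real cache would also need flushing and eviction, which I've omitted:

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-in for the real writer interface.
interface IPieceWriter
{
    void Write (string path, long offset, byte[] buffer, int count);
    int Read (string path, long offset, byte[] buffer, int count);
}

// Keeps every written block in memory so that hashing a just-completed
// piece never has to go back to the disk.
class MemoryCacheWriter : IPieceWriter
{
    readonly IPieceWriter inner;   // usually the real disk writer
    readonly Dictionary<string, byte[]> cache = new Dictionary<string, byte[]> ();

    public MemoryCacheWriter (IPieceWriter inner)
    {
        this.inner = inner;
    }

    static string Key (string path, long offset)
    {
        return path + "@" + offset;
    }

    public void Write (string path, long offset, byte[] buffer, int count)
    {
        byte[] copy = new byte[count];
        Array.Copy (buffer, copy, count);
        cache[Key (path, offset)] = copy;           // remember the block
        inner.Write (path, offset, buffer, count);  // and pass it through
    }

    public int Read (string path, long offset, byte[] buffer, int count)
    {
        byte[] cached;
        if (cache.TryGetValue (Key (path, offset), out cached)) {
            int n = Math.Min (count, cached.Length);
            Array.Copy (cached, buffer, n);         // served from memory
            return n;
        }
        return inner.Read (path, offset, buffer, count);
    }
}

// Trivial sink, handy for exercising the cache on its own.
class DiscardWriter : IPieceWriter
{
    public void Write (string path, long offset, byte[] buffer, int count) { }
    public int Read (string path, long offset, byte[] buffer, int count) { return 0; }
}
```

Because the writer is just another pluggable component, the same trick gives you in-memory torrents for testing, as described further down.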

5) NUnit test-able!
Yeah, so this isn't really a new feature. In fact, it's something I should've been doing all along, but due to the implementation of MonoTorrent, certain things were far too awkward to test. Previously, in order to test internal logic I'd have to:
  1. Create a fake .torrent file and write it to disk.
  2. Create fake data and write it to disk so that it can be 'downloaded' by one client and 'seeded' by the other
  3. Instantiate a MonoTorrent.Tracker and load the torrent into it
  4. Instantiate two MonoTorrent.Clients and load the torrent into them.
  5. Hit 'start' and measure what little i could
This method means that it's quite possible temp files will be left on the disk - not good. Also, if something does go wrong, there is no way for me to find out exactly what went wrong. The Client would close the connection and all i'd get would be a ConnectionClosed event. Finally, it was just plain awkward! Creating a fake torrent and the necessary fake files was a pain in the ass.

Once the above changes were complete, it became so much easier. Now I create a fake torrent in memory (no need to write it to disk!). I create one MonoTorrent.Client using the in-memory torrent and redirect its reads/writes using a custom PieceWriter. This writer just performs in-memory operations to fill in the fake data as required. I then create a custom PeerConnection, pass one end of the connection into the engine and hold the other end myself.

This means that when I send a message into the engine, I can receive the reply and verify it is as expected using NUnit tests. Pretty sweet.
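The "hold one end myself" trick can be sketched with a pair of endpoints sharing queues. This is my own illustrative version, not MonoTorrent's actual connection class: whatever one end sends, the other end receives, with no sockets or disk involved.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical in-memory connection pair for testing: the test holds one
// end and the engine holds the other.
public class InMemoryConnection
{
    readonly Queue<byte[]> incoming;
    readonly Queue<byte[]> outgoing;

    InMemoryConnection(Queue<byte[]> incoming, Queue<byte[]> outgoing)
    {
        this.incoming = incoming;
        this.outgoing = outgoing;
    }

    // Create a connected pair: each end's outgoing queue is the other
    // end's incoming queue.
    public static void CreatePair(out InMemoryConnection a, out InMemoryConnection b)
    {
        var q1 = new Queue<byte[]>();
        var q2 = new Queue<byte[]>();
        a = new InMemoryConnection(q1, q2);
        b = new InMemoryConnection(q2, q1);
    }

    public void Send(byte[] message)
    {
        outgoing.Enqueue(message);
    }

    // Returns null when no message is waiting.
    public byte[] Receive()
    {
        return incoming.Count > 0 ? incoming.Dequeue() : null;
    }
}
```

A test then sends a hand-crafted message into one end and asserts on exactly what comes back out, instead of guessing from a ConnectionClosed event.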

In other news, the following other nifty things have been implemented:

A) Udp Tracker support
I finally got around to implementing UDP tracker support. It has been requested by a few people, so here it is. Enjoy!

B) Better FastResume support
There were a few issues with the old fast resume support: in certain advanced use cases it just plain didn't work. The fix is to make it the user's responsibility to decide where and how fast resume data is stored. This is nearly finished, but not quite; it's on my TODO list for the coming week.

C) Message Bundles
An entirely invisible change, but one I want to talk about. Whenever a piece is downloaded successfully, a 'have' message should be sent to every peer you're connected to. This is used as a way to keep track of who has what. However, have messages are tiny and typically you'd complete a few pieces a second. This means that several times a second, for each connected peer, a single 12 byte message has to be sent. This is hardly worth the effort of calling socket.BeginSend!

So, why not encode several messages at a time into the send buffer and send 'em all at once? Well, that just doesn't work: the architecture only allows one message to be encoded and sent at a time.

The solution? The MessageBundle. The MessageBundle is just another implementation of the PeerMessage base class, except that it can store multiple messages inside it. When you call Encode() on the bundle, it encodes all its stored messages into the buffer. This allows me to send multiple messages at a time, while for all intents and purposes MonoTorrent thinks I'm sending one.

Now HaveMessages are delayed and bundled together.
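The bundle idea can be sketched like this. The class and member names are my own illustration of the pattern, not MonoTorrent's exact API, and the HaveMessage here uses the standard bittorrent wire format (4-byte length prefix, 1-byte id, 4-byte piece index).

```csharp
using System;
using System.Collections.Generic;

public abstract class PeerMessage
{
    public abstract int ByteLength { get; }
    public abstract int Encode(byte[] buffer, int offset);
}

// A message that is itself a list of messages: the send path still sees
// exactly one PeerMessage, so nothing else in the engine has to change.
public class MessageBundle : PeerMessage
{
    readonly List<PeerMessage> messages = new List<PeerMessage>();

    public void Add(PeerMessage message)
    {
        messages.Add(message);
    }

    public override int ByteLength
    {
        get
        {
            int total = 0;
            foreach (var m in messages)
                total += m.ByteLength;
            return total;
        }
    }

    // Encoding the bundle just encodes each contained message back to back,
    // so several have messages go out in a single socket send.
    public override int Encode(byte[] buffer, int offset)
    {
        int written = 0;
        foreach (var m in messages)
            written += m.Encode(buffer, offset + written);
        return written;
    }
}

public class HaveMessage : PeerMessage
{
    readonly int pieceIndex;

    public HaveMessage(int pieceIndex)
    {
        this.pieceIndex = pieceIndex;
    }

    public override int ByteLength { get { return 9; } }

    public override int Encode(byte[] buffer, int offset)
    {
        WriteInt(buffer, offset, 5);          // length prefix
        buffer[offset + 4] = 4;               // message id for 'have'
        WriteInt(buffer, offset + 5, pieceIndex);
        return 9;
    }

    static void WriteInt(byte[] buffer, int offset, int value)
    {
        buffer[offset]     = (byte)(value >> 24);
        buffer[offset + 1] = (byte)(value >> 16);
        buffer[offset + 2] = (byte)(value >> 8);
        buffer[offset + 3] = (byte) value;
    }
}
```

Because MessageBundle is just another PeerMessage, the one-message-at-a-time constraint is satisfied while any number of messages actually go over the wire together.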

I'll release all this fanciness sometime in the near future.

Thursday, February 07, 2008

What kind of people do credit card companies like to give credit cards to? People who always pay their bills? People who occasionally forget and pay a few days late? How about people who forget quite often, or can't afford to pay the full bill in one month?

Well, it looks like the last group is exactly who Egg want! They have issued notice to 7% of their customers informing them that due to their bad credit rating, their credit card is being cancelled with no right to appeal. According to a fair number of customers, they have excellent credit ratings and always paid their bill in full and on time. It looks like they're being dumped because they're just not profitable.

It's kind of ironic that the 'best' customers are the ones being dumped in favour of customers who may not actually be able to pay their bills.
