How many times have you had one stream but actually wanted to write its data to multiple places simultaneously[0]? Well, now you can[1]!
I took an hour and spun up this awesome asynchronous beast of a stream splitter. There is one optimisation that could still be applied: reads could be performed at the same time as writes. I figured that for a 1.0 implementation this was good enough. If anyone wants to try their hand at making the reads happen in parallel with the writes, feel free. Patches are welcome ;)
[0] http://tirania.org/blog/archive/2008/Sep-24.html
[1] http://www.monsoon-project.org/jaws/data/files/TeeStream.cs
EDIT: There's also a 'deliberate' bug there. 10 points to the first person to spot it, and a bonus 10 points if you can fix it with fewer than 5 lines of extra code.
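For the impatient, here's the gist of what a tee stream does. This is a minimal synchronous sketch written just for this post, not the actual TeeStream.cs linked above (that one is the asynchronous version): every write is simply forwarded to each underlying stream in turn.

using System;
using System.IO;

public class SimpleTeeStream : Stream
{
    readonly Stream[] targets;

    public SimpleTeeStream (params Stream[] targets)
    {
        this.targets = targets;
    }

    public override void Write (byte[] buffer, int offset, int count)
    {
        // Forward the same chunk of data to every target stream
        foreach (Stream s in targets)
            s.Write (buffer, offset, count);
    }

    public override void Flush ()
    {
        foreach (Stream s in targets)
            s.Flush ();
    }

    // This sketch is write-only; reads and seeks are not supported
    public override bool CanRead { get { return false; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return true; } }
    public override long Length { get { throw new NotSupportedException (); } }
    public override long Position {
        get { throw new NotSupportedException (); }
        set { throw new NotSupportedException (); }
    }
    public override int Read (byte[] buffer, int offset, int count) { throw new NotSupportedException (); }
    public override long Seek (long offset, SeekOrigin origin) { throw new NotSupportedException (); }
    public override void SetLength (long value) { throw new NotSupportedException (); }
}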
Monday, September 29, 2008
Wednesday, September 24, 2008
Captchas, a bridge too far?
I was just trying to open a new Gmail account when I encountered the unsolvable captcha. I couldn't for the life of me figure out what was written in that horribly mangled image of what I can only assume were standard Roman-alphabet letters. I can only assume that I am in fact illiterate, which would explain why I couldn't read it.
Thankfully, Google has a second option: a little link which reads out the captcha for you. It turns out that I can't understand English either. I couldn't make out a word of what was said. So, if anyone out there can speak English, could you please decipher this for me:
http://monotorrent.com/GoogleSucks.wav
Sunday, September 21, 2008
Saturday, September 20, 2008
How verbose is too verbose?
A common thing to do in code is to perform an action on each element in an array of objects. In C# there are two main ways to write this:

// Let's just assume this list of strings has been populated with lots of strings
List<string> allStrings = GetLotsOfStrings ();

// Method 1: The for loop
for (int i = 0; i < allStrings.Count; i++)
    DoStuff (allStrings[i]);

// Method 2: The foreach loop
foreach (string s in allStrings)
    DoStuff (s);

However, both of those methods are far too verbose. There is another way, which is much, much nicer!

allStrings.ForEach (DoStuff);

How awesome is that, eh?
Wednesday, September 03, 2008
So what happened in SoC 2008?
So, after ~3 months of hacking while travelling up the east coast of Australia, what exactly have I managed to accomplish in this year's SoC? Well, quite a lot :) Here's a list of new and upcoming stuff, in no particular order:
DHT
DHT support is available in SVN. It's pretty much complete, but lacks some real world testing. There are currently about 35 NUnit tests covering all important modules in the code. I need to give this a week or two of solid testing and then I'll be enabling it by default. A few updates will need to be applied to MonoTorrent so that the 'private' flag will be obeyed now that we have DHT support.
IP Address Banning
Awesome support for this has been added to SVN, using a combination of my own code and code written by The Great Bocky. It uses an extremely efficient way of storing IP address ranges, so that each range can be represented by two integers plus a little overhead. There is a parser which supports all the main ban list formats, and users can parse other formats manually and add the addresses that way.
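To give a flavour of the two-integer idea (this is a rough sketch of my own for the blog, not the code in SVN): an IPv4 range is just its first and last address converted to 32-bit integers, and checking whether an address is banned becomes two integer comparisons.

using System;
using System.Net;

struct AddressRange
{
    public readonly uint Start;
    public readonly uint End;

    public AddressRange (IPAddress start, IPAddress end)
    {
        Start = ToUInt32 (start);
        End = ToUInt32 (end);
    }

    // Membership test is just two comparisons
    public bool Contains (IPAddress address)
    {
        uint value = ToUInt32 (address);
        return value >= Start && value <= End;
    }

    static uint ToUInt32 (IPAddress address)
    {
        byte[] bytes = address.GetAddressBytes ();   // IPv4 bytes in network (big-endian) order
        return ((uint) bytes[0] << 24) | ((uint) bytes[1] << 16)
             | ((uint) bytes[2] << 8)  |  (uint) bytes[3];
    }
}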
Extended Messaging Protocol
Support for the LibTorrent extension protocol is complete. This allows custom messages to be sent to remote peers over the standard bittorrent protocol. So if you need the ability to send arbitrary data to a remote peer and have them react in a special way, you can!
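For the curious, the wire format the extension protocol defines is roughly this (the sketch below is hand-rolled for illustration, not taken from MonoTorrent's own message classes): a four-byte big-endian length prefix, the bittorrent message id 20, the extended message id agreed during the extension handshake, and then whatever payload you like.

using System;
using System.IO;

static class ExtendedMessage
{
    public static byte[] Encode (byte extendedId, byte[] payload)
    {
        int length = 2 + payload.Length;                 // message id + extended id + payload
        using (MemoryStream stream = new MemoryStream ()) {
            stream.WriteByte ((byte) (length >> 24));    // 4-byte length prefix, big-endian
            stream.WriteByte ((byte) (length >> 16));
            stream.WriteByte ((byte) (length >> 8));
            stream.WriteByte ((byte) length);
            stream.WriteByte (20);                       // id 20 marks an extension protocol message
            stream.WriteByte (extendedId);               // which extension this payload belongs to (0 = handshake)
            stream.Write (payload, 0, payload.Length);
            return stream.ToArray ();
        }
    }
}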
Http Seeding (Web Seeding)
Support for getright-style HTTP seeding is complete. This (better) specification allows a standard HTTP server to act as a seed with no special software required. If MonoTorrent decides that there aren't enough peers available in the swarm to allow the torrent to complete, it will automatically start downloading the necessary chunks from the server.
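Under the hood a web seed request is nothing more exotic than an ordinary HTTP range request for the bytes of a missing piece. A rough illustration (the URL and offsets are up to the caller, and this isn't MonoTorrent's actual downloader):

using System;
using System.IO;
using System.Net;

static class WebSeedExample
{
    // Fetch a byte range from an ordinary HTTP server, e.g. DownloadChunk (url, 0, 16383)
    public static byte[] DownloadChunk (string url, int startOffset, int endOffset)
    {
        HttpWebRequest request = (HttpWebRequest) WebRequest.Create (url);
        request.AddRange (startOffset, endOffset);             // sends "Range: bytes=start-end"

        using (WebResponse response = request.GetResponse ())
        using (Stream stream = response.GetResponseStream ())
        using (MemoryStream buffer = new MemoryStream ()) {
            byte[] temp = new byte[8192];
            int read;
            while ((read = stream.Read (temp, 0, temp.Length)) > 0)
                buffer.Write (temp, 0, read);
            return buffer.ToArray ();                          // the chunk to hash-check and store
        }
    }
}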
Peer Exchange
Support for this has been completed by Olivier Dufour. This pretty cool idea allows peers you are connected to to send you details about other peers which are active in the swarm. This way you gain information about more peers even if the tracker goes offline.
Misc
Faster SHA1
Due to both an algorithm change and architecture changes, hashing performance has nearly doubled. This means that hashing a file takes less than 1/2 the time it used to *and* that CPU usage while downloading is reduced.
Better Encryption
There was a bug in header-only encryption that prevented it from working correctly. As MonoTorrent always defaulted to full encryption, this wasn't a huge issue in practice. Along with this bugfix, encryption now uses less CPU and less memory than before. The code has also shrunk considerably in size and is much more maintainable.
Deadlock Free
It's now impossible to deadlock the library. This isn't so important for the end-user, but for anyone programming with MonoTorrent it's great news. If you are extending the library to add extra functionality internally, it's now easy to ensure that you do everything in a thread-safe and non-deadlocking manner.
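One common way to get that kind of guarantee, and a reasonable mental model even if the real internals differ in detail, is to funnel all state-changing work through a single queue serviced by one thread, so locks never need to nest. A sketch of that general pattern:

using System;
using System.Collections.Generic;
using System.Threading;

class MainLoop
{
    readonly Queue<Action> pending = new Queue<Action> ();
    readonly object sync = new object ();

    public MainLoop ()
    {
        Thread worker = new Thread (Run);
        worker.IsBackground = true;
        worker.Start ();
    }

    // Callers never touch shared state directly; they queue a delegate instead
    public void Queue (Action task)
    {
        lock (sync) {
            pending.Enqueue (task);
            Monitor.Pulse (sync);
        }
    }

    void Run ()
    {
        while (true) {
            Action task;
            lock (sync) {
                while (pending.Count == 0)
                    Monitor.Wait (sync);
                task = pending.Dequeue ();
            }
            task ();    // executed one at a time, outside the lock
        }
    }
}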
Abort long connection attempts
Sometimes an operating system might wait an incredibly long time before aborting a connection attempt. This meant that if MonoTorrent tried to connect to a peer that was no longer available, the OS could take up to 150 seconds to abort the attempt. The worst-case scenario is that the first 5 peers you connect to all take 150 seconds to abort, and it looks like MonoTorrent is doing nothing. Now MonoTorrent hard-aborts a connection attempt if it takes more than 10 seconds.
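The technique is nothing fancy. Roughly (a simplified sketch, not the actual code): start an asynchronous connect and forcibly close the socket if it hasn't completed within 10 seconds, instead of waiting on the OS.

using System;
using System.Net;
using System.Net.Sockets;

static class ConnectWithTimeout
{
    public static Socket Connect (IPEndPoint endpoint)
    {
        Socket socket = new Socket (AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        IAsyncResult result = socket.BeginConnect (endpoint, null, null);

        // Wait at most 10 seconds; closing the socket aborts the pending connect
        if (!result.AsyncWaitHandle.WaitOne (TimeSpan.FromSeconds (10), false)) {
            socket.Close ();
            throw new SocketException ((int) SocketError.TimedOut);
        }

        socket.EndConnect (result);
        return socket;
    }
}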
Streaming torrents ahoy!
Two guys, Karthik Kailish and David Sanghera, have created a new way to download a torrent with MonoTorrent. Generally speaking, the rarest piece of a torrent is downloaded first, then the second rarest, and so on. This new code allows you to specify a range of bytes as high, medium or low priority; within those ranges the rarest-first algorithm still applies. For example, if you are writing a video player and want to start playback from byte 1000, you can tell MonoTorrent that the range from 1000 -> 5000 is important, so those bytes are downloaded first, which lets you start playback as soon as enough of that high-priority data has arrived.
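A back-of-the-envelope sketch of the idea (the names and structure here are mine, not theirs): map each piece to the highest priority of any byte range that overlaps it, then run rarest-first within the most urgent band.

enum Priority { Low, Medium, High }

struct ByteRange
{
    public long Start;
    public long End;
    public Priority Priority;
}

static class PriorityPicker
{
    // Decide how urgent a given piece is from the user-supplied byte ranges
    public static Priority GetPriority (int pieceIndex, int pieceLength, ByteRange[] ranges)
    {
        long pieceStart = (long) pieceIndex * pieceLength;
        long pieceEnd = pieceStart + pieceLength - 1;

        Priority best = Priority.Low;
        foreach (ByteRange range in ranges)
            if (range.Start <= pieceEnd && range.End >= pieceStart && range.Priority > best)
                best = range.Priority;
        return best;
    }
}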
Unit Tests
The number of tests covering MonoTorrent has doubled over the summer, from 55 up to 111. Every test makes the likelihood of accidentally introducing a bug less and less. I like tests ;)
Banshee Plugin
Finally, while not quite related to MonoTorrent itself: a plugin for Banshee has been created which allows the downloading of torrent-based podcasts. It's still a work in progress, but hopefully it can be cleaned up and completed pretty soon.
So with all these changes and features, I'm hoping to push out the next release of MonoTorrent by the end of September. This release is unlikely to include DHT, but I hope to have a second release shortly afterwards which will include it.
Anyway, I'm off to pack my bags now so I'm ready to head to Ha Long City at 7am. I'm enjoying my last week in Viet Nam at the moment. It's been a blast, though sometimes I wonder if people deliberately decide not to understand what I say just because I'm mispronouncing it slightly. 'Ho Chi Minh' isn't *that* hard to understand, is it?