Monday, January 28, 2008

MonoTorrent 0.20 has just been shoved unwillingly out of its cosy spot in SVN into full public view. The last time I tagged and released was way back on July 4th. Far too long ago! Full details of the release can be read here. Download links are here as well.

Here's the changelog:
Client
Features
Linear piece picking possible
- This mode of operation chooses pieces towards the start of the file where possible. This should not be a publicly settable option in a GUI. It could be automatically set until the first 20 pieces in a torrent are received to allow for previewing. For general usage, this is a very inefficient way to download.
Full choke/unchoke algorithm implemented (Andy Henderson)
- Full tit-for-tat algorithm implemented. This improves download rates and upload rates and is pretty slick.
Sub-optimal implementation of code used to connect to peers made optimal
- Fixed corner case where a connection wouldn't be cleaned up correctly
- Significantly faster startup performance in cases where remote peers don't support encryption and encryption is enabled
Rate limiting for Disk IO implemented
Configurable amount of open filestreams
Reduced disk thrashing when multiple torrents are hashing simultaneously
Files are not preallocated anymore
TorrentManager will never block when .Start() is called
The Tracker base class made more extensible (Eric Butler)
- An example usage would be if a person had peer details stored in a database. They could implement a 'DataBaseTracker' class, register it with the engine, and MonoTorrent could then announce to that tracker and retrieve peers from the database. Thanks to Eric, this is hugely simplified and is now trivial to implement.
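A very rough sketch of the idea is below. Every name in it - the base class, the method, the Peer type - is invented for illustration; the actual MonoTorrent.Tracker API should be consulted for the real members to override.

```csharp
// Purely illustrative sketch: 'Tracker', 'Peer' and 'GetPeers' are assumed
// names, not the real MonoTorrent.Tracker API.
using System.Collections.Generic;

public class DataBaseTracker : Tracker
{
    protected override IList<Peer> GetPeers(byte[] infoHash)
    {
        List<Peer> peers = new List<Peer>();
        // Query the database for rows matching infoHash and turn each row
        // into a Peer before handing the list back to the engine.
        return peers;
    }
}
```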




Bug Fixes
Piece Requesting
Fixed bug where a piece could be requested twice
When a corrupt piece is received, ensure all contributing peers are marked
Fixed a possible null ref when removing pending piece requests when a connection is closed

Torrent Creator
When creating an (optional) MD5 hash in the torrent creator, open the files in read-only mode (Roger Zander)
Creating torrents no longer spins up an extra thread when using the synchronous method (Eric Butler)
Fixed possible race conditions when creating a torrent using the asynchronous methods
Private key is retained when creating torrents

Misc
Per-file progress is updated correctly.
Fixed detection of when a tracker supports scrape - the check should have been case-insensitive
Added extra check to make sure a scrape request is only performed if the tracker supports scraping
Added extra logic checks to help prevent the loading of incorrect fastresume data
When a peer sends a HaveAll message - it is correctly marked as being a seeder
PieceHashed event fired under all circumstances



Tracker
MonoTorrent.Tracker has had a major rewrite. Highlights include:
- More than a 1000x memory reduction for trackers hosting a large number of torrents. 1000 torrents can be hosted in ~30kB; previously this would have required ~30MB.
- Faster, more efficient announce handling. For large trackers, lookups are significantly faster.
- Vastly simplified announce handling. It is now trivial to write code to handle incoming connections from differing sources. A typical example would be to create an AspNetHandler to handle connections in an ASP.NET project; it requires less than 20 lines of code.
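For illustration, a handler along these lines is all the glue that's needed. The IHttpHandler plumbing below is standard ASP.NET, but 'TrackerEngine' and its 'Handle' method are invented stand-ins for whatever the tracker actually exposes to custom frontends, so treat this as a sketch of the shape rather than working code.

```csharp
// Hedged sketch: the IHttpHandler parts are real ASP.NET; 'TrackerEngine'
// and 'Handle' are placeholder names for the tracker-side API.
using System.Web;

public class AspNetTrackerHandler : IHttpHandler
{
    // Hypothetical tracker instance shared across requests.
    static readonly TrackerEngine tracker = new TrackerEngine();

    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // Hand the announce/scrape query string to the tracker and write
        // the bencoded response it produces back to the client.
        byte[] response = tracker.Handle(context.Request.QueryString);
        context.Response.ContentType = "text/plain";
        context.Response.OutputStream.Write(response, 0, response.Length);
    }
}
```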



It should be fairly solid, with a lot of cool feature enhancements under the hood. There are a few cool things which I managed to implement this time round, like rate limiting disk IO, not pre-allocating 100% of the file on startup, and being able to properly handle a torrent with 100,000 files in it. Good stuff.

Plans for the future are pretty cool as well. I'll blog more about that later though.

Saturday, January 26, 2008

Today I'm going to talk about optimisation a little. This is mostly in the MonoTorrent Tracker context, but some of the ideas still apply to other situations.


Sometimes you hear people talking about how they want to optimise their code to make it faster, and they ask questions like 'What's the fastest way to multiply by 2: bitshift or regular multiplication?', or 'Should I null out objects the very second I'm finished using them, or just wait for them to fall out of scope?'. These kinds of questions are the epitome of premature optimisation. These kinds of optimisations won't make your application faster or better.

I had done a lot of work on the Tracker code (the 'server' portion of the bittorrent specification) recently. I had precomputed values which are used regularly, I had optimised the hashcodes used in my dictionary lookups, and I had significantly reduced the amount of data that needs to be kept in memory. I wanted to see what effect all this had on the actual running of the tracker. This was the first time I had run a benchmark on the tracker.

I went on the net, found a big tracker and checked its stats. Using these, I decided that this benchmark was representative of a heavy real-world load:


1) Load 2000 torrents into the engine, each of which contains 1000 peers.
2) Hammer the server with 1000 requests a second, each time choosing a random torrent and a random peer from the list and making a fake announce from that peer.
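A rough sketch of what that kind of load generator could look like is below, assuming a tracker listening on a local HTTP announce URL. The URL, the fake peer details and the request format are placeholders rather than what my benchmark actually did.

```csharp
using System;
using System.Net;
using System.Threading;

// Rough load-generator sketch. The announce URL and query parameters are
// placeholders; a real benchmark would also need properly encoded 20-byte
// info_hash values rather than the fake hex strings used here.
class TrackerBenchmark
{
    const string AnnounceUrl = "http://localhost:10000/announce";
    static readonly Random random = new Random();

    static void Main()
    {
        // 2000 fake torrents; each request pretends to be one of 1000 peers.
        string[] infoHashes = new string[2000];
        for (int i = 0; i < infoHashes.Length; i++)
            infoHashes[i] = i.ToString("x40"); // 40 hex chars, zero padded

        while (true)
        {
            string hash = infoHashes[random.Next(infoHashes.Length)];
            int peer = random.Next(1000);
            string url = string.Format(
                "{0}?info_hash={1}&peer_id=fake-peer-{2:D6}&port=6881&uploaded=0&downloaded=0&left=0",
                AnnounceUrl, hash, peer);

            try
            {
                // Fire the announce and throw away the response.
                WebRequest.Create(url).GetResponse().Close();
            }
            catch (WebException)
            {
                // Failures don't matter here; the point is the load on the tracker.
            }

            Thread.Sleep(1); // roughly aims at ~1000 requests per second
        }
    }
}
```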


So, once that was written, I fired up the tracker and ran the benchmark. My system locked up and I was forced to hard-reboot. What had gone wrong? I started the benchmark again, but monitored memory and CPU usage carefully. I was surprised to find that memory usage was rocketing, which is what caused the massive slowdown of my system. I couldn't understand why. I did a few quick calculations to figure out how much memory I'd expect the tracker to use, and they put final memory usage at far less than 300MB. I quickly whipped out my allocation profiler and began optimising. Here are the 'issues' I fixed:

1) The objects I was using as the 'key' in a dictionary were being recreated every time I used them. Typically this meant that for every request to the tracker, at least a dozen complex objects were created and garbage collected needlessly. In this kind of scenario, the objects should be declared 'static readonly' and reused. I implemented this change.
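A minimal illustration of the pattern (the key type here is invented for the example; the real keys were the tracker's internal lookup objects):

```csharp
using System.Collections.Generic;

// Invented key type for illustration; the real code used the tracker's
// internal lookup keys.
sealed class RequestKey
{
    readonly string name;
    public RequestKey(string name) { this.name = name; }
    public override int GetHashCode() { return name.GetHashCode(); }
    public override bool Equals(object obj)
    {
        RequestKey other = obj as RequestKey;
        return other != null && other.name == this.name;
    }
}

class Lookups
{
    // Wasteful: a new key object is allocated (and later collected)
    // on every single request.
    public static bool ContainsSlow(Dictionary<RequestKey, object> table)
    {
        return table.ContainsKey(new RequestKey("info_hash"));
    }

    // Better: the key never changes, so allocate it once and reuse it.
    static readonly RequestKey InfoHashKey = new RequestKey("info_hash");
    public static bool ContainsFast(Dictionary<RequestKey, object> table)
    {
        return table.ContainsKey(InfoHashKey);
    }
}
```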

The benchmark still couldn't run.

2) I decided that the next problem was that I was pre-generating two byte[] for each peer when they were added to the server. This was so that a request could be fulfilled by simply copying the pregenerated byte[]. I changed this to generate the byte[] at request time rather than storing it in memory. I expected this to fix the issue.
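Roughly speaking, the change was from caching the announce response data per peer to building it on demand. The sketch below is simplified (the real code cached two such arrays per peer and the real peer class holds far more state); the compact format itself is just the standard 4-byte IP plus 2-byte big-endian port.

```csharp
using System;
using System.Net;

// Simplified illustration of the trade-off.
class Peer
{
    readonly IPAddress address;
    readonly ushort port;

    public Peer(IPAddress address, ushort port)
    {
        this.address = address;
        this.port = port;
    }

    // Old approach (sketch): precompute these bytes once and keep them in
    // memory forever, multiplied by every peer on every torrent.
    //
    // New approach: build the compact entry only when an announce response
    // actually needs it.
    public byte[] ToCompactEntry()
    {
        byte[] entry = new byte[6];
        Buffer.BlockCopy(address.GetAddressBytes(), 0, entry, 0, 4);
        entry[4] = (byte)(port >> 8);   // port, big-endian
        entry[5] = (byte)(port & 0xff);
        return entry;
    }
}
```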

The benchmark still couldn't run after this change.


3) Finally, I noticed there were a huge number of hashtable-related objects being retained in memory. This was a bit weird; there shouldn't have been that many around. A few minutes of checking the code made me realise that the probable cause was keeping a NameValueCollection object in memory for each peer. I rewrote the peer class to extract the necessary information from the collection and then dump it, rather than holding a reference to it.
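In other words, something along these lines. It's a simplified sketch; the parameter names are just the standard announce query keys, and the real class extracts more than this.

```csharp
using System.Collections.Specialized;
using System.Net;

// Simplified sketch of the fix: copy out the handful of values the tracker
// needs and let the (hashtable-backed) NameValueCollection be collected,
// instead of keeping one collection alive per peer.
class TrackedPeer
{
    public readonly string PeerId;
    public readonly IPAddress Address;
    public readonly int Port;
    public readonly long Left;

    public TrackedPeer(NameValueCollection announce, IPAddress address)
    {
        PeerId = announce["peer_id"];
        Port = int.Parse(announce["port"]);
        Left = long.Parse(announce["left"]);
        Address = address;
        // No reference to 'announce' is stored, so it can be garbage
        // collected as soon as the request has been handled.
    }
}
```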

The benchmark could run!

Now, the memory improvements were gigantic. Previously I had stats like this:
Active Torrents: 500
Active Peers per Torrent: 500
Memory: 350MB

Now I had:
Active Torrents: 500
Active Peers per Torrent: 500
Memory: 40MB

Active Torrents: 2000
Active Peers per Torrent: 1000
Memory: 140MB

There is no way in hell I'd ever have found the cause of the issue unless I had run a profiler. So, for anyone who is trying to make their code run fast and efficiently: you need a profiler. You can't get away without it.

Thursday, January 10, 2008

As you've probably heard by now, Aaron has just released a new version of Banshee. Good stuff! As part of the blogpost detailing the changes and features in the new version, he says:

Alan McGovern has nearly completely rewritten our MTP support and we are close to enabling it by default. The next release should have solid MTP support, and we hope to roll 0.13.3 within the next two weeks. The new MTP support uses libmtp instead of libgphoto2. This decision was made for a number of reasons, though I am really not informed enough to convey them properly.

Well, I never detailed the reasons for the switch in my previous blogpost, so I may as well do it now. I'll also mention the current issues with libmtp, for those of you interested in what those issues are.

Reasons for the switch:
The primary reason is that to use libgphoto as the MTP backend, you need SVN head of libgphoto and SVN head of libgphoto-sharp. This is actually a pretty major issue. If Banshee wanted to ship MTP support, it would have to ship its own version of libgphoto and libgphoto-sharp. The other alternative was to enable MTP support by default, but the end user wouldn't be able to use it until they compiled and installed libgphoto themselves or their distro packaged the required version of libgphoto. This version is likely to be released in the next month or two (I believe), but could take several months to propagate through the various distros.

With libmtp, Banshee can work with version 0.2.0 or higher, which means anything released after Aug 4th 2007. This is a much more workable solution.


Secondly, API: the libmtp API is aimed at mp3 players. Banshee's API is also aimed at mp3 players, which makes it significantly easier to map the Banshee API to the libmtp API. Several tasks, such as getting track listings, uploading tracks, and retrieving tracks, are much simpler in libmtp.

Thirdly, with libgphoto, downloading or uploading a track requires copying the entire file into memory in C#, then pushing that into libgphoto (which duplicates the memory usage), and then the data is pushed to the device. There is an alternative method whereby I create a unix file descriptor which is then passed into libgphoto and the file is read/written through that. However, this is not ideal, as there's a good chance that half-written temp files can be left over in the event of a crash.

With libmtp I can pass in a file path, and libmtp will send the file to the device directly from that path or fetch it from the device directly to that path. This means the memory copying is completely removed, resulting in much better performance.
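For example, fetching a track from the device comes down to a single call that takes a destination path. Below is a rough P/Invoke sketch of that call; the native signature is written from memory of the libmtp 0.2-era headers, so treat it as an assumption and check libmtp.h (and your distro's library name) before relying on it.

```csharp
using System;
using System.Runtime.InteropServices;

// Hedged sketch of binding libmtp's path-based track download. The native
// signature in the comment should be verified against libmtp.h; a real
// binding would also need a dllmap for the library name and UTF-8
// marshalling for the path.
static class MtpNative
{
    // int LIBMTP_Get_Track_To_File(LIBMTP_mtpdevice_t *device, uint32_t id,
    //                              const char *path,
    //                              LIBMTP_progressfunc_t callback,
    //                              const void *data);
    [DllImport("libmtp")]
    public static extern int LIBMTP_Get_Track_To_File(
        IntPtr device, uint trackId, string path, IntPtr callback, IntPtr data);
}

class Example
{
    // 'device' would come from libmtp's device-detection calls; libmtp
    // streams the track straight to 'path' with no managed byte[] involved.
    static void SaveTrack(IntPtr device, uint trackId, string path)
    {
        if (MtpNative.LIBMTP_Get_Track_To_File(device, trackId, path, IntPtr.Zero, IntPtr.Zero) != 0)
            throw new Exception("libmtp failed to copy the track to " + path);
    }
}
```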

Fourthly, whilst libgphoto can (technically speaking) run on Windows, the necessary code hasn't been written to allow this to happen. libmtp can be compiled and run on Windows (or so the documentation says). As there is work to make Banshee run on Windows, this is an advantage.

Finally, playlists. I'm still unsure how to create playlists using libgphoto. It'd require a fair bit of messing around and experimenting for me to figure out exactly how it should work. With libmtp, this is trivial to do.


However, libmtp has some pretty big issues at the moment, which only really came to light a few hours before the release.

If you connect an MTP device and Banshee loads it up, connecting a second DAP (of any kind) will make libmtp choke. Once there's an active device, any call into libmtp to list available devices throws a (non-fatal) error. This means that if you plug in a second device, you get an annoying error message popping up. And if you own two MTP devices and were hoping to copy from one to the other, that's currently impossible.

There was also an issue linking a device reported by libmtp to a device reported by HAL. There is no sure-fire way of doing this using the libmtp API; under libgphoto this was possible. There are a few possible workarounds, but none of them are ideal.

Sunday, January 06, 2008

Banshee now has support for MTP devices in trunk. It's a long and torturous story, but here's the short of it:

Way back when, the work on implementing MTP support was started by a guy called Patrick. At the time, there were two options:
1) libgphoto
2) libmtp

libmtp was still in its early stages of development and wasn't mature enough to be a viable option, so libgphoto was chosen, although it wasn't that much better. At the time, libgphoto didn't have metadata reading/writing implemented and didn't show any stats on the different filesystems in the device. Note: libmtp was no better at the time. So, a C# binding to libgphoto was started and basic support for syncing was eventually pushed into Banshee.

Fast forward about 18 months. libgphoto has matured significantly in its support for mp3 players. Track metadata has full read/write support, you can get stats on the filesystems, and patches and optimisations have been pushed in so that all the basic operations are fast. However, there are still issues with long filename support - there are hardcoded limits in libgphoto itself, which means that if it encounters a path or filename longer than the limit, it just bails out. For mp3 players this was a serious issue, as these limits were quite frequently reached. For instance, if you were a fan of Sufjan Stevens, you were out of luck. His incredibly long track titles, like 'Out of Egypt, Into the Great Laugh of Mankind, and I Shake the Dirt From My Sandals as I Run', really throw a spanner in the works.

Around this time, I stepped in to complete the binding and finish Banshee's MTP support. I pestered poor Marcus for quite a while until I got various fixes/updates into libgphoto, such as the change to allow unlimited filename and path lengths, and the removal of 'long' types from the public API (a 'long' can be 32-bit or 64-bit depending on architecture, which leads to nasty complications when binding it in C#). I also completed the initial revision of the libgphoto-sharp API restructuring. The entire endeavour probably took about 2 months from start to finish.
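To show why the 'long' issue matters, here's a tiny sketch against a made-up native function (the library and function names are invented for illustration, not anything in libgphoto):

```csharp
using System;
using System.Runtime.InteropServices;

// In C, 'long' is 32 bits on 32-bit Linux but 64 bits on 64-bit Linux,
// whereas C#'s 'long' is always 64 bits, so a direct mapping is only
// correct on one of the two architectures.
static class LongBindingProblem
{
    // Wrong on 32-bit systems: the managed Int64 doesn't match the
    // 32-bit native 'long'.
    // [DllImport("example")] public static extern long example_get_size(IntPtr handle);

    // One workaround: IntPtr is pointer-sized, which matches 'long' on the
    // common Linux ABIs, at the cost of an awkward managed API.
    [DllImport("example")]
    public static extern IntPtr example_get_size(IntPtr handle);
}
```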

However, the problem now was that in order to have MTP support in Banshee, you needed SVN head of libgphoto, SVN head of libgphoto2-sharp, and a pot of luck that you could get it installed without screwing up your packaged version. A new release of libgphoto was several months away, and it would probably be a further few months until it managed to trickle down into the various distros.

It was then that I was reminded by Aaron that libmtp still existed and was quite mature. A quick look at the API showed that it was tailored specifically for the kind of tasks that Banshee was trying to do. Better yet, it had a faster release cycle, it didn't require SVN head of anything in order to get all the features that were required, and it was already available in nearly every distro.

I spent about 4 days writing a brand new C# wrapper for libmtp and then replaced the libgphoto-sharp code in Banshee with the new libmtp-sharp code. While there are things in libmtp that are missing compared to libgphoto, the simplicity of the new API makes it trivial to implement features which previously would have been quite complicated. For example, playlist support under libmtp is trivial to implement, yet under libgphoto I'm not 100% sure how I'd go about it!

What I regret most of all out of this is that I put Marcus through so much hassle for minimal benefit, as Banshee is no longer using libgphoto. While I do feel bad about putting so much effort into the libgphoto-sharp binding and then never using it myself, it was still the best decision from Banshee's point of view.

So, if anyone out there has an MTP device and wants to make sure that their device works with Banshee before the release (so I can fix any bugs should there be any), check out the guide here. Post any bug reports on http://bugzilla.gnome.org under the DAAP component in Banshee.
