This is just a quick follow-up to my last post. I decided to run a few benchmarks to see how Transmission compared to Monsoon (the GTK# GUI for the MonoTorrent library). I'd heard about Transmission before, but I'd never used it. I was pretty shocked by the results. Here's a quick summary of what I found:
1) Memory usage.
Transmission used less memory than MonoTorrent. That came as no surprise to me. No matter how efficiently I code, I can never be as efficient as an app written in C/C++, mostly because in a .NET-based app the .NET runtime/JIT must be loaded, which consumes a few MB of memory. Percentage-wise, yes, the difference is huge, but if you look at overall memory usage it's not massive.
MonoTorrent: 35 MB resident / 20 MB shared
Transmission: 20 MB resident / 13 MB shared
These figures were taken after extensively using each GUI, opening menus and flicking around the pages.
2) Hashing performance
I suppose another important metric is how long it takes to hash a complete file.
MonoTorrent: 95 seconds
Transmission: 85 seconds
This difference is fairly negligible. A full hash will rarely be performed, but I suppose it was worth measuring. One optimisation I could make in MonoTorrent which would reduce that gap slightly would be to read the next chunk of data off disk asynchronously while hashing the current chunk. At the moment it's all sequential, but it could easily enough be made parallel. It shouldn't make a huge difference though.
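For illustration, here's a rough sketch of that kind of read-ahead: hash the current block while the read of the next block is already in flight. This is not MonoTorrent's actual code; the block size, names, and the Task-based APIs are just for the example.

using System;
using System.IO;
using System.Security.Cryptography;
using System.Threading.Tasks;

class PipelinedHasher
{
    // Hash a file while prefetching the next block from disk in the background.
    public static byte[] HashFile(string path, int blockSize = 1024 * 1024)
    {
        using (SHA1 sha1 = SHA1.Create())
        using (FileStream stream = new FileStream(path, FileMode.Open, FileAccess.Read,
                                                  FileShare.Read, blockSize, FileOptions.Asynchronous))
        {
            byte[] current = new byte[blockSize];
            byte[] next = new byte[blockSize];

            int bytesRead = stream.Read(current, 0, blockSize);
            while (bytesRead > 0)
            {
                // Start reading the next block before hashing the one we already have.
                Task<int> prefetch = stream.ReadAsync(next, 0, blockSize);

                sha1.TransformBlock(current, 0, bytesRead, null, 0);

                bytesRead = prefetch.Result;   // wait for the prefetch to finish

                // Swap buffers so 'current' always holds the freshly read data.
                byte[] tmp = current;
                current = next;
                next = tmp;
            }

            sha1.TransformFinalBlock(Array.Empty<byte>(), 0, 0);
            return sha1.Hash;
        }
    }
}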
3) Download speeds
This would be the most important metric. This is where the biggest surprise came.
MonoTorrent: after 15 seconds it was at 400 kB/sec; after 30 seconds it had stabilised at 550-600 kB/sec (maxing out my connection) and was connected to 50 peers, the maximum allowed by my settings.
Transmission: after 15 minutes it was still at less than 50 kB/sec and still only connected to 6 peers.
What am I doing wrong with Transmission to make it so slow? It's not NAT (even though Transmission's UPnP support cannot detect my UPnP-enabled router), because I manually forwarded the port in the end. I was using svn head of both MonoTorrent (r97353) and Transmission (r5227) when I ran this quick test.
EDIT: Just as I finished this, Transmission managed to connect to 3 additional peers, and one of them had a massive upload capacity which let Transmission reach ~480 kB/sec. Still, why did that take so long? These results were consistent every time I started/stopped both Transmission and Monsoon. Monsoon consistently maxed out my connection quickly, whereas Transmission consistently took forever to even break 40 kB/sec.
UPDATE: I just want to add that I tested using the ubuntu-7.10-desktop-i386.iso torrent on SUSE 10.2.
5 comments:
Comparing two torrent clients by measuring the download speed of one download is like comparing a Ferrari to a Porsche by measuring the time they take to cross a big city. I dare say the results are pretty much useless. :)
In general I'd agree, but not in this case. I specifically chose that Ubuntu torrent because it has such a high seed/peer ratio. I can max out my 6 Mbit connection every single time I benchmark with that torrent.
The fact that Transmission could not do that, even when given more than 10 times as long as it took Monsoon to max out my connection, is an issue.
MonoTorrent had connected to 30 peers within about 15 seconds, 50 peers within about 30. Transmission had trouble breaking 8 peers.
I have no idea what logic Transmission uses to connect to peers, but Monsoon is limited to 5 simultaneous connection attempts. It will never attempt to connect to more than 5 people at one time. Even with that, Monsoon was much faster.
Finally, these results were very, very repeatable. Every time I started up the clients, over 2 days, I saw the same behaviour: Transmission was consistently slow.
I'd recommend you check out the code for both Transmission and Monsoon that I used and try it yourself. You'll get exactly the same results.
Hello again, Alan,
One possibility to explain this difference is that Transmission >= 0.72 re-enabled outbound connection throttling to around 3 new connections per second and fewer than 10 half-open connections at any given time.
Before, this limit was not implemented and it caused many cheap NAT SOHO routers to melt down and bring an entire network to a standstill as the client tried to map more connections.
If Monsoon doesn't do something like this by default, it should at least be configurable to support this.
Now of course this might be a case where Transmission could benefit from more optimization (e.g. give up after a few seconds if it looks like a peer isn't responding and go try another one), but clearly letting the reins loose entirely has its downsides.
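For the curious, the kind of throttle described above works roughly like this (just an illustration of the idea; the names and constants here are not Transmission's actual code):

using System;
using System.Collections.Generic;

class OutboundConnectionThrottle
{
    // Illustrative limits, matching the figures mentioned above.
    const int MaxNewPerSecond = 3;
    const int MaxHalfOpen = 10;

    int attemptsThisSecond;
    DateTime windowStart = DateTime.UtcNow;
    readonly HashSet<string> halfOpen = new HashSet<string>();

    // Ask permission before starting a new outbound connection attempt.
    public bool TryBeginConnect(string peerAddress)
    {
        DateTime now = DateTime.UtcNow;
        if ((now - windowStart).TotalSeconds >= 1)
        {
            // New one-second window: reset the per-second counter.
            windowStart = now;
            attemptsThisSecond = 0;
        }

        if (attemptsThisSecond >= MaxNewPerSecond || halfOpen.Count >= MaxHalfOpen)
            return false; // throttled: the caller should retry on a later tick

        attemptsThisSecond++;
        halfOpen.Add(peerAddress);
        return true;
    }

    // Call when the attempt completes, whether it succeeded or failed.
    public void EndConnect(string peerAddress)
    {
        halfOpen.Remove(peerAddress);
    }
}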
Oops obviously I suck at reading blog replies before hitting reply, but other than this possibility I don't know why Transmission would be slower. I'd be interested in finding out, however. Perhaps it's the way Transmission picks new peers that's consistently less intelligent than what Monsoon does.
P.S. I'm impressed by the resource usage and performance stats of Monsoon -- it shows great promise!
The full algorithm for MonoTorrent is as follows:
Every 'tick', attempt to connect to a peer.
Every time a connection attempt finishes (successfully or unsuccessfully), try to connect to another peer.
'Connecting to a peer' will abort unless these conditions are true:
1) There are fewer than 5 existing half-open connections.
2) We haven't reached the maximum total connections overall.
3) We haven't reached the maximum connections for that torrent.
That's it.
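In code, that boils down to something like the sketch below. It's a simplified illustration of the logic just described, not MonoTorrent's actual source; the types and names are hypothetical.

using System;
using System.Collections.Generic;

class Peer
{
    public string Address;
}

class TorrentState
{
    public int OpenConnections;
    public int MaxConnections = 50;
    public Queue<Peer> AvailablePeers = new Queue<Peer>();
}

class ConnectionManager
{
    const int MaxHalfOpen = 5;   // never more than 5 connection attempts in flight

    readonly int maxTotalConnections;
    int halfOpen;
    int openConnections;

    public ConnectionManager(int maxTotalConnections)
    {
        this.maxTotalConnections = maxTotalConnections;
    }

    // Called on every engine 'tick' and again each time a connection attempt finishes.
    public void TryConnect(TorrentState torrent)
    {
        // Abort unless all three conditions listed above hold.
        if (halfOpen >= MaxHalfOpen) return;
        if (openConnections >= maxTotalConnections) return;
        if (torrent.OpenConnections >= torrent.MaxConnections) return;
        if (torrent.AvailablePeers.Count == 0) return;

        Peer peer = torrent.AvailablePeers.Dequeue();
        halfOpen++;
        BeginConnect(peer, succeeded =>
        {
            // Whether the attempt worked or not, free the half-open slot
            // and immediately try to connect to another peer.
            halfOpen--;
            if (succeeded)
            {
                openConnections++;
                torrent.OpenConnections++;
            }
            TryConnect(torrent);
        });
    }

    // Stand-in for the real asynchronous socket connect.
    void BeginConnect(Peer peer, Action<bool> onComplete)
    {
        onComplete(true); // a real implementation would connect asynchronously
    }
}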
There is no per-second throttling, but there is a limit on the number of half-open connections. Even allowing for a limit of 3 connection attempts per second, Transmission should have been able to connect to more than 8 people after 10 minutes: at 3 attempts per second, that's on the order of 1,800 attempts.
It might be worth making Transmission log the reason why a connection is closed. It could be a buglet anywhere in the codebase which is making Transmission prematurely close connections.