Wednesday, February 28, 2007
Just to respond publicly to a few questions about MonoTorrent that have been asked over and over:
1) MonoTorrent does have a GUI! In fact, it has two: one is a console GUI, the other is a GUI created using GTK#. This is *not* the same as WinForms in .NET; it's an alternative framework for creating GUIs. While I consider the GUI very limited (which it is), it is usable. I would much prefer a WinForms-based GUI so the application would be 100% portable .NET with no other dependencies, but I don't have the time to finish my half-written one.
2) MonoTorrent does use less RAM than Azureus. Considerably less. It does use more RAM than uTorrent, but not by much: if you disable the disk cache in uTorrent completely, MonoTorrent uses only 4-5 megabytes more RAM than uTorrent. If you leave the disk cache enabled in uTorrent, MonoTorrent actually uses slightly less.
3) MonoTorrent does not support encryption. I looked at the specs a while ago, found them confusing, created some initial support, but never got back to it. Once again, time is a factor: I just don't have enough.
4) MonoTorrent is not a "Linux app" or a "Microsoft application". It's a cross-platform BitTorrent library (there's a minimal usage sketch after this list).
5) Yes, MonoTorrent can be used with whatever tracker you want. Private/public/personal, I don't care. It works. (Just so long as it's not a UDP-based tracker ;) ).
6) MonoTorrent is not related to the disease known as mono (also known as glandular fever outside America) ;)
7) Yes, I do like people to contribute code. But please talk to me first and keep in touch while writing it. The last thing I want is to say "no" after you've spent three weeks writing a bucketload of code for MonoTorrent because I've already written it, or it's been done the wrong way, or whatever.
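For anyone curious what driving the library looks like, here's a rough sketch. The class names here (ClientEngine, Torrent.Load, TorrentManager, engine.Register) are written from memory, so treat it as illustrative rather than copy-paste ready; check the actual source for the real signatures.

```csharp
// Rough, illustrative sketch only -- check the real source for exact signatures.
using System;
using MonoTorrent.Client;
using MonoTorrent.Common;

class QuickStart
{
    static void Main()
    {
        // The engine drives all registered torrents.
        ClientEngine engine = new ClientEngine(new EngineSettings());

        // Load a .torrent file; whatever tracker it lists is used as-is.
        Torrent torrent = Torrent.Load("example.torrent");

        // Tie the torrent to a download directory and start it.
        TorrentManager manager = new TorrentManager(torrent, "downloads", new TorrentSettings());
        engine.Register(manager);
        manager.Start();

        Console.WriteLine("Downloading: " + torrent.Name);
        Console.ReadLine(); // keep the process alive while it downloads
    }
}
```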
Monday, February 26, 2007
BitTorrent Distributed Hash Table (DHT):
I got bored over the weekend and decided to take a break from hacking on the main MonoTorrent libraries and Mono.XNA, and instead work a little on some DHT support for MonoTorrent.
Unfortunately the only documentation is this rather undetailed document. There are quite a few places where important details are left undefined, and other things aren't mentioned at all or are only defined in vague terms.
I'm keeping notes on these issues in the hope of putting together a more concise and detailed specification, so that other developers attempting to implement DHT have a better spec to work from. It might also help with removing bugs in existing implementations.
For example, one detail that isn't mentioned in the spec is that you should never send out the details of a node (another BitTorrent peer) unless you have checked that the node is still active. Some implementations don't do this, so it's possible to keep receiving DHT requests for a torrent you stopped several days previously.
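As a sketch of what that check can look like (the class names are my own invention, not from any real implementation; the 15-minute freshness window is the one the spec itself suggests for "good" nodes):

```csharp
// Illustrative sketch: track when each node last proved it was alive,
// and only hand out nodes that are still "good" (the spec suggests a
// 15 minute window). All names here are made up for illustration.
using System;
using System.Collections.Generic;

class DhtNode
{
    public byte[] Id;
    public DateTime LastSeen;   // updated whenever the node answers a query

    public bool IsGood
    {
        get { return (DateTime.Now - LastSeen) < TimeSpan.FromMinutes(15); }
    }
}

class RoutingTable
{
    private List<DhtNode> nodes = new List<DhtNode>();

    // Only nodes known to be alive are returned in find_node / get_peers
    // responses; stale entries should be pinged (or dropped) instead of
    // being advertised to the rest of the network.
    public List<DhtNode> NodesForResponse()
    {
        List<DhtNode> good = new List<DhtNode>();
        foreach (DhtNode node in nodes)
            if (node.IsGood)
                good.Add(node);
        return good;
    }
}
```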
Finally, I've been bugging the developer of libtorrent for tips on my own implementation, so thanks to him for answering my questions! It's appreciated.
Saturday, February 17, 2007
MonoTorrent news:
MonoTorrent has become increasingly stable as time has gone on. I found the cause of a few long-running bugs, which have now been crushed into tiny tiny pieces. There shouldn't be any more ArgumentExceptions thrown due to passing the wrong IAsyncResult into an End*** method; that was caused by me forgetting to clear out the download/upload queues when a peer disconnects.
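The fix boils down to something like this (an illustrative sketch, far simpler than the real code): empty a peer's queues the instant it disconnects, so a late async callback can never be matched against an operation belonging to a different connection.

```csharp
// Illustrative sketch only -- the real MonoTorrent code is more involved.
using System.Collections.Generic;

class PeerConnection
{
    private readonly object queueLock = new object();
    private readonly Queue<object> sendQueue = new Queue<object>();
    private readonly Queue<object> receiveQueue = new Queue<object>();

    public void Disconnect()
    {
        lock (queueLock)
        {
            sendQueue.Clear();     // abandon queued uploads
            receiveQueue.Clear();  // abandon queued downloads
            // Any in-flight Begin* call now completes against empty queues
            // and is discarded instead of being paired with the wrong
            // IAsyncResult.
        }
    }
}
```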
UPnP support has been enabled in MonoTorrent using Mono.Nat, so all you people with UPnP routers no longer have to worry about manually creating the port mapping in your router. It'll all be done automagically (all going well ;) ).
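Using Mono.Nat for this looks roughly like the sketch below. I'm writing the API (NatUtility.DeviceFound, StartDiscovery, CreatePortMap, Mapping) from memory, so the exact names and signatures may differ from the shipped version.

```csharp
// Rough sketch from memory of the Mono.Nat API -- names may not match
// the shipped version exactly.
using System;
using Mono.Nat;

class PortForwarder
{
    static void Main()
    {
        NatUtility.DeviceFound += delegate(object sender, DeviceEventArgs e)
        {
            // Ask the router to forward our listen port to this machine.
            e.Device.CreatePortMap(new Mapping(Protocol.Tcp, 6881, 6881));
            Console.WriteLine("Port 6881 mapped via UPnP");
        };

        NatUtility.StartDiscovery();
        Console.ReadLine(); // keep the process alive while discovery runs
    }
}
```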
Disk writes are now fully asynchronous, and download speed is automatically throttled if you are downloading faster than your hard disk can write. So you won't ever get hundreds of megabytes of RAM in use and 100% CPU usage when exceeding your write speed.
Upload and download speed calculations have been improved drastically (ish) for torrents. What I did before was calculate each individual peer's upload and download speed, then sum those across all connected peers to get a torrent's overall rate.
While this worked, it meant that any error in each individual peer's calculation was magnified, resulting in a fairly inaccurate overall download speed in certain situations. The quick solution was to give the Torrent its own dedicated ConnectionManager.
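The underlying idea, sketched loosely with made-up names: count the torrent's transferred bytes in one place and derive the rate from that single counter, instead of summing many per-peer estimates whose individual errors accumulate.

```csharp
// Illustrative sketch: one counter for the whole torrent. Every block
// adds to the same total and the rate is derived from that total, so
// per-peer estimation errors never get summed together.
using System;
using System.Threading;

class RateMonitor
{
    private long totalBytes;
    private long lastTotal;
    private DateTime lastTick = DateTime.Now;
    private double rate; // bytes per second

    // Called for every transferred block, from any peer's callback.
    public void BytesTransferred(int count)
    {
        Interlocked.Add(ref totalBytes, count);
    }

    // Called roughly once a second from the engine's main loop.
    public void Tick()
    {
        DateTime now = DateTime.Now;
        double seconds = (now - lastTick).TotalSeconds;
        if (seconds <= 0)
            return;

        long total = Interlocked.Read(ref totalBytes);
        rate = (total - lastTotal) / seconds;
        lastTotal = total;
        lastTick = now;
    }

    public double Rate { get { return rate; } }
}
```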
There've been a few other minor changes here and there, including improved download performance. All in all, some good stuff.
Mono.XNA - the good, the bad and the ugly
There've been a good few changes for Mono.XNA recently. Firstly, we've moved to Google project hosting. The idea of having an integrated issue tracker, an integrated wiki and integrated email alerts was just too tempting to resist. We also now have the ability to give write access to anyone we want, as opposed to having to go through Novell to grant users write access to the Mono SVN, which can take time.
So, the new URL for the SVN/Wiki/Issue Tracker etc is here: http://code.google.com/p/monoxna/
Feel free to drop in and start coding.
As part of the move, we've toughened up the contributing guidelines: code must have tests, must follow the coding guidelines, and so on. Links to the relevant documentation about contributing can be found on our Google mailing list: http://groups.google.com/group/monoxna
Also, a little on my own work on XNA. I "reverse engineered" the .XNB format about 2-3 weeks ago. The only implementation so far is the Texture2DReader I coded up. Texture2D is probably the most basic .XNB type in XNA, so it should serve as a good example of how to write readers for the other types. Unfortunately, even the Texture2D format took quite a while to figure out. For anyone trying to hack together code to decode other types: it isn't easy, and you will spend hours with a hex editor... well, it's quite possible you're better at it than me, but it took me hours :p
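If you're starting down that road, a throwaway hex-dump helper like the one below saves a lot of squinting (this is entirely my own illustration, nothing to do with the Mono.XNA code): dump a compiled .xnb file next to the raw asset it came from and compare the two.

```csharp
// A throwaway hex-dump helper for staring at unknown binary formats --
// purely illustrative, not part of Mono.XNA. Prints offset, hex bytes
// and ASCII so patterns (headers, lengths, pixel data) stand out.
using System;
using System.IO;
using System.Text;

class HexDump
{
    static void Main(string[] args)
    {
        byte[] data = File.ReadAllBytes(args[0]);
        for (int offset = 0; offset < data.Length; offset += 16)
        {
            StringBuilder hex = new StringBuilder();
            StringBuilder ascii = new StringBuilder();
            for (int i = offset; i < offset + 16 && i < data.Length; i++)
            {
                hex.AppendFormat("{0:x2} ", data[i]);
                ascii.Append(data[i] >= 32 && data[i] < 127 ? (char)data[i] : '.');
            }
            Console.WriteLine("{0:x8}  {1,-48} {2}", offset, hex, ascii);
        }
    }
}
```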
Still, that's another step forward. Things are looking bright for XNA. All we need now is a dozen coders to start hacking and planning and coding their way through the classes, and then we can really start getting stuff done!
Saturday, February 03, 2007
Asynchronous Process - Can't live with it, can't live without it
There are often times when you have two tasks running simultaneously at different speeds which depend on each other. For example, imagine transferring a file from computer X to computer Y. The simplistic approach is for computer X to transfer a part of the file to computer Y, wait for computer Y to write the data to disk, then transfer the next part. This is a lockstep, stop-and-wait approach. It's good enough for most purposes, but if there is a large communication delay between the two computers, performance will suffer. However, this method is very stable, as you know exactly how much memory and how much CPU time are needed to transfer and write a single block of data.
A second, pipelined approach is for computer X to pass a part of the file to computer Y and (without waiting for Y to finish writing the data to disk) begin transferring the next part. This can result in a *much* faster transfer rate when there is a large communication delay between the two computers, simply because computer X no longer has to wait for the "I have written the data to disk" reply from computer Y.
However, the problem with the second approach is that if computer X sends 100 blocks of data a second and computer Y can only write 50 blocks a second to disk, we either lose data by dumping it, or we store it all in memory until it can be written, and memory usage climbs dramatically!
So, in order to gain the performance of the second method with the stability of the first, you need some way to send feedback to computer X that it is sending data too fast. This is the problem I've just encountered in MonoTorrent after converting the disk writes to be fully asynchronous to avoid blocking the main downloading thread. When I seed off my local machine, I can't write fast enough, so I end up storing thousands of chunks of data in memory until I tell my seeding client to stop. How easy this will be to fix, I'm not sure, but I am sure it's doable ;)
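One common shape for that feedback is a bounded buffer with high/low water marks: stop reading from the network when too many writes are outstanding, resume once the disk catches up. A minimal sketch of the idea (my own illustration, not MonoTorrent code):

```csharp
// Illustrative backpressure sketch, not MonoTorrent's actual code.
// Reads are paused when pending disk writes cross a high-water mark
// and resumed once the disk drains back below a low-water mark.
using System;
using System.Threading;

class WriteThrottle
{
    private const int HighWater = 64;  // pause network reads above this
    private const int LowWater = 16;   // resume network reads below this
    private int pendingWrites;
    private bool paused;
    private readonly object sync = new object();

    // Called when a block arrives and is queued for an async disk write.
    public void WriteQueued()
    {
        lock (sync)
        {
            pendingWrites++;
            if (!paused && pendingWrites >= HighWater)
            {
                paused = true;
                // Stop issuing socket receives here; memory stays bounded.
            }
        }
    }

    // Called from the disk thread when an async write completes.
    public void WriteCompleted()
    {
        lock (sync)
        {
            pendingWrites--;
            if (paused && pendingWrites <= LowWater)
            {
                paused = false;
                // Re-issue socket receives; the disk has caught up.
            }
        }
    }
}
```

Using two thresholds rather than one stops the transfer from rapidly flapping between paused and running every time a single write completes.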