Sunday, November 26, 2006

When profiling MonoTorrent (under MS.NET; I haven't managed to get profiling under Mono working yet, which should make for a nice comparison later), I noticed that 85% or more of my ongoing allocations are due to the hundreds of asynchronous sending and receiving methods that I fire every second. For example, if I set my download rate to 400kB/sec, there would be ~200 socket.BeginReceive calls every second. Add in a few BeginSends and that's quite a lot of calls being fired.

The thing is, when each of those operations finishes there is a bit of managed overhead to allow my callback to be executed (System.Net.Sockets.OverlappedAsyncResult, System.Threading.ExecutionContext and so on). These allocations are all pretty short-lived: 95% of them will be cleared up during the next garbage collection. The only ones that won't be cleared up are the ones for connections that are active at that moment in time.
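For illustration, here's a minimal sketch of the kind of asynchronous receive loop involved (the class name and buffer size are placeholders, not MonoTorrent's actual code):

using System;
using System.Net.Sockets;

class PeerReceiver
{
    private readonly Socket socket;
    private readonly byte[] buffer = new byte[2048]; // placeholder size

    public PeerReceiver(Socket socket)
    {
        this.socket = socket;
    }

    public void StartReceiving()
    {
        // Each call here allocates an IAsyncResult (and friends) internally,
        // which is exactly the short-lived garbage described above.
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None,
                            OnReceive, null);
    }

    private void OnReceive(IAsyncResult result)
    {
        int read = socket.EndReceive(result);
        if (read == 0)
            return; // connection closed

        // ... hand the data off for processing ...

        StartReceiving(); // queue the next receive, allocating all over again
    }
}

At ~200 of these per second, the per-call allocations add up quickly.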

It struck me then: wouldn't it be nice if there were a way of detecting "temporary" objects and deallocating them immediately when they go out of scope? That way some objects could be destroyed practically as soon as a method ends, which reduces the work the GC has to do and avoids the memory overhead of keeping objects around that aren't going to be used more than once.

Take this method as an example:

public void Example()
{
    MyObject a = new MyObject(15);
    a.DoACalculation();
    Console.WriteLine(a.ResultFromCalculation);
    return;
}

It should be possible to scan this and see that the object 'a' is instantiated, used only to calculate a value (for example 15*15+15), and then used to print that result to the console. A reference to the object never leaves the scope of the method, so the object could be classified as a "Fast GC Object" and deallocated as soon as the "return" statement is hit.

Also, take a look at this:

public void Example2()
{
    using (MainForm form = new MainForm())
        form.ShowDialog();
    return;
}

In this case, *everything* to do with that form could be GC'ed as soon as the return statement is hit. These "Fast GC" objects would never need to be tracked by the GC at all, since it is known at JIT time when each object will be allocated and when it will be destroyed.

Now, I've been told that this idea is nothing new (and I'm not surprised). The art of deciding which objects can be GC'ed fast is known as Escape Analysis. The question is, what is the real-world benefit of this kind of garbage collection? Does it make an appreciable difference to overall memory usage? Does it reduce time spent in garbage collection? Is it exceptionally difficult to implement? Can the JIT be modified to generate this kind of code?

It should be possible to do a mockup to test the theory without changing anything in the Mono runtime: take the bytecode of any .NET application and run a tool over it which records which methods allocate objects that could be fast-GC'ed. Once that data has been stored, the application being tested can be run with a profiler attached which simply counts how many times each method is hit during normal use.

With the knowledge of how many times each method is hit and which objects can be fast-GC'ed, it should be possible to estimate the benefit of this new method of garbage collection. A statistic like "50% of all object allocations could be changed to Fast GC Objects" would be nice. Then again, if the real-world statistics said that 95% of applications would have less than 5% of their objects suitable for "Fast GC", this method would be near useless.
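As a very rough starting point for such a tool, a sketch along these lines could walk an assembly's IL and flag candidate methods. It assumes Mono.Cecil (a library for reading .NET assemblies), and the escape check is a deliberately crude over-approximation; a real escape analysis would also have to track references passed as call arguments, captured by delegates, boxed, and so on:

using System;
using Mono.Cecil;
using Mono.Cecil.Cil;

class FastGcScanner
{
    static void Main(string[] args)
    {
        AssemblyDefinition assembly = AssemblyDefinition.ReadAssembly(args[0]);

        foreach (TypeDefinition type in assembly.MainModule.Types)
        {
            foreach (MethodDefinition method in type.Methods)
            {
                if (!method.HasBody)
                    continue;

                int allocations = 0;
                bool mayEscape = false;

                foreach (Instruction ins in method.Body.Instructions)
                {
                    if (ins.OpCode == OpCodes.Newobj)
                        allocations++;

                    // Crude over-approximation: storing into a field, a
                    // static or an array element could let a reference
                    // leave the method.
                    if (ins.OpCode == OpCodes.Stfld
                        || ins.OpCode == OpCodes.Stsfld
                        || ins.OpCode == OpCodes.Stelem_Ref)
                        mayEscape = true;
                }

                // A non-void return might hand a reference back to the caller.
                if (method.ReturnType.FullName != "System.Void")
                    mayEscape = true;

                if (allocations > 0 && !mayEscape)
                    Console.WriteLine("{0}::{1} has {2} candidate allocation(s)",
                        type.FullName, method.Name, allocations);
            }
        }
    }
}

Cross-referencing that output with per-method hit counts from the profiler would give exactly the kind of statistic described above.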

Saturday, November 18, 2006

With less than a month to go to my Christmas exams, my coding time has been reduced to zero. Pretty much everything else has been put on standby now (including writing content for the website) until the exams are over.

Everything is pretty much ready for a release, I think, but I need to run a few tests first to make sure. Then it's beta testing time.

I also received an interesting email yesterday. Someone has taken the time to rejig the code so that the client library will now run on the .NET Compact Framework. This means you can now download .torrents on your smartphone, or whatever portable device you have that supports the .NET Compact Framework. So not only can MonoTorrent run on all your favourite desktop OS's, it can now run on some of your favourite portable machines too.

Also, with the ability to put Mono on embedded devices, it's quite possible that, if an interested party could be found, MonoTorrent could be stuck in a set-top box on top of your television and could stream content directly to a hard drive for later viewing. The possibilities are endless!

Tuesday, November 07, 2006

So I have a few things to mention today.

I've very kindly been offered some hosting and a free webpage for MonoTorrent, on the condition that I continue developing the library. That's something I think I can manage ;) So in a week or two I'll be posting a link to the new site along with one or two exciting announcements. I bet ye can hardly wait ;)

Development is still going well on MonoTorrent. I implemented EndGame mode today (which had completely slipped my mind, as I was either working on local torrents or just testing briefly on large torrents, so I never reached the final few % of the download). This means the final stages of downloading a torrent will go a *lot* faster now. Previously the last few percent would either never finish (due to the deadlock condition mentioned below) or finish really slowly.

I also fixed a problem where I'd request a piece from a peer, but they'd never send the piece I requested. This left them in a limbo where I thought I had pending requests with them, but they had no intention of fulfilling those requests, and hence no other peer could request those pieces.
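For illustration, the rough shape of the fix (with hypothetical names, not MonoTorrent's actual classes) is to time out stale requests so the affected blocks go back into the pool and can be requested from another peer:

using System;
using System.Collections.Generic;

// Hypothetical illustration: track when each block was requested and
// re-queue any request a peer hasn't fulfilled within a timeout.
class PendingRequest
{
    public int PieceIndex;
    public int BlockOffset;
    public DateTime RequestedAt;
}

class RequestTracker
{
    private static readonly TimeSpan Timeout = TimeSpan.FromSeconds(60);
    private readonly List<PendingRequest> pending = new List<PendingRequest>();

    public void OnBlockRequested(int piece, int offset)
    {
        pending.Add(new PendingRequest {
            PieceIndex = piece, BlockOffset = offset, RequestedAt = DateTime.Now
        });
    }

    // Called periodically: any request that has gone unanswered for too long
    // is dropped, freeing the block to be requested from a different peer.
    public IEnumerable<PendingRequest> ReapStaleRequests()
    {
        List<PendingRequest> stale =
            pending.FindAll(r => DateTime.Now - r.RequestedAt > Timeout);
        foreach (PendingRequest r in stale)
            pending.Remove(r);
        return stale;
    }
}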


For those of you interested in developing for the library, I've included a new Class Description document in SVN which gives a *very* brief description of what each class does. I'll do my best to update the XML comments within the classes as well at some stage. But I promise nothing ;)

Once again, be ready for a few surprise announcements over the next week or so. It'll (hopefully) be worth the wait.

Sunday, November 05, 2006

This is just a wild stab in the dark, but I'm hoping my newfound fame (subtle promotion of MonoCast: http://www.mono-cast.com/?p=28) can find me someone with a few hours/days to spare who can throw together a website for me.

What I'm looking for is a nice, simple, easy-to-maintain website for MonoTorrent. Hosting I can sort out, and I have the domain registered as well; I just need content to put up! So if anyone would be interested in slapping together the following, that'd be great:
1) Homepage: simple enough design, just a place where I can write announcements and stuff
2) A FAQ/code examples page
3) A page where I'll list different releases of the library
4) A page where I can list all the features, mention licensing, etc.

Like I said, nothing too fancy. So long as I can update things easily, I'll be happy. I just want something that's nice to look at and easy to browse. If anyone is interested in taking up the challenge, give me a shout; I'd really appreciate it. Don't go writing anything without contacting me first, just to make sure that work isn't duplicated (or triplicated) and that I can actually use the resulting code.

Thanks!

Saturday, November 04, 2006

I was at a Muse concert earlier tonight. I have to say, it was brilliant! They (literally) blew me away! The lights and videos were amazing. The only bad thing I'd say is that Matt Bellamy (the lead singer) doesn't really talk to the crowd at all. You'd be lucky to get a "hello Dublin" out of him ;) But all in all, a good night!

Thursday, November 02, 2006

I finally got rate limiting pegged. Method 4 from my last post, with a few slight modifications, is what finally worked for me. I knew I was onto a winner!

As I found out, relying directly on the measured DownloadSpeed made accurate rate limiting next to impossible, but ignoring the current DownloadSpeed makes accurate rate limiting impossible too. A bit of a conundrum, isn't it? So the solution is to use what is best described as a feedback loop, a.k.a. a proportional-integral controller for those of you not studying electronic engineering ;)

My final algorithm works as follows:
1) Each chunk of data I send is (ideally) exactly 2kB in size.
2) Every second I calculate the error rate in my download speed. The error rate is the difference between the download speed I want and the speed I'm currently downloading at:

int errorRateDown = maxDownloadSpeed - actualDownloadSpeed;

3) I also store the error rate from the previous second, which I then use to get a weighted average. The weighting gives the error rate from the previous second more importance than the error rate from the current second:

int averageErrorRateDown = (int)(0.4 * errorRateDown + 0.6 * this.SavedErrorRateDown);

4) Finally, to calculate the number of 2kB blocks I should be transferring this second, I just run the simple calculation:
int numberOfBlocksToRequest = (maxDownloadSpeed + this.averageErrorRateDown) / ConnectionManager.ChunkLength;

As you can probably see, the closer I get to my requested download speed, the smaller averageErrorRateDown gets, reducing the number of blocks I request. If I'm downloading too slowly, averageErrorRateDown will be a large positive number (increasing the number of blocks). If I download too fast, averageErrorRateDown will become a large negative number (decreasing the number of blocks).

Then, every time I want to send a block, if numberOfBlocksToRequest is greater than zero I decrement it by one and send the block. If numberOfBlocksToRequest is zero, I hold the block in an in-memory queue and send it as soon as I'm allowed. Obviously a FIFO queue is used ;)

The only issue with this method is that the blocks I send aren't always exactly 2kB in size, so it will always settle at a download rate just below the requested one. But that can be compensated for.
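Putting the steps together, here's a minimal self-contained sketch of the limiter. The names follow the snippets above; the queue and the once-a-second tick driver are placeholder assumptions rather than MonoTorrent's actual code:

using System;
using System.Collections.Generic;

// Sketch of the proportional-integral style limiter described above.
// Speeds are in bytes/second; TickSecond() is assumed to be called once a second.
class RateLimiter
{
    public const int ChunkLength = 2048;        // the (ideal) 2kB block size

    private readonly int maxDownloadSpeed;      // target rate, bytes/sec
    private int savedErrorRateDown;             // error rate from last second
    private int numberOfBlocksToRequest;        // block budget for this second
    private readonly Queue<byte[]> pendingBlocks = new Queue<byte[]>();

    public RateLimiter(int maxDownloadSpeed)
    {
        this.maxDownloadSpeed = maxDownloadSpeed;
    }

    // Call once per second with the rate actually achieved over that second.
    public void TickSecond(int actualDownloadSpeed)
    {
        int errorRateDown = maxDownloadSpeed - actualDownloadSpeed;

        // Weighted average: the previous second's error counts for more.
        int averageErrorRateDown =
            (int)(0.4 * errorRateDown + 0.6 * savedErrorRateDown);
        savedErrorRateDown = errorRateDown;

        numberOfBlocksToRequest =
            (maxDownloadSpeed + averageErrorRateDown) / ChunkLength;

        // Drain any blocks that were held back while the budget was empty.
        while (numberOfBlocksToRequest > 0 && pendingBlocks.Count > 0)
            Transmit(pendingBlocks.Dequeue());
    }

    public void Send(byte[] block)
    {
        if (numberOfBlocksToRequest > 0)
            Transmit(block);
        else
            pendingBlocks.Enqueue(block);       // FIFO: hold until allowed
    }

    private void Transmit(byte[] block)
    {
        numberOfBlocksToRequest--;
        // ... actually push the block out on the wire ...
    }
}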
