When profiling MonoTorrent (under MS.NET; I haven't managed to get profiling under Mono working yet, which should make for a nice comparison), I noticed that about 85% (or more) of my ongoing allocations are due to the hundreds of asynchronous sending and receiving methods I fire every second. For example, if I set my download rate to 400 kB/sec, there would be ~200 Socket.BeginReceive calls every second. Add in a few BeginSends and that's quite a lot of calls being fired.
The thing is that when each of those operations finishes there is a bit of managed overhead needed to allow my callback to be executed (such as System.Net.Sockets.OverlappedAsyncResult, System.Threading.ExecutionContext, etc.). These allocations are all pretty short-lived: 95% of them will be cleaned up during the next garbage collection. The only ones that won't be cleaned up are the ones for operations still in flight at that moment in time.
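To give an idea of the pattern involved, here's a simplified sketch of the kind of per-connection receive loop I'm talking about (this is not MonoTorrent's actual code; buffer management and error handling are elided). Every BeginReceive call makes the runtime allocate a fresh IAsyncResult and capture an ExecutionContext, all of which becomes garbage the moment the callback completes:

```csharp
using System;
using System.Net.Sockets;

// Simplified sketch of an async receive loop. Each iteration allocates
// an OverlappedAsyncResult and an ExecutionContext capture internally;
// at ~200 receives/sec per the figures above, that adds up quickly.
class Receiver
{
    readonly byte[] buffer = new byte[16384];

    public void BeginLoop(Socket socket)
    {
        socket.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None,
                            OnReceive, socket);
    }

    void OnReceive(IAsyncResult result)
    {
        Socket socket = (Socket)result.AsyncState;
        int count = socket.EndReceive(result); // 'result' is now garbage
        if (count > 0)
            BeginLoop(socket); // allocates all over again
    }
}
```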
It struck me that it would be nice if there were a way of detecting "temporary" objects and deallocating them immediately when they go out of scope. That way some objects could be destroyed practically as soon as a method ends, which reduces the work the GC has to do and reduces the memory overhead of keeping objects around that aren't going to be used more than once.
Take this method for an example:
public void Example()
{
    MyObject a = new MyObject(15);
    Console.WriteLine(a.Calculate()); // e.g. 15*15 + 15
}
Now, it should be possible to do a scan and see that the object 'a' is instantiated, used only to calculate a value (for example 15*15 + 15), and that result is then printed to the console. A reference to the object never leaves the scope of the method, therefore the object could be classified as a "Fast GC object" and could be deallocated as soon as the return statement is hit.
Also, take a look at this.
public void Example2()
{
    using (MainForm form = new MainForm())
    {
        form.ShowDialog();
    }
}
In this case, *everything* to do with that form could be GC'ed as soon as the method returns. These "Fast GC" objects would never need to be tracked by the GC at all, as it is known at JIT time when each object will be allocated and when it will be destroyed.
Now, I've been told that this idea is nothing new (and I'm not surprised). The art of deciding which objects can be GC'ed fast is known as escape analysis. The question is: what is the real-world benefit of this kind of garbage collection? Does it make an appreciable difference to overall memory usage? Does it reduce time spent in garbage collection? Is it exceptionally difficult to implement? Can the JIT be modified to generate this kind of code?
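To make the "escape" part concrete, here's a hypothetical pair of methods (MyObject here is just an illustrative stand-in). In the first, the reference never leaves the method, so the object is a fast-GC candidate; in the second, the reference is stored somewhere that outlives the call, so it "escapes" and must be tracked by the normal GC:

```csharp
using System;
using System.Collections.Generic;

class MyObject
{
    readonly int value;
    public MyObject(int value) { this.value = value; }
    public int Calculate() { return value * value + value; } // 15*15 + 15 = 240
}

class EscapeDemo
{
    static readonly List<MyObject> cache = new List<MyObject>();

    // 'a' never escapes: no field, return value or other thread ever
    // sees the reference, so it could be freed (or even stack-allocated)
    // the moment the method returns.
    public int DoesNotEscape()
    {
        MyObject a = new MyObject(15);
        return a.Calculate();
    }

    // 'b' escapes: the reference is stored in a static list and returned,
    // so code running after this method may still use it. The normal GC
    // has to track it.
    public MyObject Escapes()
    {
        MyObject b = new MyObject(15);
        cache.Add(b);
        return b;
    }

    static void Main()
    {
        Console.WriteLine(new EscapeDemo().DoesNotEscape()); // prints 240
    }
}
```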
It should be possible to do a mockup to test the theory without changing anything in the Mono runtime. One could take the bytecode of any .NET application and run a tool over it which checks which methods allocate objects that could be fast-GC'ed. Once that data has been stored, the application being tested can be run with a profiler attached which just monitors how many times each method is hit during normal use.
With the knowledge of how often each method is hit and which objects can be fast-GC'ed, it should be possible to calculate the benefit of this new method of garbage collection. A statistic like "50% of all object allocations could be changed to Fast GC objects" would be nice. Then again, if the real-world numbers said that 95% of applications have less than 5% of their objects suitable for Fast GC, this method would be near useless.
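The static half of that mockup could be sketched with Mono.Cecil. This rough, untested sketch only counts allocation sites (newobj instructions) per method; the hard part, deciding per site whether the reference can escape the method, is deliberately left out:

```csharp
// Rough sketch of the bytecode-scanning tool using Mono.Cecil.
// It walks every method in an assembly and counts 'newobj' instructions,
// i.e. the candidate allocation sites that an escape analysis would then
// have to classify as escaping or non-escaping.
using System;
using Mono.Cecil;
using Mono.Cecil.Cil;

class AllocationScanner
{
    static void Main(string[] args)
    {
        AssemblyDefinition assembly = AssemblyDefinition.ReadAssembly(args[0]);
        foreach (TypeDefinition type in assembly.MainModule.Types)
        {
            foreach (MethodDefinition method in type.Methods)
            {
                if (!method.HasBody)
                    continue;

                int allocations = 0;
                foreach (Instruction instruction in method.Body.Instructions)
                    if (instruction.OpCode == OpCodes.Newobj)
                        allocations++;

                if (allocations > 0)
                    Console.WriteLine("{0}::{1} - {2} allocation site(s)",
                        type.FullName, method.Name, allocations);
            }
        }
    }
}
```

Cross-referencing that output with per-method hit counts from a profiler would give the "how many allocations could become Fast GC" numbers discussed above.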