Tuesday, October 31, 2006

Rate limiting: Not as simple as you might think

Firstly, some background info. Every socket.BeginReceive() I call downloads data in 2 kB chunks. By receiving 2 kB at a time, I can record download speeds more accurately and (hopefully) rate limit more accurately. Download speeds are averaged over 8 seconds, with a sample recorded each second, which gives a reasonably stable but accurate download rate. A torrent's download speed is calculated by summing the download speed of each active client.
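As an aside, the windowed average described here can be sketched roughly like this (a minimal Python illustration with my own names, not the actual code from this client):

```python
from collections import deque

class SpeedMonitor:
    """Rolling average of bytes transferred, sampled once per second."""
    def __init__(self, window=8):
        self.samples = deque(maxlen=window)  # one entry per second
        self.current = 0                     # bytes seen this second

    def bytes_received(self, count):
        self.current += count

    def tick(self):
        """Call once per second: close off the current sample."""
        self.samples.append(self.current)
        self.current = 0

    def speed(self):
        """Average bytes/sec over the last `window` samples."""
        return sum(self.samples) / len(self.samples) if self.samples else 0

monitor = SpeedMonitor()
for second in range(8):
    monitor.bytes_received(2048 * 25)  # 25 chunks of 2 kB each second
    monitor.tick()
print(monitor.speed())  # 51200.0 bytes/sec = 50 kB/sec
```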

Method 1:
while(torrentManager.DownloadSpeed() < this.MaxDownloadSpeed)

This method seems like a good way to limit download speed, but it fails miserably where there is a lot of bandwidth available, e.g. on a LAN. Because I only update peer download speeds once a second, a LAN client could easily transfer 1-2 MB in that first second. With a rate limit of 100 kB/sec, the client would then stop downloading chunks from all peers until the 8-second averaging period had passed and the measured speed dropped back to zero. Then the cycle starts over: the algorithm requests another 2 MB in one second, followed by another 8 seconds of nothing.
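To see the burst-and-stall cycle concretely, here is a toy Python simulation of Method 1 under the stated assumptions (an 8-second window, a 100 kB/sec limit, and a link that can move 2 MB in one second; all names are mine):

```python
# Toy simulation of Method 1 on a fast LAN: the limiter only sees the
# windowed average, so it alternates between a burst and a long stall.
window = 8
limit = 100 * 1024          # 100 kB/sec target
link = 2 * 1024 * 1024      # the LAN can move 2 MB in one second
samples = [0] * window      # per-second byte counts, oldest first

history = []
for second in range(16):
    avg = sum(samples) / window
    # Method 1: keep requesting chunks while the average is under the limit
    downloaded = link if avg < limit else 0
    samples = samples[1:] + [downloaded]
    history.append(downloaded)

# One 2 MB burst, then many idle seconds while the average decays, repeating
print([h // 1024 for h in history])
```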

Method 2:
Keep a count of how many 2 kB chunks I transfer per second: chunksCount.
At the end of every second, after I calculate the current download speed, I perform the following calculation:

downloadSpeed = torrentManager.DownloadSpeed();

// This is the difference between the speed I'm currently doing and the speed
// I should be doing. A negative number means I'm going too fast. If 20480 is
// returned, it means I'm 20 kB/sec too slow.
speedDifference = torrentManager.MaxDownloadSpeed - downloadSpeed;

// The number of additional chunks I need to download to reach my rate limiting
// target is speedDifference / chunkSize, i.e. 20480 / 2048 = 10 extra chunks per second.

chunksCount += speedDifference / 2048;

This tells me how many chunks I need to send this (and every) second to reach my quota. So I can then run code like:

while(chunksSent++ < chunksCount)

However, this runs into problems: if I overshoot the download speed (because MaxDownloadSpeed is inaccurate, under the same conditions as in method 1), I end up with a hugely oscillating chunksCount. This algorithm suffers a similar problem to the one above in high-bandwidth situations, but not to the same extent.
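For clarity, Method 2's per-second adjustment can be condensed into a small Python sketch (the names are mine, and the 2048-byte chunk size is taken from the post):

```python
# Method 2 as a runnable sketch: once a second, adjust the chunk budget by
# the shortfall (or overshoot) divided by the chunk size.
CHUNK = 2048

def update_budget(chunk_budget, max_speed, actual_speed):
    speed_difference = max_speed - actual_speed  # negative means too fast
    return chunk_budget + speed_difference // CHUNK

# 20 kB/sec too slow -> 10 extra chunks allowed next second
print(update_budget(25, 50 * 1024, 30 * 1024))  # 35

# 20 kB/sec too fast -> 10 fewer chunks allowed next second
print(update_budget(25, 50 * 1024, 70 * 1024))  # 15
```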

Method 3:
Similar to above, but with a slight twist.

Assume every chunk I try to download is exactly 2048 bytes (not always true). The "ideal" number of chunks to download per second is therefore:
int idealNumberOfChunks = torrentManager.MaxDownloadSpeed / 2048;

Then take a double called "multiplyAmount" and initially set it to 1.0.

Now, at the end of every second, I do the following:
// I'm going too fast, so I need to reduce my multiply amount
if(torrentManager.DownloadSpeed > this.MaxDownloadSpeed)
    multiplyAmount *= 0.95;

// I'm going too slow, so I need to increase my multiply amount
else
    multiplyAmount *= 1.05;

chunksCount = (int)(idealNumberOfChunks * multiplyAmount);

Basically, at the end of every second I check whether I'm going too slow or too fast. If I'm going too slow, I increase my multiply amount (which retains its value between calls to my method). So if I'm consistently too slow, my multiplyAmount will slowly increase from 1.0 right up to 5.0 (a hardcoded maximum for multiplyAmount).

What this should result in is that chunksCount will initially start off equal to idealNumberOfChunks, but if I'm going too slow, it'll request more and more chunks above the ideal amount, and if I'm going too fast, it'll request fewer and fewer chunks. Unfortunately this algorithm didn't really work either: it tends to go too fast, then too slow, then too fast again, because multiplyAmount increases and decreases every second. But it is by far the best method so far (and the one I've currently left implemented).
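Method 3 can be summarised as a small Python sketch (my own naming; the 5.0 cap and the 0.95/1.05 factors are taken from the description above):

```python
# Method 3 sketch: multiplicative increase/decrease on a correction factor
# applied to the "ideal" chunk count.
CHUNK = 2048

class Method3Limiter:
    def __init__(self, max_speed):
        self.max_speed = max_speed
        self.multiply_amount = 1.0

    def tick(self, actual_speed):
        """Call once per second; returns the chunk budget for the next second."""
        if actual_speed > self.max_speed:
            self.multiply_amount *= 0.95   # too fast: back off
        else:
            self.multiply_amount *= 1.05   # too slow: push harder
        self.multiply_amount = min(self.multiply_amount, 5.0)
        ideal_chunks = self.max_speed // CHUNK
        return int(ideal_chunks * self.multiply_amount)

limiter = Method3Limiter(50 * 1024)   # 50 kB/sec limit, ideal = 25 chunks
print(limiter.tick(0))                # too slow: budget grows past 25 -> 26
```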

Rate limiting is definitely not a simple thing to do. Even keeping track of the download speed is tricky, with my code occasionally running riot and calculating 100 kB/sec when the actual rate is 50 kB/sec.

So if anyone has any ideas on how to implement accurate rate limiting, that'd be great! Or if my badly explained ideas don't make sense, ask me to explain them a bit better.


John said...

This is a classic control problem.

Try implementing a simple PI (proportional-integral) controller.

It should smooth out your oscillations and allow better tracking of the limit.

There are buckets of sample code for PI controllers on the internet, but here is some pseudocode:

diff = target_rate - actual_rate
integral += diff
command = diff*cP + integral*cI

command can be whatever you want it to be, provided it maps linearly to download rate. cP and cI are constants weighting the influence of the proportional and integral differences.

Through your experiments above you have implemented a simple proportional controller without knowing it (sans the exponential in method 3). Do some reading on control theory and you will see that adding an integrator improves the tracking and damps the oscillations.

Good Luck
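A runnable version of John's pseudocode might look like the following Python sketch. The gains cP and cI are placeholder values for illustration and would need tuning, and mapping the command onto a chunk count is just one possible reading of "command can be whatever you want":

```python
# PI controller for the chunk budget, following the pseudocode above.
# cP and cI are illustrative placeholders, not tuned values.
CHUNK = 2048

class PILimiter:
    def __init__(self, target_rate, cP=0.5, cI=0.1):
        self.target_rate = target_rate
        self.cP, self.cI = cP, cI
        self.integral = 0.0

    def chunk_budget(self, actual_rate):
        """Call once per second; returns how many chunks to allow next second."""
        diff = self.target_rate - actual_rate    # error in bytes/sec
        self.integral += diff                    # accumulated error
        command = diff * self.cP + self.integral * self.cI
        # Map the command (bytes/sec) linearly onto a chunk count
        return max(0, int(command / CHUNK))

pi = PILimiter(50 * 1024)   # 50 kB/sec target
print(pi.chunk_budget(0))   # 15
```

The integral term is what the proportional-only methods above were missing: a persistent correction that keeps growing while the error persists, instead of resetting every second.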

massi said...

You say that multiplyAmount is capped at 5.0.
One nicer idea would be to make it configurable, taking as a parameter the maximum number of concurrent connections (if I understood its effect, of course).

Alan said...

Well, the point of the multiply amount is that my idealNumberOfChunks is not realistic. It should be reasonably close, as most of the time a "chunk" I request from a peer arrives within a short amount of time. Hence if every chunk I receive is 2 kB and my download speed limit is 50 kB/sec, I can receive 25 chunks per second.

Now, the problem is that sometimes a 2 kB chunk could take 5 seconds, or a chunk won't actually be exactly 2 kB, or due to minor inefficiencies downloading 25 chunks a second won't result in a download speed averaging exactly 50 kB/sec.

Suppose I'm downloading 50 chunks per second and each chunk is 1 kB in size (just for example). In this case my multiply amount will slowly increase to 2.0, giving me a download chunk allowance of idealChunks * 2.0, which is 50 chunks. Then 50 chunks (twice my ideal) at 1 kB each is what I need to download each second to achieve my download rate of 50 kB/sec.

The multiplyAmount is used to make my "ideal" amount of chunks more closely match the "realistic" amount.

The number of chunks needed is completely independent of the number of concurrent connections. I could download all the chunks from one peer, or 10 chunks from each of 10 peers, or 50 chunks from one peer and 10 each from 5 other peers. So I can't use numberOfConcurrentConnections to aid my algorithm.
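As a quick numeric check of the 1 kB example above (Python, illustrative only):

```python
# If chunks arrive at half the assumed size, the multiply amount must
# settle at 2.0 to hit the target rate.
limit = 50 * 1024          # 50 kB/sec
assumed_chunk = 2048       # size the limiter assumes
real_chunk = 1024          # size actually arriving
ideal_chunks = limit // assumed_chunk   # 25
needed_chunks = limit // real_chunk     # 50
print(needed_chunks / ideal_chunks)     # 2.0
```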

Alan said...

So basically I'd expect the multiply amount to be somewhere between 0.5 and 1.5 during "accurate" rate limiting. That's what I'd expect, anyway... there are a few cases where it would reach 5 (such as when there aren't enough peers to reach the download limit).

Alan said...

Preliminary results: perfect. I'll write up a new blog post detailing my new method. It works, seemingly quite accurately, but I'm not quite sure how. Thinking about it logically, I should be undershooting by a consistent amount (I think). But I'll leave those musings for a separate post.

Thanks a lot, John!

Pensee said...

Off topic: just take a look at this new library/client: http://code.google.com/p/deluge-torrent/

Alan said...

I've taken a look at deluge-torrent. It's a Python wrapper around the C++ libtorrent library, so it's running on some pretty stable code. The interface is a little cluttered at the moment (the bottom pane is quite crowded), but it is looking nice!

Every time I look at other BitTorrent GUIs and then at the one I have here, I want to make a new, better one :( 'Tis a pity I don't have the time at the moment.

