message followed by an interest message. Optimize this by not sending
those messages in that case. This is better because we don't risk
triggering a choke from the receiving peer.
be shared by several peers. At least in end game.
* Link blocks with the peers we are loading them from and vice versa.
* Limit the number of requests per peer in end game too.
* Improve end game by using some sort of round robin for block requests.
writing to a peer. If more requests arrive, they will be ignored.
When all pieces have been sent to the peer, its state will be reset by
a choke followed by an unchoke message so that it does not keep
waiting on the ignored requests.
Without this limit there was no bound on how much memory btpd would
consume to satisfy a greedy peer.
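For illustration, a minimal sketch of such a cap. The constant, struct
fields and helper functions below are made-up names for this example,
not btpd's actual code:

    #include <stdint.h>

    /* Hypothetical, simplified per-peer state. */
    struct peer {
        int npiece_msgs;        /* piece messages currently queued */
        int dropped_requests;   /* did we silently ignore any requests? */
    };

    #define MAX_PIECE_Q 8       /* assumed cap on queued piece messages */

    /* Assumed helpers, implemented elsewhere in this sketch. */
    void enqueue_piece_msg(struct peer *, uint32_t index, uint32_t begin,
        uint32_t length);
    void send_choke(struct peer *);
    void send_unchoke(struct peer *);

    void
    peer_on_request(struct peer *p, uint32_t index, uint32_t begin,
        uint32_t length)
    {
        if (p->npiece_msgs >= MAX_PIECE_Q) {
            p->dropped_requests = 1;  /* ignore it; the peer is too greedy */
            return;
        }
        enqueue_piece_msg(p, index, begin, length);
        p->npiece_msgs++;
    }

    void
    peer_on_piece_written(struct peer *p)
    {
        p->npiece_msgs--;
        /* When the queue drains and requests were dropped earlier, a choke
         * followed by an unchoke tells the peer to forget its outstanding
         * requests and re-request what it still wants. */
        if (p->npiece_msgs == 0 && p->dropped_requests) {
            send_choke(p);
            send_unchoke(p);
            p->dropped_requests = 0;
        }
    }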
* Use the new net_bufs where it makes sense.
* Take advantage of the reference count on net_bufs and only allocate
the (un)choke and (un)interest messages once.
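For illustration, one way the one-time allocation might look. The
constructor, refcount helper and MSG_* constants are assumed names for
this sketch, not necessarily btpd's interface (the message id values
are the standard BitTorrent ones):

    struct net_buf;
    struct peer;

    /* Assumed helpers for this sketch. */
    struct net_buf *nb_create_msg(int msg_type);      /* builds a fixed message */
    void nb_hold(struct net_buf *);                   /* ++refcount */
    void peer_send(struct peer *, struct net_buf *);  /* queues buf, drops ref on flush */

    enum { MSG_CHOKE = 0, MSG_UNCHOKE = 1, MSG_INTEREST = 2, MSG_UNINTEREST = 3 };

    static struct net_buf *m_choke, *m_unchoke, *m_interest, *m_uninterest;

    void
    msgs_init(void)
    {
        /* Allocate each fixed-content message exactly once. */
        m_choke      = nb_create_msg(MSG_CHOKE);
        m_unchoke    = nb_create_msg(MSG_UNCHOKE);
        m_interest   = nb_create_msg(MSG_INTEREST);
        m_uninterest = nb_create_msg(MSG_UNINTEREST);
    }

    void
    peer_send_choke(struct peer *p)
    {
        nb_hold(m_choke);       /* shared buffer: just take another reference */
        peer_send(p, m_choke);
    }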
Each piece must have at least one byte for its block bit array,
or the arrays will collide, causing great confusion in btpd. The
calculation was wrong, so this could happen for small torrents
(blocks per piece < 8).
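The fix amounts to rounding the bit-array size up to a whole byte
instead of truncating. A sketch of the arithmetic (function names are
made up for this example):

    #include <stdint.h>
    #include <stddef.h>

    /* Wrong: integer division truncates, so pieces with fewer than 8 blocks
     * get 0 bytes and end up sharing (colliding on) the same storage. */
    size_t
    bad_block_field_bytes(uint32_t blocks_per_piece)
    {
        return blocks_per_piece / 8;
    }

    /* Right: round up, so every piece owns at least one byte. */
    size_t
    block_field_bytes(uint32_t blocks_per_piece)
    {
        return (blocks_per_piece + 7) / 8;
    }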
policy_subr.c:
* Add test for correctness.
* Add missing call to cm_on_piece_full in cm_new_piece.
information on what data they hold, making it unnecessary to have
other lists tracking that information. Also, they now have a reference
count, making it possible to use the same buffer on many peers.
This is only a start, though. I've just done enough for btpd to work;
I haven't taken advantage of the reference count yet.
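Conceptually, such a buffer could look like the sketch below. The field
and function names are illustrative, not btpd's actual definitions:

    #include <stdint.h>
    #include <stdlib.h>

    /* A buffer that knows what it carries (type, piece index, block offset)
     * and counts its users, so one allocation can sit on several peers'
     * write queues at once. */
    struct net_buf {
        int type;           /* e.g. piece, have, choke ... */
        uint32_t index;     /* piece index, when applicable */
        uint32_t begin;     /* block offset within the piece */
        char *buf;          /* wire-format bytes */
        size_t len;
        unsigned refs;      /* how many queues still hold this buffer */
    };

    void
    nb_hold(struct net_buf *nb)
    {
        nb->refs++;
    }

    void
    nb_drop(struct net_buf *nb)
    {
        if (--nb->refs == 0) {
            free(nb->buf);
            free(nb);
        }
    }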
This fixes a bug where a peer could miss pieces btpd got while the
peer was in handshake.
Also, btpd now sends multiple have messages instead of a bitfield
when it's better to do so.
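The choice can be made by comparing the wire cost of the two encodings:
a have message is 9 bytes, while a bitfield is 5 bytes plus one bit per
piece. A sketch of such a heuristic (the function name is made up):

    #include <stdint.h>

    /* Send individual have messages when that costs fewer bytes on the wire
     * than a full bitfield. Sizes follow the BitTorrent wire format:
     * have = 9 bytes, bitfield = 5 + ceil(npieces / 8) bytes. */
    static int
    prefer_haves(uint32_t npieces_have, uint32_t npieces_total)
    {
        uint32_t have_bytes = 9 * npieces_have;
        uint32_t bitfield_bytes = 5 + (npieces_total + 7) / 8;
        return have_bytes < bitfield_bytes;
    }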
Let the default be 8 Hz for now.
Removed an attempt at time correction. I don't really think it'll matter,
and there was a potential bug if the clock went backwards.
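For illustration, a fixed-rate heartbeat with no drift correction might
simply re-arm its timer every tick, along these lines (assuming the
libevent evtimer interface; the rate constant and callback name are
made up, and event_init/event_dispatch are expected to run elsewhere):

    #include <sys/time.h>
    #include <event.h>

    #define HEARTBEAT_HZ 8      /* assumed default tick rate */

    static struct event m_heartbeat;

    static void
    heartbeat_cb(int fd, short type, void *arg)
    {
        struct timeval tv = { 0, 1000000 / HEARTBEAT_HZ };
        /* ... per-tick work: bandwidth accounting, choking, etc. ... */
        evtimer_add(&m_heartbeat, &tv);   /* re-arm; no drift correction */
    }

    void
    heartbeat_init(void)
    {
        struct timeval tv = { 0, 1000000 / HEARTBEAT_HZ };
        evtimer_set(&m_heartbeat, heartbeat_cb, NULL);
        evtimer_add(&m_heartbeat, &tv);
    }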
Removed net_by_second. Let the peer bandwidth calculation be handled in
cm_by_second.