Richard Nyberg
8033ec33a1
Set version to 0.7. Update CHANGES.
19 years ago
Richard Nyberg
e025c4743a
Add a new net state to get the index and begin fields from piece messages
before we read the piece data. This can be used to test for junk earlier.
19 years ago
Richard Nyberg
0cae0e478d
Have a peer event for keep alives too. Its only function is to log at the moment.
19 years ago
Richard Nyberg
e5cd773d85
Wait until we don't have any unanswered requests on a peer before
sending an uninterest message.
19 years ago
Richard Nyberg
87f94f9d5f
Log keep alives.
19 years ago
Richard Nyberg
a263d2f9f6
Accept pieces even if they arrive in a different order than the
requests were sent.
19 years ago
Richard Nyberg
74c5b19492
Logging.
19 years ago
Richard Nyberg
93053ce34a
Remove unsent requests from the write queue when we receive a choke.
19 years ago
Richard Nyberg
a8817eee1a
Enable all logging if DEBUG is defined.
19 years ago
Richard Nyberg
89b0b8b359
More logging: discarded pieces and peer_id.
19 years ago
Richard Nyberg
28fcbed3c5
#include <limits.h> to be sure to get IOV_MAX.
Use the net_state enum and change some state names from NET_ to BTP_.
Some minor type fixes.
19 years ago
Richard Nyberg
2dc98c39b6
Fix two bugs. Add some logging.
19 years ago
Richard Nyberg
e982934f6b
Remove unused constants.
19 years ago
Richard Nyberg
80214ff0fb
Code shuffle.
19 years ago
Richard Nyberg
9ba7dc69fc
Put the net state related data in its own sub struct.
Remove unnecessary use of struct io_buffer.
19 years ago
Richard Nyberg
2bc4a5d83a
Constify some functions.
Remove an unnecessary net state.
Pass the char buffer directly to net_state instead of struct io_buf.
19 years ago
Richard Nyberg
062d08cb60
net_state should return ssize_t, not int.
Removed some debug logging.
19 years ago
Richard Nyberg
f963072983
Better method of reading data from peers. btpd could send data to peers
that had closed at least one direction of the connection. That feature
was probably unnecessary; removed it for now.
19 years ago
Richard Nyberg
32a88ff5d8
Rewrite of the code for receiving data from peers.
It's not quite how I want it yet, but it's getting there.
19 years ago
Richard Nyberg
d5bf714f1d
More logging.
19 years ago
Richard Nyberg
777c7e641d
Changes for 0.6.
19 years ago
Richard Nyberg
faad18e368
In the transition to end game it's likely that we'll send an uninterest
message followed by an interest message. Optimize this by not sending
those messages in that case. This is better because we don't risk
triggering a choke from the receiving peer.
19 years ago
Richard Nyberg
aa1fe4b2dd
Send a new request to a peer after sending cancel.
19 years ago
Richard Nyberg
6bf02797d7
x
19 years ago
Richard Nyberg
2123189ca4
Bump version to 0.6.
19 years ago
Richard Nyberg
f31e2d8b89
* Allocate request messages on piece creation. The request objects can
be shared by several peers. At least in end game.
* Link blocks with the peers we are loading them from and vice versa.
* Limit the number of requests / peer in end game too.
* Improve end game by using some sort of round robin for block requests.
19 years ago
Richard Nyberg
d8720e889c
Use the piece destructor.
19 years ago
Richard Nyberg
dc45054fe8
Add some macros.
19 years ago
Richard Nyberg
a67eaf47cb
Simplify the autocrap somewhat. Always include the #defines needed
to build with glibc.
19 years ago
Richard Nyberg
08dcc6b892
Remove a bad assert. The test can be true during normal operation.
19 years ago
Richard Nyberg
eaf95339c7
Set an upper limit on how many piece messages to queue for
writing to a peer. If more requests arrive, they will be ignored.
When all pieces have been sent to the peer, in order for it not to
wait on the ignored requests, its state will be reset by a choke
followed by an unchoke message.
Without this limit there was no bound on how much memory btpd would
consume to satisfy a greedy peer.
19 years ago
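The queue cap described above can be sketched as follows. This is a minimal illustration of the idea only; the struct and function names are hypothetical, not btpd's actual code.

```c
#include <stdbool.h>

#define MAXPIECEMSGS 16   /* assumed cap, for illustration */

struct peer {
    int npiece_msgs;      /* piece messages queued for writing */
    int nignored;         /* requests dropped because the queue was full */
};

/* Queue a piece message unless the cap is reached. */
bool queue_piece(struct peer *p)
{
    if (p->npiece_msgs >= MAXPIECEMSGS) {
        p->nignored++;    /* ignore the request rather than buffer it */
        return false;
    }
    p->npiece_msgs++;
    return true;
}

/* Called when the write queue drains. Returns true if requests were
 * ignored, meaning the peer's request state must be reset by sending
 * a choke followed by an unchoke. */
bool queue_drained(struct peer *p)
{
    bool must_reset = p->nignored > 0;
    p->npiece_msgs = 0;
    p->nignored = 0;
    return must_reset;
}
```

With a cap in place, memory spent on a greedy peer is bounded by MAXPIECEMSGS buffers, and the choke/unchoke pair tells the peer to re-issue any requests that were dropped.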
Richard Nyberg
fcbec726e5
Only allocate one have message for all peers, instead of one per peer.
19 years ago
Richard Nyberg
2acdcff5a6
* Rearrange some code. Mostly from net to net_buf and peer.
* Use the new net_bufs where it makes sense.
* Take advantage of the reference count on net_bufs and only allocate
the (un)choke and (un)interest messages once.
19 years ago
Richard Nyberg
e485377f95
The fix for bitfield in r59 wasn't quite correct. Instead of
being sent too early it could now be sent too late.
Change version to 0.5 and document the bug fix.
19 years ago
Richard Nyberg
8115e481fa
Wrong logmask was used.
19 years ago
Richard Nyberg
77177de52c
Set version to 0.4.
19 years ago
Richard Nyberg
0537ec3edd
Add items for 0.4.
19 years ago
Richard Nyberg
01191f2561
Spelling.
19 years ago
Richard Nyberg
aa50cbe63a
Removed the info entry in the net_buf. The information can easily
be extracted from the buffer data instead. Created functions to do
that.
19 years ago
Richard Nyberg
1e1846b8f3
Better tests. peer_laden is needed because the peer might have
gotten new requests if the piece was fully downloaded and found
to be bad.
19 years ago
Richard Nyberg
c11a57b8cb
Fix style. Remove unnecessary check for EINTR.
19 years ago
Richard Nyberg
17a1f68906
All files:
Each piece must have at least one byte for its block bit array,
or the arrays will collide, causing great confusion in btpd. The
calculation was done wrong, so this could happen for small torrents
(blocks / piece < 8).
policy_subr.c:
* Add test for correctness.
* Add missing call to cm_on_piece_full in cm_new_piece.
19 years ago
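The sizing bug above is the classic rounding mistake: plain integer division gives zero bytes when a piece has fewer than eight blocks. A minimal sketch of the wrong and the corrected calculation (function names are hypothetical):

```c
/* Bytes needed for a piece's block bit array.
 * Integer division alone yields 0 when blocks_per_piece < 8, so
 * adjacent pieces would end up sharing the same byte. */
unsigned buggy_field_size(unsigned blocks_per_piece)
{
    return blocks_per_piece / 8;        /* 0 for small pieces: wrong */
}

unsigned fixed_field_size(unsigned blocks_per_piece)
{
    return (blocks_per_piece + 7) / 8;  /* round up; always >= 1 */
}
```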
Richard Nyberg
af31e76618
* Don't hold a net_buf on allocation. Do it when it's really needed instead.
* Add function net_unsend to safely remove network buffers from a peer's
outq. Use it where needed in peer.c.
19 years ago
Richard Nyberg
bf2c2c6338
Make sure we don't empty the outq and leave the write callback enabled.
19 years ago
Richard Nyberg
4f916d8abd
Remove dead code.
19 years ago
Richard Nyberg
9cc1ffda34
Rework the outgoing network buffers. The buffers now contain more
information on what data they hold, making it unnecessary to have
other lists tracking that information. Also they now have a reference
count, making it possible to use the same buffer on many peers.
This is only a start though. I've just done enough for btpd to work,
I haven't taken advantage of the reference count yet.
19 years ago
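The reference count that lets one buffer sit on many peers' queues can be sketched like this. A minimal illustration under assumed names (nb_create, nb_hold, nb_drop), not btpd's actual API:

```c
#include <stdlib.h>

/* Reference-counted outgoing network buffer. */
struct net_buf {
    int refs;
    /* ... buffer type and payload would live here ... */
};

struct net_buf *nb_create(void)
{
    struct net_buf *nb = calloc(1, sizeof(*nb));
    nb->refs = 1;                 /* creator holds the first reference */
    return nb;
}

void nb_hold(struct net_buf *nb)  /* another peer queues this buffer */
{
    nb->refs++;
}

/* Release one reference; returns 1 if the buffer was freed. */
int nb_drop(struct net_buf *nb)
{
    if (--nb->refs == 0) {
        free(nb);
        return 1;
    }
    return 0;
}
```

With this scheme, messages that are identical for every peer (choke, unchoke, interest, uninterest, have) need only be allocated once.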
Richard Nyberg
762b3560a5
Missing space.
19 years ago
Richard Nyberg
aa31f523a3
Queue the bitfield for sending after the handshake is completed.
This fixes a bug where a peer could miss pieces btpd got while the
peer was in handshake.
Also, btpd now sends multiple have messages instead of a bitfield
when it's better to do so.
19 years ago
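The "haves instead of a bitfield when it's better" choice comes down to wire size. In the BitTorrent peer protocol a have message is 9 bytes (4-byte length prefix, 1-byte id, 4-byte piece index) and a bitfield message is 5 + ceil(npieces / 8) bytes. Whether btpd uses exactly this threshold is an assumption; a sketch:

```c
/* Decide whether individual have messages are cheaper on the wire
 * than one bitfield message.  Wins when the client has few pieces. */
int send_haves_instead(unsigned npieces_have, unsigned npieces_total)
{
    unsigned haves_len = 9 * npieces_have;             /* 9 bytes each */
    unsigned bitfield_len = 5 + (npieces_total + 7) / 8;
    return haves_len < bitfield_len;
}
```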
Richard Nyberg
fcc9418b92
At each bandwidth call the remaining bandwidth counter is set to limit / hz.
Since the set hz is (almost) never achieved, the denominator is now based
on the average hz over the last 5 seconds.
19 years ago
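The per-tick allowance described above can be sketched as below. The averaging window comes from the commit message; the function name and the guard against a zero denominator are assumptions for illustration:

```c
#define AVG_SECS 5  /* averaging window from the commit message */

/* Per-tick bandwidth allowance: divide the limit by the tick rate
 * actually achieved over the last AVG_SECS seconds, not by the
 * configured hz, which is (almost) never reached. */
long allowance(long limit, long ticks_last_5s)
{
    long avg_hz = ticks_last_5s / AVG_SECS;
    if (avg_hz < 1)
        avg_hz = 1;          /* avoid division by zero at startup */
    return limit / avg_hz;
}
```

If the timer only fires 20 times a second instead of a nominal 100, each tick hands out limit / 20 bytes, so the full limit is still spent over a second.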
Richard Nyberg
3ae85c522a
Spelling.
19 years ago