raise the max piece size to 512 MiB and fix check when loading torrents #7736

Closed · wants to merge 1 commit
1 change: 1 addition & 0 deletions ChangeLog
@@ -1,3 +1,4 @@
+* raise the max piece size to 512 MiB and fix check when loading torrents
 * back-port fix last_upload and last_download resume data fields to use posix time
 * back-port treat CGNAT address range as local IPs
26 changes: 11 additions & 15 deletions include/libtorrent/block_cache.hpp
@@ -235,40 +235,36 @@ namespace aux {
     // bitfield
     std::uint16_t blocks_in_piece = 0;

-    // this is the number of threads that are currently holding
-    // a reference to this piece. A piece may not be removed from
-    // the cache while this is > 0
-    std::uint8_t piece_refcount = 0;
-
-    // 8 bytes padding
-
-    // ---- 64 bit boundary ----
+    // the number of blocks in the cache for this piece
+    std::uint16_t num_blocks = 0;

     // the number of dirty blocks in this piece
-    std::uint32_t num_dirty:14;
+    std::uint16_t num_dirty = 0;

-    // the number of blocks in the cache for this piece
-    std::uint32_t num_blocks:14;
Owner Author:
by bumping this to 16 bits, the max piece size goes to 512 MiB

Contributor:
How about num_dirty?

Owner Author:
yeah, that should be wider too

Contributor:
> How about num_dirty?

And pinned?

Contributor:
> How about num_dirty?
> And pinned?

I hope you haven't lost sight of it. It is also 15 bits.

Owner Author:
I'm tempted to just fix the check rather than mucking about with this struct. Do you think there's a legitimate use for torrents with piece sizes this large? It really seems questionable.

Contributor:
> Do you think there's a legitimate use for torrents with piece sizes this large? It really seems questionable.

I only know that the problem was reported by users... Besides, what seems pointless today may become commonplace tomorrow. Support for a 512 MiB piece size does not require major changes, so why not add it?
Still, I don't insist. Handling the limits correctly is good in itself. But I would still expand the potential capacity in an upcoming major update.

Owner Author:
I think it would be much more reasonable to grow torrents in the number of pieces, rather than piece size.

+    // this is the number of threads that are currently holding
+    // a reference to this piece. A piece may not be removed from
+    // the cache while this is > 0
+    std::uint16_t piece_refcount:8;

     // while we have an outstanding async hash operation
     // working on this piece, 'hashing' is set to 1
     // When the operation returns, this is set to 0.
-    std::uint32_t hashing:1;
+    std::uint16_t hashing:1;

     // if we've completed at least one hash job on this
     // piece, and returned it. This is set to one
-    std::uint32_t hashing_done:1;
+    std::uint16_t hashing_done:1;

     // if this is true, whenever refcount hits 0,
     // this piece should be deleted from the cache
     // (not just demoted)
-    std::uint32_t marked_for_deletion:1;
+    std::uint16_t marked_for_deletion:1;

     // this is set to true once we flush blocks past
     // the hash cursor. Once this happens, there's
     // no point in keeping cache blocks around for
     // it in avoid_readback mode
-    std::uint32_t need_readback:1;
+    std::uint16_t need_readback:1;

     // indicates which LRU list this piece is chained into
     enum cache_state_t
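For context on the 512 MiB figure discussed above: libtorrent transfers data in fixed 16 KiB (0x4000-byte) blocks, and piece sizes are in practice powers of two. A minimal standalone sketch of the arithmetic (my illustration, not part of the patch):

    #include <cstdint>

    // libtorrent's block size is fixed at 16 KiB
    constexpr std::int64_t block_size = 0x4000;

    // a 16-bit counter can track at most 65535 blocks per piece
    constexpr std::int64_t max_blocks = (1 << 16) - 1;

    // the largest power-of-two piece that fits is 2^15 blocks:
    // 32768 * 16 KiB = 512 MiB
    constexpr std::int64_t max_piece = std::int64_t(1) << 29;

    static_assert(max_piece / block_size <= max_blocks,
        "512 MiB pieces fit in a 16-bit block counter");
    static_assert((max_piece * 2) / block_size > max_blocks,
        "1 GiB pieces would overflow it");

    int main() {}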
4 changes: 2 additions & 2 deletions include/libtorrent/piece_picker.hpp
@@ -97,9 +97,9 @@ namespace libtorrent {
     // priority factor
     prio_factor = 3,
     // max blocks per piece
-    // there are counters in downloading_piece that only have 15 bits to
+    // there are counters in downloading_piece that only have 16 bits to
Contributor:
I didn't see that the counter itself was changed, just this comment... Or is it already 16 bits wide?

Owner Author:
num_blocks went from 14 bits to a uint16_t (i.e. 16 bits)

Contributor:
> num_blocks went from 14 bits to a uint16_t (i.e. 16 bits)

Ah, I thought it was about something else.

     // count blocks per piece, that's restricting this
-    max_blocks_per_piece = (1 << 15) - 1
+    max_blocks_per_piece = (1 << 16) - 1
Owner Author:
the reason this wasn't caught earlier is that this 15 should have been 14. The bitfield was probably changed without this constant being updated.
   };

   struct block_info
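The mismatch the author describes above: the load-time check accepted up to (1 << 15) - 1 = 32767 blocks per piece, while the cache's num_blocks bitfield could only count to (1 << 14) - 1 = 16383, so for example a 256 MiB piece (16384 blocks of 16 KiB) passed validation and then wrapped the counter. A minimal standalone demonstration of that wraparound (my illustration, not libtorrent code):

    #include <cstdint>
    #include <iostream>

    struct counters
    {
        // mirrors the old cached_piece_entry field width
        std::uint32_t num_blocks:14;
    };

    int main()
    {
        counters c{};
        c.num_blocks = (1 << 14) - 1;      // 16383, the counter's real capacity
        ++c.num_blocks;                    // a 256 MiB piece needs 16384 blocks...
        std::cout << c.num_blocks << "\n"; // ...and the counter wraps to 0
    }

Raising both the counter width and the check to 16 bits keeps the two in agreement.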
5 changes: 2 additions & 3 deletions src/block_cache.cpp
@@ -292,8 +292,7 @@ static_assert(int(job_action_name.size()) == static_cast<int>(job_action_t::num_
 #endif

 cached_piece_entry::cached_piece_entry()
-    : num_dirty(0)
-    , num_blocks(0)
+    : piece_refcount(0)
     , hashing(0)
     , hashing_done(0)
     , marked_for_deletion(false)
@@ -790,7 +789,7 @@ bool block_cache::blocks_flushed(cached_piece_entry* pe, int const* flushed, int

   m_write_cache_size -= num_flushed;
   m_read_cache_size += num_flushed;
-  pe->num_dirty -= num_flushed;
+  pe->num_dirty = aux::numeric_cast<std::uint16_t>(pe->num_dirty - num_flushed);

   update_cache_state(pe);
   return maybe_free_piece(pe);
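A note on the explicit cast above: num_dirty is now a plain std::uint16_t rather than a bitfield, so the subtraction promotes to int and the result must be narrowed back on assignment, which compound assignment does implicitly (and -Wconversion flags). aux::numeric_cast is, as I understand it, libtorrent's checked narrowing cast; the sketch below shows the same idea with a plain static_cast:

    #include <cstdint>

    int main()
    {
        std::uint16_t num_dirty = 10;
        int const num_flushed = 3;

        // num_dirty -= num_flushed;  // promotes to int, then narrows back
        //                            // implicitly; -Wconversion warns here

        // making the narrowing explicit silences the warning and documents it
        num_dirty = static_cast<std::uint16_t>(num_dirty - num_flushed);
        return num_dirty; // 7
    }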
7 changes: 6 additions & 1 deletion src/torrent_info.cpp
@@ -1027,11 +1027,16 @@ namespace {

   // extract piece length
   std::int64_t piece_length = info.dict_find_int_value("piece length", -1);
-  if (piece_length <= 0 || piece_length > std::numeric_limits<int>::max())
+  if (piece_length <= 0)
   {
     ec = errors::torrent_missing_piece_length;
     return false;
   }
+  if (piece_length > std::numeric_limits<int>::max())
+  {
+    ec = errors::invalid_piece_size;
+    return false;
+  }
   file_storage files;
   files.set_piece_length(static_cast<int>(piece_length));

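With the check split in two, callers can now tell a missing or non-positive "piece length" field apart from one that exceeds the supported range. A sketch of how that might look (assuming the error_code overload of the from_span constructor; buf is a hypothetical bencoded .torrent buffer):

    #include <libtorrent/torrent_info.hpp>
    #include <libtorrent/error_code.hpp>
    #include <iostream>
    #include <vector>

    // buf is assumed to hold a bencoded .torrent file
    bool try_load(std::vector<char> const& buf)
    {
        lt::error_code ec;
        lt::torrent_info const ti(buf, ec, lt::from_span);
        if (ec == lt::error_code(lt::errors::torrent_missing_piece_length))
            std::cerr << "no usable \"piece length\" field\n";
        else if (ec == lt::error_code(lt::errors::invalid_piece_size))
            std::cerr << "\"piece length\" exceeds INT_MAX\n";
        return !ec;
    }

The constructor taking only the buffer and from_span, as used in the test below, reports the same condition by throwing lt::system_error instead.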
18 changes: 13 additions & 5 deletions test/test_torrent.cpp
@@ -167,7 +167,7 @@ void test_running_torrent(std::shared_ptr<torrent_info> info, std::int64_t file_
   TEST_CHECK(h.get_file_priorities() == prio);
 }

-void test_large_piece_size(int const size)
+void test_large_piece_size(std::int64_t const size)
 {
   entry torrent;
   entry& info = torrent["info"];
@@ -179,7 +179,15 @@ void test_large_piece_size(int const size)
   std::vector<char> buf;
   bencode(std::back_inserter(buf), torrent);
   add_torrent_params atp;
-  atp.ti = std::make_shared<torrent_info>(buf, from_span);
+  try
+  {
+    atp.ti = std::make_shared<torrent_info>(buf, from_span);
+  }
+  catch (lt::system_error const& e)
+  {
+    TEST_CHECK(e.code() == error_code(lt::errors::invalid_piece_size));
+    return;
+  }
   atp.save_path = ".";

   lt::session ses;
@@ -212,9 +220,9 @@ TORRENT_TEST(long_names)

 TORRENT_TEST(large_piece_size)
 {
-  test_large_piece_size(32768 * 16 * 1024);
-  test_large_piece_size(65536 * 16 * 1024);
-  test_large_piece_size(65537 * 16 * 1024);
+  test_large_piece_size(0x40000000 + 0x4000);
+  test_large_piece_size(0x80000000);
+  test_large_piece_size(0x100000000);
 }

 TORRENT_TEST(total_wanted)
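For reference, annotations (mine) on the new test values; widening the parameter to std::int64_t is what makes the last two calls expressible on platforms where int is 32 bits:

    test_large_piece_size(0x40000000 + 0x4000); // 1 GiB + 16 KiB: fits in int, so
                                                // parsing succeeds; rejection, if any,
                                                // happens past the torrent_info check
    test_large_piece_size(0x80000000);          // 2 GiB: exceeds INT_MAX, torrent_info
                                                // throws invalid_piece_size (caught above)
    test_large_piece_size(0x100000000);         // 4 GiB: needs more than 32 bits outright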