
[Feature] Initial pass at removing qs logging in view of the PlayerEventLogging System #4542

Draft · wants to merge 26 commits into base: master
Conversation

neckkola (Contributor)
Description

In discussions with the devs, this is the start of migrating the qs logging system into the player event system. Since the player event logging system uses a JSON payload, this change allows that payload to be flattened into 'qs-like' structures for easier server operator queries.

At present, this enables:
  • AA purchases
  • NPC kills (including named and raid mobs)
  • loot
  • merchant selling and purchasing
  • NPC hand-ins
  • player speech (off by default)
  • trades

The qs logging code for the above is still present; it can be removed (now or in the future) to give operators a migration path.
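A minimal sketch of the flattening idea described above, using a string map to stand in for the parsed JSON payload; the struct and field names here are hypothetical, not the actual schema from this PR:

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// Hypothetical flat, qs-like row; field names are illustrative only.
struct HandinEntry {
    uint32_t npc_id  = 0;
    uint32_t item_id = 0;
    uint32_t charges = 0;
};

// Flatten one event payload (a string map stands in for the real JSON
// object) into the flat struct an operator can query directly.
HandinEntry FlattenHandin(const std::map<std::string, std::string>& payload) {
    HandinEntry e;
    e.npc_id  = static_cast<uint32_t>(std::stoul(payload.at("npc_id")));
    e.item_id = static_cast<uint32_t>(std::stoul(payload.at("item_id")));
    e.charges = static_cast<uint32_t>(std::stoul(payload.at("charges")));
    return e;
}
```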

Type of change

Please delete options that are not relevant.

  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Testing

Tested locally on RoF2. Requires further testing and documentation on my part; I wanted to start the review process.

Clients tested:
N/A

Checklist

  • I have tested my changes
  • I have performed a self-review of my code, ensuring variables, functions and methods are named in a human-readable way, and that comments are added only where naming alone can't give enough context.
  • [TBD] I have made corresponding changes to the documentation (if applicable, if not delete this line)
  • I own the changes of my code and take responsibility for the potential issues that occur
  • If my changes make database schema changes, I have tested the changes on a local database (attach image). Updated version.h CURRENT_BINARY_DATABASE_VERSION to the new version. (Delete this if not applicable)

@Akkadius (Member) left a comment

Amazing work!

Some things we need to follow up on

  • All QS table code needs to be removed, period, both in world and in QS ingestion. We will provide an optional transformer for anyone who wants to transform old tables to the new data.
  • We will provide a data transformer from old QS tables to new. I can take this on.
  • Player event processing code needs to be on its own thread and its own database connection, I can also take this on
  • We need a command in world that exposes the ETL settings in JSON format so it can feed into Spire admin. Similar to ./bin/world database:schema
  • As you can see in the PR comments, most of the vectors can be reserved, which helps substantially by bulk-allocating memory once instead of growing it on each event. We know the size of all of the data coming in, so we just need to reserve; even if that means doing another loop prior to fetching each event, call reserve in a switch.
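The reserve suggestion above can be sketched as follows; NpcHandinEntry and the input vector are simplified stand-ins for the real types:

```cpp
#include <cassert>
#include <vector>

struct NpcHandinEntry { int item_id = 0; };  // simplified stand-in

// We know the incoming size up front, so reserve once; push_back then
// never reallocates mid-loop.
std::vector<NpcHandinEntry> BuildEntries(const std::vector<int>& handin_items) {
    std::vector<NpcHandinEntry> out;
    out.reserve(handin_items.size());  // one allocation instead of repeated growth
    for (int id : handin_items) {
        out.push_back(NpcHandinEntry{id});
    }
    return out;
}
```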

out_entries.augment_5_id = augments[4];
out_entries.augment_6_id = augments[5];
}
etl_queues.npc_handin_entries.push_back(out_entries);
Member

Since we know the size of etl_queues.npc_handin_entries from in.handin_items, we can do a reserve before the loop to pre-allocate memory.

Contributor Author

While we know the 'entries' sizes, there could be other events of the same type in the batch queue, so we would need to cycle through the batch_queue to compute the total size across the potentially multiple events. Would that be more beneficial than the dynamic size adjustments as the vectors are added to? If so, happy to do that.
Same comment applies to all the reserve items.
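The counting pass described above could look roughly like this; EventType and QueuedEvent are hypothetical stand-ins for the real batch-queue types:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical stand-ins for the real player event batch queue.
enum class EventType { NpcHandin, Loot };
struct QueuedEvent {
    EventType   type;
    std::size_t entry_count;  // entries this event contributes
};

// First pass: total the entries per type across the whole batch, so each
// etl_queues vector can be reserved once even when a type repeats.
std::size_t CountEntries(const std::vector<QueuedEvent>& batch, EventType t) {
    std::size_t total = 0;
    for (const auto& e : batch) {
        if (e.type == t) {
            total += e.entry_count;
        }
    }
    return total;
}
```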

@Akkadius (Member) · Nov 17, 2024

Yeah, it makes a huge difference to pre-allocate, especially with higher event volume.

Contributor Author

Added reserve to all etl_queues.

Member

Thank you sir

common/events/player_event_logs.cpp — 4 outdated review threads (resolved)
@@ -123,21 +134,466 @@ void PlayerEventLogs::ProcessBatchQueue()

BenchTimer benchmark;

EtlQueues etl_queues{};
Member

Looking at all of this in ProcessBatchQueue and in our deletions, I think we should wrap these functions in thread calls so we don't tie up the main thread.

This also means we should create a separate database connection just for processing player events, because even if all of this ran in another thread, it would still be lock-contended with database locks on the main thread.

World needs to be kept as lightweight as possible in the main thread, and this certainly adds weight if anything starts to take more than 500ms, which it most certainly could here.

The same would go for QS, if folks use QS to process.

I may end up taking this one on.

common/events/player_event_logs.cpp — 2 review threads (resolved)
zone/client.cpp — 2 review threads (one outdated; resolved)
{
auto results = db.QueryDatabase(
fmt::format(
"SELECT AUTO_INCREMENT FROM information_schema.tables WHERE TABLE_NAME = '{}';",
Member

We should assume that the server user can't directly query information_schema. Is there a reason we leaned into this versus max + 1?

Contributor Author

Yes. I ran into a couple of test cases that failed. If I empty an etl table, max + 1 will return 1 when in fact AUTO_INCREMENT may be 100. Further, once the retention period is hit, the next id via max + 1 will again differ from AUTO_INCREMENT.
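The failure mode above can be illustrated with a toy in-memory model (not the real schema or database code): the table's AUTO_INCREMENT keeps counting after rows are deleted, while max(id) + 1 resets:

```cpp
#include <cassert>
#include <cstdint>
#include <set>

// Toy model of a table's id behavior; illustration only.
struct ToyTable {
    std::set<uint64_t> ids;
    uint64_t auto_increment = 1;  // what MySQL would hand out next

    uint64_t Insert() {
        uint64_t id = auto_increment++;
        ids.insert(id);
        return id;
    }

    void PurgeAll() { ids.clear(); }  // e.g. a retention-period delete

    // The max + 1 approach: correct only while no rows have been removed.
    uint64_t MaxPlusOne() const { return ids.empty() ? 1 : *ids.rbegin() + 1; }
};
```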

@@ -441,7 +441,7 @@ int main(int argc, char **argv)
}

if (player_event_process_timer.Check()) {
player_event_logs.Process();
std::jthread event_thread(&PlayerEventLogs::Process, &player_event_logs);
Member

There's also a spot in queryserv that needs the same change.

@Akkadius (Member)

Amazing work, sir!

One more stretch goal if you are down for it:

  • Have zone servers connect directly to queryserv when it is available, so that events go straight to queryserv instead of going through world
