Solution to write large amount of points #578
Hi, nothing immediately stands out, beyond the fact that you are writing an enormous amount of data and fields every 10 ms. That could be pushing things too far.
Given the large size of the data, there may be delays in processing on the InfluxDB side. You might consider splitting the data across multiple writers. Alternatively, to narrow things down further, instead of updating live data, read known data from a file into memory, then write that known data and do the same math.
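The "split the data across writers" suggestion can be sketched without a live server: partition the field set into fixed-size chunks (one per writer thread) and serialize each chunk as InfluxDB line protocol. The names below (`sensor_…` field keys, the `factory` measurement, a chunk size of 5,000 to match the report) are hypothetical illustrations, not anything from the original thread.

```python
def chunk(items, size):
    """Yield successive fixed-size chunks from a list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def to_line_protocol(measurement, fields, timestamp_ns):
    """Serialize one point with many fields as a line-protocol string."""
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement} {field_str} {timestamp_ns}"

# Hypothetical data: 80,000 float fields, as in the report.
fields = {f"sensor_{i:05d}": float(i) for i in range(80_000)}

# 16 chunks of 5,000 fields each -> one chunk per writer.
chunks = list(chunk(list(fields.items()), 5_000))
lines = [to_line_protocol("factory", dict(c), 1_700_000_000_000_000_000)
         for c in chunks]

print(len(chunks), len(chunks[0]))  # 16 5000
```

Each `lines[i]` would then be handed to its own writer (thread or process) so no single HTTP request carries the full 80,000-field payload.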
Proposal:
Hi, I have a multi-threaded program that contains 80,000 fields, and the data changes roughly every 10 ms.
Is there an efficient method to write that amount of data to a local InfluxDB instance?
Current behavior:
I have implemented the batching-write sample: the program creates multiple threads, and each thread manages 5,000 items and writes them to the database every 10 ms.
The execution time for each thread to update and write its data is under 1 ms, so the interval between consecutive data points should be under 11 ms. But when I check the update frequency with a query, the gap is more than 200 ms.
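For scale, the numbers reported above imply a very high sustained ingest rate, which is worth making explicit before tuning anything (this is back-of-envelope arithmetic on the figures in the report, not a measurement):

```python
# 80,000 fields, each rewritten every 10 ms.
fields = 80_000
interval_s = 0.010

values_per_second = int(fields / interval_s)
print(values_per_second)  # 8000000
```

Eight million field values per second is a demanding sustained load for a single node backed by an HDD, which makes the >200 ms gaps consistent with the server falling behind rather than the client threads being slow.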
Desired behavior:
Is there an efficient method to write a large amount of data to InfluxDB without such delays?
Alternatives considered:
Server specs:
CPU: Intel Xeon Silver 4210 (20 CPUs)
RAM: 128 GB
Database storage: local disk, HDD: xTB
Use case:
This program gathers factory sensor data.
Thank you!