TSDB data import tool #671
base: master
Conversation
Created a tool to import data formatted according to the Prometheus exposition format. The tool can be accessed via the TSDB CLI. Addresses prometheus/prometheus#535.
Signed-off-by: Dipack P Panjabi <dpanjabi@hudson-trading.com>
The Linux build seems to have failed because it could not download a package.
@krasi-georgiev It's not strictly related, but I was going through the […]. What is your opinion on this?
@juliusv added the tsdb dump code so maybe he can share his use case for that. Can you show an example file of what the data would look like for this import tool, and maybe some example code on how to write such a file that can be used to import the data? This should give a better idea of which format to use. My first impression is that it would be easier for people to write a tool that produces a file with JSON data than the Prometheus format, but some examples would help in making this decision.
@krasi-georgiev The exporter I used to export data in expfmt is here: https://gist.github.com/dipack95/171b7e3ac226f296f49f0e320eb486bf
You can run it using […]. The data that it exports looks like this:
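(The sample output originally pasted here was not preserved; an illustrative snippet of the text exposition format, with made-up metric names, values, and optional millisecond timestamps, would look like this:)

```
# HELP node_cpu_seconds_total Seconds the CPUs spent in each mode.
# TYPE node_cpu_seconds_total counter
node_cpu_seconds_total{cpu="0",mode="idle"} 102400.5 1565000000000
node_cpu_seconds_total{cpu="0",mode="user"} 1523.25 1565000000000
# HELP node_memory_free_bytes Free memory in bytes.
# TYPE node_memory_free_bytes gauge
node_memory_free_bytes 4.194304e+08 1565000000000
```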
It's quite easy to export data in this format, provided you use the Prometheus client libraries, and that is why I prefer it over JSON, for which we would have to write an intermediate converter that outputs expfmt data anyway, to be accepted by the text parsers.
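For reference, a minimal sketch of writing such a file with the Prometheus Go client libraries (this is not the gist above; the collector type, metric name, value, and file name are all made up). It uses a custom collector so that explicit historical timestamps can be attached to the samples:

```go
package main

import (
	"os"
	"time"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/common/expfmt"
)

// histCollector emits a single sample carrying an explicit (historical) timestamp.
type histCollector struct {
	desc *prometheus.Desc
	ts   time.Time
	val  float64
}

func (c *histCollector) Describe(ch chan<- *prometheus.Desc) { ch <- c.desc }

func (c *histCollector) Collect(ch chan<- prometheus.Metric) {
	m := prometheus.MustNewConstMetric(c.desc, prometheus.GaugeValue, c.val)
	// Wrap the metric so the text output carries the historical timestamp.
	ch <- prometheus.NewMetricWithTimestamp(c.ts, m)
}

func main() {
	reg := prometheus.NewRegistry()
	reg.MustRegister(&histCollector{
		desc: prometheus.NewDesc("example_temperature_celsius", "Example back-filled metric.", nil, nil),
		ts:   time.Now().Add(-24 * time.Hour),
		val:  21.5,
	})

	mfs, err := reg.Gather()
	if err != nil {
		panic(err)
	}

	f, err := os.Create("import_data.txt")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Encode the gathered metric families in the text exposition format.
	enc := expfmt.NewEncoder(f, expfmt.FmtText)
	for _, mf := range mfs {
		if err := enc.Encode(mf); err != nil {
			panic(err)
		}
	}
}
```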
What would that look like for samples across multiple blocks?
@brian-brazil I don't quite understand what you mean by samples across different blocks. Do you mean different metrics exposed in the same text file?
No, I mean how do you handle data that overlaps multiple blocks.
To prevent any issues when importing into an existing TSDB instance, I have a step before the actual import that checks for any overlaps, and if there are any, it aborts the import process. If you wanted to go ahead and import data that overlaps with what is present in the target TSDB instance (because you have […]), you can choose to skip this check.
This will then result in massive blocks, which isn't usually desirable. You want to have new blocks that match up with existing blocks.
You're right about that; importing a lot of data will create large blocks, but I assumed that the block sizes would line up over time during compaction as well, so this wouldn't be much of an issue? Alternatively, we could call […].
Actually, I misspoke; calling […].
Going down that route, too, still gives me blocks of similar sizes. I'm not quite sure if there is a clean way to properly separate the samples into (almost) even blocks. Do you have any suggestions?
You can either use the existing blocks, or just go with 2h.
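Splitting the incoming samples into fixed 2h ranges is plain integer arithmetic on the millisecond timestamps; a minimal sketch (the helper name is illustrative, not from the PR):

```go
package importer

// blockDuration is the smallest default block range, 2h in milliseconds.
const blockDuration = int64(2 * 60 * 60 * 1000)

// rangeForTimestamp returns the [mint, maxt) boundaries of the fixed,
// 2h-aligned range that contains the given timestamp (in milliseconds).
func rangeForTimestamp(t int64) (mint, maxt int64) {
	mint = t - (t % blockDuration)
	return mint, mint + blockDuration
}
```

A sample taken at 13:37 UTC would then land in that day's 12:00-14:00 range, matching the 2h grid a live instance cuts its smallest blocks on.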
I opted against using the existing blocks, as with this method we can import data directly into a live instance, and it will be picked up as usual. As for the 2h block range option, I'm currently creating a temp TSDB instance using the default exponential block ranges, and then creating a snapshot from it. Shouldn't the […]?
That depends on the flags passed to the running Prometheus.
Using the time ranges of those blocks to create new blocks would be ideal, as @brian-brazil said. You can get those time ranges by opening the DB in read-only mode. And if possible, it would be better to avoid opening a DB instance and creating blocks via it; instead I suggest re-using the existing functions from compact.go and writing the blocks directly, which would be more efficient.
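As a rough sketch of that suggestion, assuming the parsed samples are already held in something that satisfies tsdb.BlockReader (for example an in-memory head), a block directory could be written through the compact.go helpers along these lines; the constructor signatures are from memory and may differ slightly between tsdb versions:

```go
package importer

import (
	"context"

	"github.com/go-kit/kit/log"
	"github.com/prometheus/tsdb"
	"github.com/prometheus/tsdb/chunkenc"
)

// writeBlock persists the samples held by br as a single block directory
// (meta.json, index, chunks) under destDir, covering [mint, maxt), without
// opening a full TSDB instance.
func writeBlock(destDir string, br tsdb.BlockReader, mint, maxt int64) error {
	const twoHours = int64(2 * 60 * 60 * 1000) // block range in milliseconds

	compactor, err := tsdb.NewLeveledCompactor(
		context.Background(),
		nil, // no metrics registerer in this sketch
		log.NewNopLogger(),
		[]int64{twoHours},
		chunkenc.NewPool(),
	)
	if err != nil {
		return err
	}

	_, err = compactor.Write(destDir, br, mint, maxt, nil)
	return err
}
```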
Closed by mistake. Reopened.
Also I think @krasi-georgiev is still gathering opinions on the data format for the import data.
@codesome One of the use cases for importing data, for us, is to back-populate data for a new metric that we've just started recording. In that instance, I think it makes sense to create new blocks entirely. To ensure that we don't run into issues regarding overlapping data, I open the target TSDB instance in read-only mode, and validate that the most recent datapoint in the incoming data is before the start of the current data. The user can choose to skip this step, however. Regarding the large block size issue, I am working off @brian-brazil's recommendation and splitting the incoming data into chunks of 2h each, to keep block sizes down.
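A minimal sketch of that validation step, assuming the existing blocks' metadata has already been collected (for example through the read-only DB mode mentioned above); the helper name and error text are illustrative:

```go
package importer

import (
	"fmt"

	"github.com/prometheus/tsdb"
)

// validateNoOverlap returns an error if the incoming data's time range
// intersects any existing block. For back-population, the incoming data is
// expected to end before the earliest existing block starts.
func validateNoOverlap(existing []tsdb.BlockMeta, incomingMint, incomingMaxt int64) error {
	for _, m := range existing {
		// Block ranges are [MinTime, MaxTime); any intersection counts as overlap.
		if incomingMint < m.MaxTime && incomingMaxt > m.MinTime {
			return fmt.Errorf("incoming data [%d, %d) overlaps existing block %s [%d, %d)",
				incomingMint, incomingMaxt, m.ULID, m.MinTime, m.MaxTime)
		}
	}
	return nil
}
```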
Blocks are now created with a max duration of 2h each, and are written directly to disk using the compaction functions, instead of creating a new TSDB instance.
I'm not sure pulling everything into RAM is a good idea.
The blocks are now written as soon as they're cut, to prevent potentially going OOM by parsing too many samples at once. The Windows build seems to have timed out before it began.
Smaller blocks don't rule out the possibility of a big block at the end. It is very much possible. And this will result in a huge index; the limit on the index right now is 64 GiB (soon to be lifted, but such a big index will degrade performance).
I'm presuming we're aligning them in the usual way.
@codesome Wouldn't the scenario that you've pointed out happen anyway over the course of normal operation? Given enough time, obviously.
@dipack95 The block size is limited to 1 month, or retention_duration/10, whichever is lower. So no, the entire database won't turn into a single block. The above has the potential to cross the limit.
Based on my understanding, we could inflate the index files massively if we set the retention duration long enough. I don't think this importer really creates that problem, as it already exists, depending on how you configure Prometheus. As the comments in prometheus/prometheus#535 suggest, there are a lot of use cases where bulk importing of data is useful. In practice, the primary purpose of Prometheus is to represent the recent state of a system, and it should be quite difficult for users to hit the index limits you've pointed out. For example, at ~60 million series per block with 5 labels each, the index size is around 5-6 GiB; there wasn't much lag when querying this data.
Yes. Also, the block size is capped at 31 days, so you cannot inflate beyond that.
I think the block here doesn't span a large time range. Chunk references are also a part of the index. I have personally seen the index hit 20 GiB for 8-10M series with a retention duration of 90 days (which means a block is capped at 9 days). The idea here is to align the time ranges of the newly created blocks with the existing blocks, to avoid the cases I described in #671 (comment). This is because overlapping blocks are not kept as-is in the database; they are all compacted into a single huge block. That won't be taken care of just by using small block durations.
If I understand correctly, you're looking for a block structure as follows:
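(The illustration originally attached here was not preserved; presumably it showed something along these lines, with the imported blocks cut to the same time boundaries as the existing blocks for the same time window, rather than to arbitrary 2h windows; the times are illustrative:)

```
existing:  | block A: 00:00-18:00 | block B: 18:00-20:00 | block C: 20:00-22:00 |
imported:  | new:     00:00-18:00 | new:     18:00-20:00 | new:     20:00-22:00 |
```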