
DROP MEASUREMENT "xyz" not working #9694

Open
mgf909 opened this issue Apr 9, 2018 · 77 comments

@mgf909

mgf909 commented Apr 9, 2018

Bug report

Cannot delete measurements.
I accidentally created a bunch of MS SQL Server measurements in my Telegraf db and want to remove them. When I try to remove them with DROP MEASUREMENT "" it doesn't actually seem to drop it...

My mistake seems to have created over 1000 measurements, so I suspect the bug/issue is due to the number of measurements.
If I create a new test database I can drop measurements as expected.

System info:
Influx 1.5.1 running on Red Hat 4.8.5-11

Steps to reproduce:

Create a lot of measurements - I did this by trying to use the MS SQL Server Telegraf input (a scripted alternative is sketched after the steps below).

  1. USE telegraf
  2. DROP MEASUREMENT "xa7_sessions"
    It doesn't matter what measurement I specify, I cannot remove it. There is no error.
  3. SHOW MEASUREMENTS
    It's still there!
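A minimal, hedged sketch of a scripted way to create the same precondition without the Telegraf MS SQL Server input, assuming a local 1.5.x instance on localhost:8086; the test_measurement_* names are made up for illustration:

# create the database and write ~1500 throwaway measurements via the HTTP write API
influx -execute 'CREATE DATABASE telegraf'
for i in $(seq 1 1500); do
  curl -s -XPOST 'http://localhost:8086/write?db=telegraf' \
    --data-binary "test_measurement_${i},host=demo value=1" > /dev/null
done

# then attempt the drop and list what remains
influx -database telegraf -execute 'DROP MEASUREMENT "test_measurement_1"'
influx -database telegraf -execute 'SHOW MEASUREMENTS'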

Expected behavior:
I'd expect the measurement to no longer be returned after running the DROP MEASUREMENT command.

Actual behavior:
SHOW MEASUREMENTS continues to return the measurement... and it will not die ;-(


@mgf909
Author

mgf909 commented Apr 9, 2018

telegraf_measurements.zip
Here is a list of the measurements...quite a lot!

@foxmask

foxmask commented Apr 11, 2018

+1 I can't drop anything at all.
Even SERIES are still there :(

@max3163

max3163 commented Apr 11, 2018

Same issue here too; no problem when we switch back to version 1.3.

@foxmask

foxmask commented Apr 11, 2018

@max3163 thanks for the tip, it seems that 1.4.1 works fine too

@binary0111

I moved from 1.4.3 to 1.5.2, everything seems to work fine for me.

@kichristensen

Same issue here in version 1.5.2

@yellowpattern

In my experience, dropping a series is not always "instant". If a "drop" is not given the correct parameters, it can silently do nothing without an error.

This command has no observability, and what it does (or doesn't do) is hidden from users. This needs to be improved and "drop series" made more transparent about what it does (or at least offer the option of having it behave that way).

A more "verbose" drop command is required, telling you how many series were found to drop, including 0 if none are to be dropped.

Plus include something in the log file - maybe write out to the log file the name of each series dropped and the total number dropped.

@fhriley

fhriley commented Apr 28, 2018

Same here. Can't delete old measurements I don't want:

root@influx:/mnt/data/influxdb# influx
Connected to http://localhost:8086 version 1.5.2
InfluxDB shell version: 1.5.2
> use graphite
Using database graphite
> show series where host='freenas.lan' limit 10
key
---
aggregation_value,host=freenas.lan,instance=cpu-sum,type=cpu,type_instance=idle
aggregation_value,host=freenas.lan,instance=cpu-sum,type=cpu,type_instance=interrupt
aggregation_value,host=freenas.lan,instance=cpu-sum,type=cpu,type_instance=nice
aggregation_value,host=freenas.lan,instance=cpu-sum,type=cpu,type_instance=system
aggregation_value,host=freenas.lan,instance=cpu-sum,type=cpu,type_instance=user
cpu_temp_value,host=freenas.lan,instance=0,type=temperature
cpu_temp_value,host=freenas.lan,instance=1,type=temperature
cpu_temp_value,host=freenas.lan,instance=2,type=temperature
cpu_temp_value,host=freenas.lan,instance=3,type=temperature
cpu_value,host=freenas.lan,instance=0,type=cpu,type_instance=idle
> drop series where host='freenas.lan'
> show series where host='freenas.lan' limit 10
key
---
aggregation_value,host=freenas.lan,instance=cpu-sum,type=cpu,type_instance=idle
aggregation_value,host=freenas.lan,instance=cpu-sum,type=cpu,type_instance=interrupt
aggregation_value,host=freenas.lan,instance=cpu-sum,type=cpu,type_instance=nice
aggregation_value,host=freenas.lan,instance=cpu-sum,type=cpu,type_instance=system
aggregation_value,host=freenas.lan,instance=cpu-sum,type=cpu,type_instance=user
cpu_temp_value,host=freenas.lan,instance=0,type=temperature
cpu_temp_value,host=freenas.lan,instance=1,type=temperature
cpu_temp_value,host=freenas.lan,instance=2,type=temperature
cpu_temp_value,host=freenas.lan,instance=3,type=temperature
cpu_value,host=freenas.lan,instance=0,type=cpu,type_instance=idle
> select * from aggregation_value where host='freenas.lan' limit 10
name: aggregation_value
time                host        instance type type_instance value
----                ----        -------- ---- ------------- -----
1500261791902566479 freenas.lan cpu-sum  cpu  idle          604978350
1500261791902566479 freenas.lan cpu-sum  cpu  interrupt     348403
1500261791902566479 freenas.lan cpu-sum  cpu  nice          464942
1500261791902566479 freenas.lan cpu-sum  cpu  system        15242180
1500261791902566479 freenas.lan cpu-sum  cpu  user          15781610
1500261801903654932 freenas.lan cpu-sum  cpu  idle          604983172
1500261801903654932 freenas.lan cpu-sum  cpu  system        15242230
1500261801903654932 freenas.lan cpu-sum  cpu  user          15781673
1500261811887399490 freenas.lan cpu-sum  cpu  idle          604988107
1500261811887399490 freenas.lan cpu-sum  cpu  interrupt     348405
>

@kichristensen

In my case I have now waited a week and restarted the database multiple times; the measurement is still not dropped.

@kichristensen

Is anyone looking into this?

@timhallinflux
Contributor

Yes. We are investigating this.

@timhallinflux
Contributor

For those experiencing this issue...we are assuming that:

  1. You have upgraded InfluxDB from an older version to 1.5.x
  2. You have some amount of data already stored within the database...spanning multiple shards

The part where we could use additional hints -- and where we are broadening our tests in an attempt to replicate this -- is regarding the following:
a) did you switch the index-version setting from inmem to tsi1?
b) did you also build/rebuild the index using influx-inspect buildtsi command?
https://docs.influxdata.com/influxdb/v1.5/tools/influx_inspect/#influx-inspect-buildtsi
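For anyone unsure how to answer (a) and (b), a rough, hedged sketch of how to check, assuming the default packaged paths (adjust to your layout):

# (a) which index the config asks for (inmem is the default if unset)
grep index-version /etc/influxdb/influxdb.conf

# (b) rebuilding the on-disk index with influx_inspect, run as the influxdb user
# while the daemon is stopped
systemctl stop influxdb
sudo -u influxdb influx_inspect buildtsi -datadir /var/lib/influxdb/data -waldir /var/lib/influxdb/wal
systemctl start influxdb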

@kichristensen

Yes, I've upgraded to 1.5.x and changed the index from inmem to tsi1, and also built the index. Although I must say that I didn't build the index right away; as far as I remember I did it a day later.

@lyondhill
Contributor

I was able to reproduce the behavior.

How to reproduce
I started influxdb version 1.5.1 using inmem indexing. I created a database with a shard duration of 1 hour. I then inserted data for 6 hours (I'm not sure it needs to be that long). Near the middle of the 6th hour I switched the indexing to tsi1 and restarted the service. I then inserted data for another 8 hours (again, not sure if it needs to be that long). This created a situation where some shards were created using inmem, some mixed and some tsi1.
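A hedged sketch of that setup (the database name and the write command are illustrative, not the exact ones used):

# 1-hour shards so the index switch lands mid-retention
influx -execute 'CREATE DATABASE mixed_idx WITH DURATION 30d REPLICATION 1 SHARD DURATION 1h NAME "autogen"'

# write points for ~6 hours while influxdb.conf has index-version = "inmem" ...
curl -XPOST 'http://localhost:8086/write?db=mixed_idx' --data-binary 'measurement99,dup0=dup value=1'

# ... then change index-version to "tsi1" in /etc/influxdb/influxdb.conf, restart,
# and keep writing for several more hours so inmem and tsi1 shards coexist
systemctl restart influxdb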

Findings
Drop measurement requests succeed, but the measurement still shows up in the list of measurements. Additionally, even though the measurement is still listed, a show series command returns no series for that measurement.

> show measurement
...
measurement97
measurement98
measurement99
> drop measurement measurement99
> show measurement
...
measurement97
measurement98
measurement99
> show series
...
measurement96,96uniq0=uniq,96uniq1=uniq,96uniq2=uniq,96uniq3=uniq,dup0=dup,dup1=dup,dup2=dup,dup3=dup
measurement97,347uniq0=uniq,347uniq1=uniq,347uniq2=uniq,347uniq3=uniq,dup0=dup,dup1=dup,dup2=dup,dup3=dup
measurement97,97uniq0=uniq,97uniq1=uniq,97uniq2=uniq,97uniq3=uniq,dup0=dup,dup1=dup,dup2=dup,dup3=dup
measurement98,348uniq0=uniq,348uniq1=uniq,348uniq2=uniq,348uniq3=uniq,dup0=dup,dup1=dup,dup2=dup,dup3=dup
measurement98,98uniq0=uniq,98uniq1=uniq,98uniq2=uniq,98uniq3=uniq,dup0=dup,dup1=dup,dup2=dup,dup3=dup
> 

@serputko

I'm using dockerized influxdb and found the following workaround:

  1. do DROP MEASUREMENT "xyz"
  2. do SHOW MEASUREMENTS and check that the measurement was not removed
  3. stop & restart the influxdb instance
  4. do SHOW MEASUREMENTS and see that the "xyz" measurement does not exist anymore.
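A hedged sketch of that sequence for a dockerized instance; the container name (influxdb) and database (telegraf) are assumptions:

docker exec -it influxdb influx -database telegraf -execute 'DROP MEASUREMENT "xyz"'
docker exec -it influxdb influx -database telegraf -execute 'SHOW MEASUREMENTS'   # "xyz" still listed
docker restart influxdb
docker exec -it influxdb influx -database telegraf -execute 'SHOW MEASUREMENTS'   # "xyz" gone after the restart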

@mvarge

mvarge commented Aug 2, 2018

Facing a similar issue here. I can drop newly created measurements, but old ones that were populated before the upgrade aren't erased. Any update on this?

@wenhuanglin

wenhuanglin commented Aug 27, 2018

@serputko thanks for the workaround. I did the same things, and the "xyz" measurement no longer shows up in "show measurements". Yet when I start to write "abc" data into a new measurement also named "xyz", there are deprecated field keys that are not part of my new "abc" data. So I think the "xyz" measurement is still not properly deleted.

@herrkutt

herrkutt commented Aug 27, 2018

Facing this issue with 1.6.1, with a database that was created using 1.6.1, i.e. no migrated data.
Tried restarting the instance as well, but no luck.

InfluxDB shell version: 1.6.1
use iPerf
Using database iPerf
> show measurements
name: measurements
name
----
iPerfLog
> drop measurement "iPerfLog"
> show measurements
name: measurements
name
----
iPerfLog
> drop measurement iPerfLog
> show measurements
name: measurements
name
----
iPerfLog

@ronansalmon

ronansalmon commented Sep 5, 2018

I'm seeing this too :


> drop MEASUREMENT procstats;
> SHOW QUERIES
qid query        database duration status
--- -----        -------- -------- ------
5   SHOW QUERIES telegraf 70µs     running

> show series from procstat
procstat,host=xxx.wle,pattern=.,process_name=vnetd
procstat,host=xxx.wle,pattern=.,process_name=xinetd
....
# service influxdb restart
> show series from procstat
procstat,host=xxx.wle,pattern=.,process_name=vnetd
procstat,host=xxx.wle,pattern=.,process_name=xinetd
....

I'm using influxdb 1.6.1, and I did switch the index-version to tsi1 at some stage.
Restarting influxdb does not make the procstat series go away.

@azhurbilo

azhurbilo commented Oct 9, 2018

Facing the same issue with both InfluxDB 1.5.2 and 1.6.3 (tsm engine, in-memory index).

I really can't understand how such a time-series store can be used in production when you can't delete data :(

@timhallinflux
Contributor

Are you continuing to feed data into the cluster? If so, the measurement is re-created...as new data points arrive.

@azhurbilo

azhurbilo commented Oct 10, 2018

Are you continuing to feed data into the cluster?

Yes, but without the "incorrect" fields whose type I want to drop.

example:

Before:

# http_request measurement
http_calls_count: integer
time_spent_in_http_calls: integer # incorrect type, I want to delete
  • run drop measurement http_request
  • apps continue to push metrics without the "time_spent_in_http_calls" field

All points have been deleted but the schema has not changed (time_spent_in_http_calls: integer still exists).
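A hedged way to confirm the stale schema from the CLI; the database name is a placeholder and the measurement name is the one from the example above:

influx -database mydb -execute 'SHOW FIELD KEYS FROM http_request'
# expected after the drop: no rows
# observed: time_spent_in_http_calls / integer still listed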

@sebbacon

Using 1.6.3, not an upgraded instance. Restarting the influxdb server caused the changes to be visible.

@ghost

ghost commented Oct 30, 2018

Hi,
Same here..

drop measurement system
select * from system

after some seconds...

select * from system
name: system
time Temperature Battery 1 host host_1 load1 load15 load5 n_cpus n_users uptime uptime_format


1540909380000000000 host_name 0.03 0.1 0.06 24 3 19452 5:24

The point is that "Temperature Battery 1" is NOT defined in the system measurement!! It is an SNMP input that is written correctly to another measurement...
Also, data is written to host_1, which has not been defined by me... normally it should go to "host".

@sada-narayanappa

I just cannot drop a measurement; when I drop it, it appears to go away, but when I insert a row all the old field types come back.

But doing the same with another measurement name works. Somehow the old measurement field types are cached. How can I completely wipe out a measurement?

@ghost

ghost commented Nov 2, 2018

Hi,
Like sada, nothing works. I cannot erase the measurement "system"...
Like him/her, I tried everything... everything... Checked that I'm not using "host" as a field anywhere... Ran the drop hundreds of times...
As a last test, I stopped telegraf to avoid writes to the DB, dropped the measurement and restarted influxdb. Verified that the measurement "system" is not there... good...
Launched telegraf again with only [[inputs.system]] activated and... again... the "system" measurement is recreated with "host", "host_1", etc...

I know that in the past (some days ago) I made a mistake and wrote fields named "host" and another one into "system", but how come I drop the measurement and they are "rewritten" again?

Like Sada said, it is as if the field types are cached somewhere, and when you create the measurement again it picks up the old values...

@sada-narayanappa

To work around (a fuller sketch of the commands follows the list):

  1. I backed up the database (influxd backup -portable -database dodadb 106)
  2. dropped the database (run from influx: drop database dodadb)
  3. restored from the backup (influxd restore -portable -database dodadb 106)
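A fuller, hedged sketch of those three steps with an explicit backup directory (the directory is arbitrary; for a portable restore the flag for selecting a database from the backup is -db in the 1.x docs):

influxd backup -portable -database dodadb ./dodadb_backup
influx -execute 'DROP DATABASE dodadb'
influxd restore -portable -db dodadb ./dodadb_backup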

The problem is cleared - this bug is very, very annoying and very expensive if the database is large.

I am unsure if influxdb is ready! I have started to switch over to "https://prometheus.io/"

There are too many bugs in influxdb.

@f1-outsourcing

Just wanted to drop a line and say that I was hit by the same bug and the following steps fixed it.

1. Upgrade to 1.7.2 (apt,yum,whatever)

2. Do TSM to TSI migration
# do everything under the user running influx or you will end up with bad permissions
 su -l influxdb

# convert the TSM shards to TSI format (old -> new format/type)
influx_inspect buildtsi -datadir /location_of_influxdb_data/ -waldir /location_of_influxdb_wal/

# do an influxdb restart to be sure the new shard files are loaded and OK
systemctl restart influxdb.service

# DONE. You should be able to drop whatever you want
DROP MEASUREMENT "godKnowsWhat"

--

I did this and got messages like those below, but it did not result in being able to drop.

2018-12-21T11:40:18.807789Z  info  Rebuilding retention policy   {"log_id": "0CW8uIJG000", "db_instance": "collections", "db_rp": "autogen"}
2018-12-21T11:40:18.810653Z  info  Rebuilding shard  {"log_id": "0CW8uIJG000", "db_instance": "collections", "db_rp": "autogen", "db_shard_id": 14}
2018-12-21T11:40:18.811422Z  info  Checking index path     {"log_id": "0CW8uIJG000", "db_instance": "collections", "db_rp": "autogen", "db_shard_id": 14, "path": "data/collections/autogen/14/index"}
2018-12-21T11:40:18.812105Z  info  tsi1 index already exists, skipping {"log_id": "0CW8uIJG000", "db_instance": "collections", "db_rp": "autogen", "db_shard_id": 14, "path": "data/collections/autogen/14/index"}
2018-12-21T11:40:18.812769Z  info  Rebuilding shard  {"log_id": "0CW8uIJG000", "db_instance": "collections", "db_rp": "autogen", "db_shard_id": 20}
2018-12-21T11:40:18.813406Z  info  Checking index path     {"log_id": "0CW8uIJG000", "db_instance": "collections", "db_rp": "autogen", "db_shard_id": 20, "path": "data/collections/autogen/20/index"}
2018-12-21T11:40:18.814084Z  info  tsi1 index already exists, skipping {"log_id": "0CW8uIJG000", "db_instance": "collections", "db_rp": "autogen", "db_shard_id": 20, "path": "data/collections/autogen/20/index"}
2018-12-21T11:40:18.814774Z  info  Rebuilding shard  {"log_id": "0CW8uIJG000", "db_instance": "collections", "db_rp": "autogen", "db_shard_id": 27}
2018-12-21T11:40:18.814927Z  info  Checking index path     {"log_id": "0CW8uIJG000", "db_instance": "collections", "db_rp": "autogen", "db_shard_id": 27, "path": "data/collections/autogen/27/index"}
2018-12-21T11:40:18.815070Z  info  tsi1 index already exists, skipping {"log_id": "0CW8uIJG000", "db_instance": "collections", "db_rp": "autogen", "db_shard_id": 27, "path": "data/collections/autogen/27/index"}

@f1-outsourcing

Well, thanks. For helping other people, these are the steps I did:
systemctl stop influxdb
cp -a /var/lib/influxdb /var/lib/influxdb.backup
My data is very important... verify that both copies are OK:
diff <(find /var/lib/influxdb -type f -exec md5sum {} + | sort -k 2 | cut -f1 -d" ") <(find /var/lib/influxdb.backup -type f -exec md5sum {} + | sort -k 2 | cut -f1 -d" ")
Erase all the indexes:
find /var/lib/influxdb -type d -name index | while read index; do rm -Rf "$index" ; done
Verify that no index directory is left:
find /var/lib/influxdb/ -type d -name index | wc -l (should be 0)
Build the index:
su -s /bin/bash -c 'influx_inspect buildtsi -datadir /var/lib/influxdb/data/ -waldir /var/lib/influxdb/wal/' influxdb
Restart influxdb:
systemctl restart influxdb

That's all.

I deleted all index directories, as you described, and then rebuilt them. Now the measurements are gone. It makes me wonder if the data is also gone, because when I ran queries on the now 'disappeared' measurements, they returned data. Or was that all coming from the index?
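One hedged way to check whether only the index changed is to count points before deleting the index directories and again after the rebuild (the database and measurement names are placeholders):

influx -database collections -execute 'SELECT COUNT(*) FROM "some_measurement"'
# ... stop influxd, remove the index directories, run buildtsi, restart ...
influx -database collections -execute 'SELECT COUNT(*) FROM "some_measurement"'
# matching counts suggest the TSM data files were untouched and only the tsi1
# index was rebuilt from them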

@f1-outsourcing

Awesome. If anyone else is experiencing this and is willing to send me their tsm and tsi index files, that might help in figuring out what went wrong, whether it can be automatically fixed, and whether it has been prevented for the future.

These are the index directories of my collections database before I erased them
https://rgw.roosit.eu:7480/RoosIT:test/collections-index.tgz

@hardiksondagar

Same issue in 1.7.4

@StephaneDci

Same issue here in version 1.5.2

@rdxmb

rdxmb commented Jun 6, 2019

Why is this still an issue after about one year? This is essential for running influxdb in production.

@conet

conet commented Jun 6, 2019

1.5.2 is pretty old; I've seen no problems in this area in the latest stable release, although the occasional restart is still necessary in some cases when dropping series. I'm using inmem indexes.

@rdxmb

rdxmb commented Jun 6, 2019

the occasional restart is still necessary in some cases when dropping series

correct. Also with 1.7.4. This seems quite strange to me.

@coolmic

coolmic commented Jun 11, 2019

Using 1.7.6 with docker.
DROP MEASUREMENT doesn't work instantly if there is a lot of data.
You need to restart the server.

@tman77

tman77 commented Jul 30, 2019

Can confirm the same issue with 1.7.4. Converted data and wal to tsi1. Had a measurement that was no longer needed, with very high cardinality due to having data as tags that should instead have been fields.

Used a select ... into query to migrate the data to another measurement and convert the tags to fields. Cardinality dropped dramatically, improving memory usage and overall performance. Now I'd like to remove the old measurement.

I tried to run drop measurement, but it appears nothing happens. So instead I ran deletes from that measurement until no data was left. Restarted influx, and then tried to run drop measurement Metrics_test. It just hangs.

Select * from Metrics_test returns nothing.
Select count(*) from Metrics_test returns nothing.

But, show series exact cardinality returns:

name: Metrics_test
count
153288

@changchengx

The problem still exists on 1.7.8 after "stop->start" influxdb.service
nstcc2@nstcloudcc2:$ influx -database collectd -execute 'show stats' | grep indexType | sort | uniq
tags: database=collectd, engine=tsm1, id=11, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/11, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/11
tags: database=collectd, engine=tsm1, id=14, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/14, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/14
tags: database=collectd, engine=tsm1, id=17, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/17, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/17
tags: database=collectd, engine=tsm1, id=20, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/20, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/20
tags: database=collectd, engine=tsm1, id=23, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/23, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/23
tags: database=collectd, engine=tsm1, id=26, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/26, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/26
tags: database=collectd, engine=tsm1, id=29, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/29, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/29
tags: database=collectd, engine=tsm1, id=32, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/32, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/32
tags: database=collectd, engine=tsm1, id=35, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/35, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/35
tags: database=collectd, engine=tsm1, id=38, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/38, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/38
tags: database=collectd, engine=tsm1, id=41, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/41, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/41
tags: database=collectd, engine=tsm1, id=5, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/5, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/5
tags: database=collectd, engine=tsm1, id=8, indexType=inmem, path=/var/lib/influxdb/data/collectd/ceph_15d/8, retentionPolicy=ceph_15d, walPath=/var/lib/influxdb/wal/collectd/ceph_15d/8
tags: database=_internal, engine=tsm1, id=18, indexType=inmem, path=/var/lib/influxdb/data/_internal/monitor/18, retentionPolicy=monitor, walPath=/var/lib/influxdb/wal/_internal/monitor/18
tags: database=_internal, engine=tsm1, id=21, indexType=inmem, path=/var/lib/influxdb/data/_internal/monitor/21, retentionPolicy=monitor, walPath=/var/lib/influxdb/wal/_internal/monitor/21
tags: database=_internal, engine=tsm1, id=24, indexType=inmem, path=/var/lib/influxdb/data/_internal/monitor/24, retentionPolicy=monitor, walPath=/var/lib/influxdb/wal/_internal/monitor/24
tags: database=_internal, engine=tsm1, id=27, indexType=inmem, path=/var/lib/influxdb/data/_internal/monitor/27, retentionPolicy=monitor, walPath=/var/lib/influxdb/wal/_internal/monitor/27
tags: database=_internal, engine=tsm1, id=30, indexType=inmem, path=/var/lib/influxdb/data/_internal/monitor/30, retentionPolicy=monitor, walPath=/var/lib/influxdb/wal/_internal/monitor/30
tags: database=_internal, engine=tsm1, id=33, indexType=inmem, path=/var/lib/influxdb/data/_internal/monitor/33, retentionPolicy=monitor, walPath=/var/lib/influxdb/wal/_internal/monitor/33
tags: database=_internal, engine=tsm1, id=36, indexType=inmem, path=/var/lib/influxdb/data/_internal/monitor/36, retentionPolicy=monitor, walPath=/var/lib/influxdb/wal/_internal/monitor/36
tags: database=_internal, engine=tsm1, id=39, indexType=inmem, path=/var/lib/influxdb/data/_internal/monitor/39, retentionPolicy=monitor, walPath=/var/lib/influxdb/wal/_internal/monitor/39
nstcc2@nstcloudcc2:$

@stale

stale bot commented Jan 19, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label Jan 19, 2020
@stale

stale bot commented Jan 26, 2020

This issue has been automatically closed because it has not had recent activity. Please reopen if this issue is still important to you. Thank you for your contributions.

@stale stale bot closed this as completed Jan 26, 2020
@yellowpattern

Does 1.7.9 fix this?

@gigake

gigake commented Feb 5, 2020

Problem still exists in 1.7.9-1!

show series where "envtype"='dev';
drop series where "envtype"='dev';
show series where "envtype"='dev'; <-- still shows series

It seems that at first it dropped some of the series, but now it is not dropping anything more :(

@dgnorton dgnorton reopened this Feb 11, 2020
@russorat russorat removed the wontfix label Feb 18, 2020
@sofixa

sofixa commented Mar 10, 2020

Just chiming in, the issue still persists in 1.7.10. A DELETE deletes the data, but the measurements and series persist... Is there really no way to fix this?

@tushar2013

The issue still persists in 1.8.1.
Is there any progress?

@tushar2013

Is anyone even working on this anymore?

@dgnorton
Contributor

This issue is on our radar. We need to test and see if it still exists in 2.x and schedule it for work.

@Jefferson-Henrique

Jefferson-Henrique commented May 11, 2021

I was having issues too, my workaround was this:

influx.query('DROP MEASUREMENT "your_measurement"', database="your_db", method="POST")
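For reference, a hedged equivalent using the plain 1.x HTTP API instead of a client library; the point is that DROP has to go through a POST to /query rather than a GET (database and measurement names are the same placeholders as above):

curl -XPOST 'http://localhost:8086/query?db=your_db' \
  --data-urlencode 'q=DROP MEASUREMENT "your_measurement"'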

@vibhanshuvaibhav

Hi, this is still a big issue. Is someone from the Influx side taking a look? We were in a disk space crunch and thought of removing some stale measurements to gain back space; it didn't work!
