no data in table metrics_locator #791

Open
42701618 opened this issue Mar 7, 2017 · 4 comments

42701618 commented Mar 7, 2017

Hi,
In my test project I have been ingesting metrics for 7 days, at roughly 130,000 metrics per minute.
After 7 days, ingestion still appears to work, but there is no data in the metrics_locator table. Data is still being written to metrics_full. Why is there no data in metrics_locator?

I looked at the threads of the process:
ps -mp 7678 -o THREAD,tid,time
USER %CPU PRI SCNT WCHAN USER SYSTEM TID TIME
bf 12.6 - - - - - - 1-00:28:55
bf 0.0 19 - futex_ - - 7678 00:00:00
bf 0.0 19 - futex_ - - 7680 00:00:11
bf 0.0 19 - futex_ - - 7681 00:11:00
bf 0.0 19 - futex_ - - 7682 00:11:00
bf 0.0 19 - futex_ - - 7683 00:11:02
bf 0.0 19 - futex_ - - 7684 00:11:03
bf 0.0 19 - futex_ - - 7685 00:11:01
bf 0.0 19 - futex_ - - 7686 00:10:59
bf 0.0 19 - futex_ - - 7687 00:10:59
bf 0.0 19 - futex_ - - 7688 00:11:00
bf 0.0 19 - futex_ - - 7689 00:01:20
bf 0.0 19 - futex_ - - 7690 00:00:00
bf 0.0 19 - futex_ - - 7691 00:00:00
bf 0.0 19 - futex_ - - 7692 00:00:00
bf 0.0 19 - futex_ - - 7693 00:00:16
bf 0.0 19 - futex_ - - 7694 00:00:14
bf 0.0 19 - futex_ - - 7695 00:00:14
bf 0.0 19 - futex_ - - 7696 00:00:04
bf 0.0 19 - futex_ - - 7697 00:00:00
bf 0.0 19 - futex_ - - 7698 00:04:09
bf 0.0 19 - futex_ - - 7699 00:00:07
bf 0.0 19 - futex_ - - 7702 00:06:01
bf 0.0 19 - futex_ - - 7761 00:00:00
bf 0.0 19 - futex_ - - 7762 00:00:00
bf 0.0 19 - futex_ - - 7768 00:02:03
bf 0.9 19 - futex_ - - 7769 01:55:38
bf 0.0 19 - futex_ - - 7770 00:00:00
bf 0.0 19 - futex_ - - 7771 00:00:00
bf 0.0 19 - - - - 7772 00:00:24
bf 0.0 19 - - - - 7773 00:00:24
bf 0.0 19 - futex_ - - 7774 00:00:29
bf 0.2 19 - - - - 7776 00:26:39

Thread 7769 uses noticeably more CPU than the others; 7769 in hex is 0x1e59, so I ran:

jstack 7678 | grep 1e59 -A 30
"Shard state writer" #15 prio=5 os_prio=0 tid=0x00007fb69433c000 nid=0x1e59 waiting on condition [0x00007fb692181000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at com.rackspacecloud.blueflood.service.ShardStateWorker.run(ShardStateWorker.java:91)
at java.lang.Thread.run(Thread.java:745)

"Shard state reader" #16 prio=5 os_prio=0 tid=0x00007fb69433a800 nid=0x1e58 sleeping[0x00007fb692282000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at com.rackspacecloud.blueflood.service.ShardStateWorker.run(ShardStateWorker.java:91)
at java.lang.Thread.run(Thread.java:745)

"MetadataBatchedWrites" #14 prio=5 os_prio=0 tid=0x00007fb694323800 nid=0x1e52 in Object.wait() [0x00007fb692383000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x00000000c1379e28> (a java.util.TaskQueue)
at java.lang.Object.wait(Object.java:502)
at java.util.TimerThread.mainLoop(Timer.java:526)
- locked <0x00000000c1379e28> (a java.util.TaskQueue)
at java.util.TimerThread.run(Timer.java:505)

"MetadataBatchedReads" #13 prio=5 os_prio=0 tid=0x00007fb694322000 nid=0x1e51 in Object.wait() [0x00007fb692484000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x00000000c137a2c0> (a java.util.TaskQueue)
at java.lang.Object.wait(Object.java:502)
at java.util.TimerThread.mainLoop(Timer.java:526)
- locked <0x00000000c137a2c0> (a java.util.TaskQueue)
at java.util.TimerThread.run(Timer.java:505)

ChandraAddala (Contributor) commented Mar 7, 2017

The data in metrics_locator has a TTL of 7 days. If data is continuously being ingested, I would expect data to be present in the metrics_locator table.

We recently fixed an issue with the locator cache in #750 that looks similar to what you are experiencing; the original report is issue #748. If you are not using the latest version, picking up the latest might help.
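To make the failure mode from #748 concrete, here is a minimal sketch of the write path; the class and method names are hypothetical, but the cache-check pattern and the pre-fix cache builder match the code quoted later in this thread:

import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

class LocatorWriterSketch {
    // Pre-fix builder: access-based expiry only.
    private static final Cache<String, Boolean> insertedLocators = CacheBuilder.newBuilder()
            .expireAfterAccess(10, TimeUnit.MINUTES).concurrencyLevel(16).build();

    void writeMetric(String locator) {
        // Only write the locator row when it is not already cached.
        if (insertedLocators.getIfPresent(locator) == null) {
            insertLocatorRow(locator);      // row in metrics_locator, stored with a 7-day TTL
            insertedLocators.put(locator, Boolean.TRUE);
        }
        insertDatapoint(locator);           // row in metrics_full
    }

    void insertLocatorRow(String locator) { /* hypothetical Cassandra write */ }
    void insertDatapoint(String locator) { /* hypothetical Cassandra write */ }
}

A locator that is ingested continuously is accessed more often than every 10 minutes, so with expireAfterAccess alone its cache entry never expires, the locator row is never rewritten, and after 7 days the TTL silently removes it from metrics_locator.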

42701618 (Author) commented Mar 8, 2017

The data is continuously being ingested, but after 7 days there is nothing in the metrics_locator table.

ChandraAddala (Contributor)
Are you using the latest source code?

42701618 (Author) commented Mar 9, 2017

No, I use blueflood-rax-release-v1.0.1956; I have just merged the fix.

Before:
private static final Cache<String, Boolean> insertedLocators = CacheBuilder.newBuilder()
        .expireAfterAccess(10, TimeUnit.MINUTES).concurrencyLevel(16).build();

Now:
private static final Cache<String, Boolean> insertedLocators = CacheBuilder.newBuilder()
        .expireAfterAccess(10, TimeUnit.MINUTES).expireAfterWrite(3, TimeUnit.DAYS)
        .concurrencyLevel(16).build();

So I will test it again.
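If it helps to verify the merged change, here is a small self-contained demo (using FakeTicker from guava-testlib, not anything from Blueflood) showing that with expireAfterWrite added, a cache entry is evicted three days after it was written even when it is read continuously, which would force the locator row to be reinserted before its 7-day TTL expires:

import java.util.concurrent.TimeUnit;
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.testing.FakeTicker;   // from guava-testlib

public class LocatorCacheExpiryDemo {
    public static void main(String[] args) {
        FakeTicker ticker = new FakeTicker();
        Cache<String, Boolean> insertedLocators = CacheBuilder.newBuilder()
                .expireAfterAccess(10, TimeUnit.MINUTES)
                .expireAfterWrite(3, TimeUnit.DAYS)
                .concurrencyLevel(16)
                .ticker(ticker)
                .build();

        insertedLocators.put("tenant.some.metric", Boolean.TRUE);

        // Touch the entry every 5 minutes for 4 simulated days, mimicking continuous ingestion.
        for (int i = 0; i < 4 * 24 * 12; i++) {
            ticker.advance(5, TimeUnit.MINUTES);
            insertedLocators.getIfPresent("tenant.some.metric");
        }

        // Prints null: the write-based expiry evicted the entry after 3 days despite the reads.
        // With only expireAfterAccess(10, MINUTES) the entry would still be present here.
        System.out.println(insertedLocators.getIfPresent("tenant.some.metric"));
    }
}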
