Replies: 3 comments 7 replies
-
Can you elaborate on that? What queries did you try, and what was your environment setup? I'm asking because "OLAP" is a vague term, and I'm not sure what you were actually referring to here.
-
Sure @li-boxuan, below are a few of the queries we tried. In terms of environment setup, we have a Spark/Hadoop cluster, and these are ad hoc jobs that we are trying to run:
Please let me know if you need any more detailed information.
-
Hello @li-boxuan, I fixed all the above issues and triggered a Spark job to get the groupCount by iterating over 10k vertices in a single loop. I am now getting the error below about unread block data. Have you faced this before, or do you have any solution in mind? Caused by: java.lang.IllegalStateException: unread block data
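For context, this is roughly the shape of the job, as a minimal sketch assuming a standard janusgraph-hadoop read-cql.properties configuration (the file path and label handling below are placeholders, not our actual settings). Expressing the groupCount as a single OLAP traversal lets Spark do the aggregation instead of looping over individual vertices on the driver:

```java
import java.util.Map;

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.spark.process.computer.SparkGraphComputer;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.structure.T;
import org.apache.tinkerpop.gremlin.structure.util.GraphFactory;

public class GroupCountJob {
    public static void main(String[] args) throws Exception {
        // HadoopGraph backed by the Cassandra keyspace; the properties file
        // path is a placeholder for the real read-cql.properties.
        Graph graph = GraphFactory.open("conf/hadoop-graph/read-cql.properties");

        // Route the traversal through SparkGraphComputer so the full scan
        // and the aggregation run on the Spark executors.
        GraphTraversalSource g = graph.traversal().withComputer(SparkGraphComputer.class);

        // One OLAP groupCount by vertex label instead of a client-side loop
        // over 10k vertices.
        Map<Object, Long> counts = g.V().groupCount().by(T.label).next();
        counts.forEach((label, count) -> System.out.println(label + " -> " + count));

        graph.close();
    }
}
```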
-
Hello folks, I have been trying for the last few days to run OLAP queries on a huge graph (around 4 TB of keyspace, with millions of vertices and edges and billions of partitions), without success. So I did some research by looking at the core storage in the Cassandra tables, like edgestore and janusgraph_id, where the data is stored in what appears to be an encrypted/serialized format. I also saw an online meetup where they said that g.V().id() may go through to get the count, but when I ran it through Spark it looks like it also needs a good amount of memory, as it's throwing OOMs.
Is it possible to decrypt and deserialize the data from the edgestore table directly? I want to achieve various functionalities, like deleting data on a periodic basis, running analytical queries to get trending data from the graph, etc. I would love to get some ideas and relevant answers on this.
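For the periodic-deletion part, here is a minimal sketch of JanusGraph's schema-level TTL, which lets the storage backend expire data on its own instead of requiring a separate deletion job (this assumes a TTL-capable backend such as Cassandra; the "event" label, the 30-day window, and the properties path are purely illustrative):

```java
import java.time.Duration;

import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.VertexLabel;
import org.janusgraph.core.schema.JanusGraphManagement;

public class TtlSetup {
    public static void main(String[] args) throws Exception {
        // Open the graph against the same keyspace (path is a placeholder).
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph-cql.properties");

        JanusGraphManagement mgmt = graph.openManagement();

        // Vertex-level TTL requires a static vertex label; "event" and the
        // 30-day retention window are illustrative values only.
        VertexLabel event = mgmt.makeVertexLabel("event").setStatic().make();
        mgmt.setTTL(event, Duration.ofDays(30));

        mgmt.commit();
        graph.close();
    }
}
```

With this in place, expired rows are purged by Cassandra during compaction rather than by an application-level cleanup job.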