I have tried HBase and Elasticsearch as the storage and indexing backends, but my dataset is probably a tenth the size of the data you are dealing with, so I can't speak to the load it would put on the backend or whether it would hold up. My best bet for your data volume would be HBase as the storage backend. To elaborate a little more on the options for bulk importing data from CSV: a dedicated CSV importer isn't exactly present in JanusGraph right now. However, if you are willing to let go of the validations JanusGraph performs during loading, bulk ingestion can be a very good option for ingestion speed. If you are looking for graph databases other than JanusGraph, you could take a look at the following
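If you do go the JanusGraph-on-HBase route, a minimal `janusgraph.properties` sketch for a bulk-loading run might look like the following. The hostnames and sizes are placeholder values, and `storage.batch-loading` is the switch that relaxes JanusGraph's consistency checks and locking during ingestion:

```properties
# Sketch of a janusgraph.properties for bulk ingestion -- values are illustrative
storage.backend=hbase
storage.hostname=127.0.0.1          # placeholder HBase/ZooKeeper host
index.search.backend=elasticsearch
index.search.hostname=127.0.0.1     # placeholder Elasticsearch host

# Relax internal consistency checks and locking to speed up ingestion
storage.batch-loading=true

# Larger write buffer and ID allocation blocks reduce round trips under heavy write load
storage.buffer-size=10240
ids.block-size=1000000
```

Note that with `storage.batch-loading=true` JanusGraph assumes the incoming data is already consistent, so deduplication and constraint checks become your responsibility before the load.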
FYI: these are suggestions from my own research; please do take a look at other community discussions on the same topic as well. Cheers!
Hey All,
I am fairly new to graph databases. I am doing a POC in which I am trying to migrate a graph database off TigerGraph. The database I am dealing with is fairly large, with approximately 1 billion nodes and an equal number of relationships, and I am looking to replicate TigerGraph's OLAP capabilities for rapid query execution. I have tried Neo4j and Neptune, but both take a huge amount of time to execute a query that aggregates a count of records by a property. The following operation takes as long as 30 minutes to complete, given that it has to traverse around 9 million relationships to get from B to a2:
MATCH (a1:A)-[r1]-(b:B)-[r2]-(a2:A) RETURN a2.property AS prop, count(prop)
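For comparison, the same aggregation written as a Gremlin traversal (the query language Neptune and JanusGraph both speak) would look roughly like this; the labels and the property name are carried over from the pattern above and are assumptions about the schema:

```groovy
// Sketch of the equivalent Gremlin traversal -- label/property names assumed from above
g.V().hasLabel('A')            // start at the a1 vertices
 .both()                       // follow any relationship r in either direction
 .hasLabel('B')                // through the B vertices
 .both()                       // follow any relationship r again
 .hasLabel('A')                // reach the a2 vertices
 .groupCount().by('property')  // count reached a2 vertices per property value
```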
I am looking for suggestions on which configuration options to tweak, which storage and index backends to use, and options for bulk importing the data from CSV. Any help will be much appreciated. Thanks!