Issue in setting up distributed sharded cluster #7622
Comments
Has anyone faced a similar issue? It works fine with two nodes, but adding a third one throws errors.
I have also encountered the same problem.
@kishorel please use OrientDB >= 2.2.26 and restart the servers on that version with the database you already created.
@lvca Thanks for the reply. Just to understand: is this a bug? We were actually planning to go to production but hit issues at the initial setup stage. What is the stable release (if any)?
It was a bug. 2.2.26 is stable with HA. With your configuration I suggest using "majority" as the writeQuorum.
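For reference, writeQuorum is a top-level field of config/default-distributed-db-config.json; a minimal sketch of the relevant fragment (the readQuorum value alongside it is illustrative):
{
  "writeQuorum": "majority",
  "readQuorum": 1
}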
I met the same problem on version 2.2.29. Does this bug still exist in v2.2.29? Thanks!
I met the same problem on version 2.2.30! Please help; is there any other method to scale up writes?
Having the same problem in 2.2.32.
I have a similar problem in 2.2.33 with a select query (#8238).
Facing the same issue in 2.2.36.
Facing the same issue with OrientDB 3 and writeQuorum set to majority.
I am using the OrientDB REST API to avoid this issue. (If you are just getting started, you can give that a try.)
Any update on this issue? Facing the same issue in OrientDB 3.0.12 as well.
@persisharma Yes, I am facing this issue in 3.0.13 as well. Can somebody provide a solution for this?
I'm also affected in OrientDB 3.0.30.
Same issue in 3.2.0.
OrientDB Version: 2.2.25
Java Version: 1.8.0_60
OS: CentOS release 6.9
Expected behavior
I'm trying to set up OrientDB on 3 servers, with a class "users" split across 3 clusters: "users_0", "users_1", and "users_2". My servers are called node1, node2, and node3. I want the clusters arranged so that there are 2 copies of each cluster, so if 1 node goes down I still have access to all the data.
I've tried setting this up with the following steps:
Download OrientDB 2.2.25 Community and extract it onto the 3 servers.
Run dserver.sh, configure the root password, and set the node name on all three machines, then stop the instance (CTRL+C).
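A sketch of this step on each node (the extraction path is an assumption; as described above, the first run prompts for the root password and node name):
cd orientdb-community-2.2.25/bin
./dserver.sh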
Edit default-distributed-db-config.json on node1 as follows:
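A sketch of a default-distributed-db-config.json matching the layout described above (two copies of each users cluster); the exact server assignments and quorum values here are assumptions, not the reporter's original file:
{
  "autoDeploy": true,
  "readQuorum": 1,
  "writeQuorum": "majority",
  "executionMode": "undefined",
  "readYourWrites": true,
  "newNodeStrategy": "static",
  "servers": {
    "*": "master"
  },
  "clusters": {
    "internal": {},
    "users_0": { "servers": ["node1", "node2"] },
    "users_1": { "servers": ["node2", "node3"] },
    "users_2": { "servers": ["node3", "node1"] },
    "*": { "servers": ["<NEW_NODE>"] }
  }
}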
and hazelcast.xml as:
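A sketch of a matching hazelcast.xml; the group name/password and multicast settings shown are the stock defaults and are assumptions here:
<?xml version="1.0" encoding="UTF-8"?>
<hazelcast xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <group>
    <name>orientdb</name>
    <password>orientdb</password>
  </group>
  <network>
    <port auto-increment="true">2434</port>
    <join>
      <multicast enabled="true">
        <multicast-group>235.1.1.1</multicast-group>
        <multicast-port>2434</multicast-port>
      </multicast>
    </join>
  </network>
</hazelcast>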
Start node1 with dserver.sh.
Create a database using the console on node1:
connect remote:localhost root password
create database remote:localhost/global root password plocal graph
Create a class and rename the default cluster:
create class users extends V
(This created a number of clusters equal to the number of CPU cores on the server, so I removed all of them except users.)
alter cluster users name users_0
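A sketch of how those extra per-core clusters can be removed before the rename (the name users_1 here is illustrative; the actual names can be checked with the console's clusters command, and the same pair of commands repeats for each extra cluster):
alter class users removecluster users_1
drop cluster users_1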
Start up node2 with dserver.sh, wait for the database to auto-deploy, then start up node3 and wait for the deploy.
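One way to check the distributed status after each node joins is the console's HA STATUS command (available in the 2.2 line; the flags below are an assumption of the useful subset):
connect remote:localhost/global root password
ha status -servers -db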
After starting the 3rd node, I saw the error below.
Is there something wrong in the way I've created/configured the database, or is this a bug?
I even tried interchanging the nodes, but I see the same error whenever I add the third node.