
Parse timeout after sharding mongodb #1693

Closed

benishak opened this issue May 3, 2016 · 1 comment

Comments

benishak (Contributor) commented May 3, 2016

Environment:

  • MongoDB : 3x AWS EC2 (t2-large - Amazon Linux AMI 2016.03.0) eu-central
  • ParseServer: AWS EC2 (t2-medium - CentOS 7) (with PM2) eu-central
  • Parse API

Problem

I have a MongoDB cluster with 3 shards; each shard has 3 mongod processes. I also use 3 mongos routers and 3 config servers.

Yesterday I turned on sharding for some collections. As the shard key I used a hashed _id; before sharding each collection I created the hashed index on it.
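
Roughly what that setup looked like in the mongos shell, sketched with the standard helpers (the database "x" and the collection "Activities" are taken from the sh.status() output below; the other collections were done the same way):

mongos> use x
mongos> sh.enableSharding("x")
mongos> db.Activities.createIndex({ "_id" : "hashed" })          // hashed index on _id first
mongos> sh.shardCollection("x.Activities", { "_id" : "hashed" })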

The balancer is running, which is quite normal, but my MongoDB is now very slow. I even stopped the balancer (commands below), but it is still slow and my app gets timeouts or a "Database is not available" error.

Everything is about 20x slower: a query that used to take 1 second now takes around 20 seconds, sometimes longer.
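
For reference, stopping and checking the balancer from the mongos shell looks roughly like this (a sketch of the standard helpers; the sh.status() output further down shows it enabled and running again):

mongos> sh.setBalancerState(false)   // disable balancing
mongos> sh.getBalancerState()        // false once disabled
mongos> sh.isBalancerRunning()       // stays true until any in-flight migration finishes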

But here is the strange part: when I use my own Parse Server hosted on AWS, it is really fast, under 600 ms. Parse.com sometimes returns "Your Mongo Database is not available", I think due to a timeout.
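
My own Parse Server instance talks to the cluster directly, roughly like this (a sketch; the hostname, port, and keys are placeholders, not my real values):

// index.js for a self-hosted parse-server, run under PM2; all values below are placeholders
var express = require('express');
var ParseServer = require('parse-server').ParseServer;

var api = new ParseServer({
    databaseURI: 'mongodb://mongos-host.eu-central:27017/x',   // a mongos router in eu-central
    appId: 'APP_ID',
    masterKey: 'MASTER_KEY',
    serverURL: 'http://localhost:1337/parse'
});

var app = express();
app.use('/parse', api);   // mount the Parse API
app.listen(1337);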

mongos> sh.status()
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("571529b0ff2021142cfe35ac")
}
  shards:
        {  "_id" : "dbShard_0",  "host" : "dbShard_0/dbnode-0.x.7041.mongodbdns.com:27000,dbnode-1.x.7041.mongodbdns.com:27000,dbnode-2.x.7041.mongodbdns.com:27000" }
        {  "_id" : "dbShard_1",  "host" : "dbShard_1/dbnode-0.x.7041.mongodbdns.com:27001,dbnode-1.x.7041.mongodbdns.com:27003,dbnode-2.x.7041.mongodbdns.com:27001" }
        {  "_id" : "dbShard_2",  "host" : "dbShard_2/dbnode-0.x.7041.mongodbdns.com:27002,dbnode-1.x.7041.mongodbdns.com:27001" }
  balancer:
        Currently enabled:  yes
        Currently running:  yes
                Balancer lock taken at Tue May 03 2016 11:37:12 GMT+0000 (UTC) by dbnode-2:27017:1461004720:1804289383:Balancer:1681692777
        Collections with active migrations:
                x.Activities started at Tue May 03 2016 11:38:15 GMT+0000 (UTC)
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                51 : Success
  databases:
        {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
        {  "_id" : "x-staging",  "partitioned" : false,  "primary" : "dbShard_2" }
        {  "_id" : "x",  "partitioned" : true,  "primary" : "dbShard_2" }
                x.Activities
                        shard key: { "_id" : "hashed" }
                        chunks:
                                dbShard_0       4
                                dbShard_1       4
                                dbShard_2       12
                        too many chunks to print, use verbose if you want to force print
                x.ActiveItems
                        shard key: { "_id" : "hashed" }
                        chunks:
                                dbShard_0       4
                                dbShard_1       3
                                dbShard_2       13
                        too many chunks to print, use verbose if you want to force print
                x.MyProfile
                        shard key: { "_id" : "hashed" }
                        chunks:
                                dbShard_0       3
                                dbShard_1       3
                                dbShard_2       7
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong("-7722015169224027045") } on : dbShard_0 Timestamp(2, 0)
                        ...
                x.Layer1
                        shard key: { "_id" : "hashed" }
                        chunks:
                                dbShard_0       2
                                dbShard_1       1
                                dbShard_2       2
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong("-5438161287775123827") } on : dbShard_0 Timestamp(2, 0)
                       ...
                x.Layer2
                        shard key: { "_id" : "hashed" }
                        chunks:
                                dbShard_0       3
                                dbShard_1       2
                                dbShard_2       3
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong("-5089453483550015893") } on : dbShard_0 Timestamp(2, 0)
                         ...

        {  "_id" : "test",  "partitioned" : false,  "primary" : "dbShard_0" }

My AWS instances (the cluster and my own MongoDB) are located in eu-central.

benishak changed the title from "Parse" to "Parse timeout after sharding mongodb" on May 3, 2016
drew-gross (Contributor) commented:

Parse.com is hosted in AWS US-East. Connecting to a server in Europe will take a long time. I suggest moving your data to US East if you are still sending traffic to api.parse.com.
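
A quick way to see the difference is to time a round trip from the mongo shell, once from a us-east box and once from eu-central, against the same mongos (a sketch):

mongos> var t0 = new Date(); db.adminCommand({ ping: 1 }); print((new Date() - t0) + " ms round trip")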
