
Reservation failure #41

Open
pogilon opened this issue May 31, 2017 · 7 comments

Comments


pogilon commented May 31, 2017

Hello @connormanning. Could you please help me figure out what is causing this issue? Thanks! I get the following error at some point while serving a 300M point cloud.

Exception in pool task: std::bad_alloc
16:07:02:11 LOG Error handling: { code: 400, message: 'Reservation failure' }


pogilon commented May 31, 2017

I think the computer just ran out of RAM...

@connormanning
Collaborator

I agree, looks like you're out of memory. What are your server system specs like - how much RAM? What is the cacheSize setting in your configuration file?


pogilon commented May 31, 2017

@connormanning I had a machine with 4 GB of RAM and a 1 GB cacheSize (2 vCPUs), an Amazon EC2 t2.medium. I upgraded to 8 GB, but it still came close to running out. Any recommendations? Thanks.


pogilon commented Jun 12, 2017

@connormanning any ideas? Thanks!

@connormanning
Collaborator

You might try lowering your cacheSize. The cacheSize does not represent the total memory that Greyhound can use, only the size allowed for a specific portion of its memory usage. The actual memory usage is correlated with, but not limited by, this value. Maybe try 512 MB.
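
As a rough way to watch the actual footprint while the server is running (a minimal sketch; the process name greyhound below is an assumption, substitute whatever command you launch it with):

free -h                                    # overall memory and swap usage
ps -o pid,rss,vsz,comm -C greyhound        # resident/virtual size of the Greyhound process
watch -n 5 'free -h'                       # watch memory change while clients are querying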

You should also add your machine's instance store as swap space. For the c3 line of instances, that looks like:

mkswap /dev/xvdc                                                      # format the instance-store device as swap
swapon /dev/xvdc                                                      # enable it immediately
echo "/dev/xvdc        none    swap    sw  0       0" >> /etc/fstab   # persist across reboots

The mounts might be different on the t2 lineup, so check your lsblk output to see what you have available.
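
If lsblk shows no spare device (the t2 family is EBS-only, with no instance store), a swap file on the root volume is the fallback. A minimal sketch, with the 2G size and /swapfile path as placeholders:

lsblk                                                                 # list attached block devices and mounts
fallocate -l 2G /swapfile                                             # allocate the swap file (size is an example)
chmod 600 /swapfile                                                   # swap files must not be world-readable
mkswap /swapfile
swapon /swapfile
echo "/swapfile        none    swap    sw  0       0" >> /etc/fstab   # persist across reboots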


pogilon commented Jul 15, 2017

Thank you, @connormanning. I will let you know soon how it goes.


pogilon commented Jul 28, 2017

Hello @connormanning. I noticed something: Greyhound does not release RAM after use. For example, if I use Greyhound to serve a 300M-point model and it uses 4 GB of RAM, that RAM remains occupied indefinitely, even after closing the app. The usage then stacks up across multiple models: if I serve a different model, another 4 GB of RAM gets occupied, and so on, limiting the number of models I am able to serve. What can I do to make it release the RAM? Thanks.
