Configuring and using Shared Memory

Usually, when an application allocates a block of memory, that memory is freed when the application terminates, and it is accessible to that single process only. Shared memory is different: it allows you to share data among a number of processes, and the shared memory we use is persistent. It stays in the system until it is explicitly removed.
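
Because the segments persist, you can list them at any time with the standard System V IPC tools and, if needed, remove one by hand (the segment ID is a placeholder):

$ ipcs -m          # list shared memory segments
$ ipcrm -m <shmid> # remove a segment by its ID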

By default, there is some restriction on the size of a shared memory segment. Depending on your distribution and its version, it may be as little as 64 kilobytes. This is of course not enough for serious applications.

The following gives a brief description of how to set the limits in a way that you (most probably) won't ever run into them in the future. Please read up on the actual settings for your production environment in the manpage of shmctl, or consult further documentation.
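
To see what your system currently allows, you can query the relevant kernel parameters directly on Linux (the reported values will differ from system to system):

$ sysctl kernel.shmmax kernel.shmall kernel.shmmni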

First, we are going to raise the system limits. Second, we are going to raise the user limits.

System Limits

Linux

Append the following lines to /etc/sysctl.conf:

kernel.shmall = 1152921504606846720
kernel.shmmax = 18446744073709551615

and then run sysctl -p with super-user privileges. Then check if settings were accepted:

$ ipcs -lm

------ Shared Memory Limits --------
max number of segments = 4096
max seg size (kbytes) = 18014398509481983
max total shared memory (kbytes) = 4611686018427386880
min seg size (bytes) = 1

OS X

On Mac OS X, add the following to /etc/sysctl.conf:

kern.sysv.shmmax=1073741824
kern.sysv.shmall=262144
kern.sysv.shmseg=256

Then reboot.
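
After the reboot, you can verify that the values were picked up (macOS sysctl syntax; the output should mirror the settings above):

$ sysctl kern.sysv.shmmax kern.sysv.shmall kern.sysv.shmseg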

User Limits

This is only half of the story. On Linux, only the super user is allowed to lock arbitrary amounts of shared memory into RAM. To fix this, we need to set the user limits properly. Let's have a look at what Ubuntu 12.10 sets by default:

$ ulimit -a|grep max
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited

So, as a user, we are only allowed to lock at most 64 KiB into RAM. This is obviously not enough. The settings can be changed by editing /etc/security/limits.conf. Add the following lines to the file to raise the user limit to 64 GiB. At the time of writing, this is enough to do planet-wide car routing.

<user>           hard    memlock         unlimited
<user>           soft    memlock         68719476736

Note that <user> is the user name under which the routing process is running, and you need to re-login to activate these changes. If the user does not have a login, you can use sudo -i -u <user> to simulate an initial login.
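
Once the user has logged in again, the raised limit should be reported by the shell (the value corresponds to the soft memlock setting above):

$ ulimit -l
68719476736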

Note that on Ubuntu 12.04 LTS it is also necessary to edit /etc/pam.d/su (and /etc/pam.d/common-session) and remove the comment from the following line in order to activate /etc/security/limits.conf:

session    required   pam_limits.so

Using Shared Memory

With all these changes done, you should now be able to lock all shared memory directly into RAM. Loading data into shared memory is as easy as

$ ./osrm-datastore /path/to/data.osrm

If there is insufficient available RAM (or not enough space configured), you will receive the following warning when loading data with osrm-datastore:

[warning] could not lock shared memory to RAM

In this case, data will be swapped to a cache on disk, and you will still be able to run queries. But note that caching comes at the price of disk latency.

You will also see an error message if you are lacking the CAP_IPC_LOCK capability needed for system-wide memory locking. In this case, granting the capability manually helps:

$ sudo setcap "cap_ipc_lock=ep" `which osrm-datastore`

$ sudo getcap `which osrm-datastore`
osrm-datastore = cap_ipc_lock+ep

Starting the routing process and pointing it to shared memory is also very, very easy:

$ ./osrm-routed --shared-memory
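
Once osrm-routed is running, a quick test query against its default HTTP port (5000) confirms that it is serving the data from shared memory; the coordinates below are only placeholders:

$ curl "http://localhost:5000/route/v1/driving/13.388860,52.517037;13.397634,52.529407"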

Since OSRM 5.17 we allow multiple datasets in memory at the same time:

$ ./osrm-datastore --dataset-name=mydata /path/to/data.osrm
$ ./osrm-routed --dataset-name=mydata --shared-memory

Locking mechanism and cleaning locks

osrm-datastore and osrm-routed use an IPC mechanism based on the Boost.Interprocess library (the objects involved are visible in the file system, as shown below):

  • boost::interprocess::named_mutex at /dev/shm/sem.osrm-region (Linux-specific) protects the current region data, which contains the current region ID and the incremental time stamp. The osrm-routed process acquires shared (read-only) access and osrm-datastore acquires exclusive (read-write) access to this data.
  • boost::interprocess::named_condition at /dev/shm/osrm-region-cv (Linux-specific) is a condition variable used by osrm-datastore to notify osrm-routed processes about changed data.
  • boost::interprocess::file_lock at /tmp/osrm-datastore.lock guarantees mutually exclusive execution of osrm-datastore processes.
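
While the processes are running, these objects can be inspected at the Linux paths listed above, for example:

$ ls -l /dev/shm/sem.osrm-region /dev/shm/osrm-region-cv /tmp/osrm-datastore.lock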

⚠️ Use the following options with care

Because of the asynchronous nature of signals, there is a small risk that osrm-routed is killed by the system while holding the sem.osrm-region mutex. This will cause the osrm-datastore process to block while trying to acquire exclusive access. To unlock the shared mutex without a full system reboot, osrm-datastore has the following options; a typical recovery sequence is shown after the list:

  • osrm-datastore --remove-locks — removes the sem.osrm-region mutex and the osrm-region-cv condition variable
  • osrm-datastore --spring-clean — marks the shared memory blocks as removed and removes the locks
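
For example, if osrm-datastore hangs while trying to acquire the mutex, you can kill it, clear the stale locks, and reload the data (the dataset path is just a placeholder):

$ ./osrm-datastore --remove-locks
$ ./osrm-datastore /path/to/data.osrm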