nsscache - Asynchronously synchronise local NSS databases with remote directory services


nsscache is a command-line tool and Python library that synchronises a local NSS cache with a remote directory service, such as LDAP.

As soon as you have more than one machine in your network, you want to share usernames between those systems. Linux administrators have traditionally relied on LDAP or NIS as a directory service, with /etc/nsswitch.conf, nss_ldap.so, and nscd managing their nameservice lookups.

Even small networks experience intermittent name lookup failures, such as a mail receiver sometimes returning "User not found" on a mailbox destination because of a slow socket over a congested network, or erratic cache behaviour by nscd. To combat this problem, we have separated the network from the NSS lookup codepath by using an asynchronous cron job and a glorified script, improving both the speed and the reliability of NSS lookups. We presented at linux.conf.au 2008 (PDF slides) on the problems in NSS and the requirements for a solution.

Here, we present to you this glorified script, which is just a little more extensible than:

ldapsearch | awk > /etc/passwd

Read the Google Code blog announcement for nsscache, or read more about the motivation behind this tool.

Here's a testimonial from Anchor Systems on their deployment of nsscache.

Pair nsscache with https://github.com/google/libnss-cache to integrate the local cache with your name service switch.
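In that pairing, nsscache writes the local map files and libnss-cache serves lookups from them, so the switch configuration just needs to consult the cache module. A sketch of the relevant /etc/nsswitch.conf lines, assuming libnss-cache registers itself under the module name "cache" (check your distribution's libnss-cache packaging to confirm):

```
# /etc/nsswitch.conf (fragment) - consult local files first,
# then the nsscache-maintained cache via libnss-cache.
passwd: files cache
group:  files cache
shadow: files cache
```

With this in place, lookups never touch the network directly; only the asynchronous nsscache cron job does.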


Mailing list: https://groups.google.com/forum/#!forum/nsscache-discuss

Issue history is at https://code.google.com/p/nsscache/issues/list


Contributions

Please format your code with https://github.com/google/yapf (installable as pip install yapf or the yapf3 package on Debian systems) before sending pull requests.

Testing

The Dockerfile sets up a container that runs the Python unit tests and the tests/slapd-regtest integration test. Run podman build . to get a reproducible test environment.

The Dockerfile mimics the test environment used by the GitHub Actions workflow .github/workflows/ci.yml.

Setup

gcs source

Install the Google Cloud Storage Python client: sudo pip install google-cloud-storage

For Compute Engine instances to use the gcs source, their attached service account must have the Storage Object Viewer role on the GCS bucket storing the passwd, group, and shadow objects, or on the objects themselves if using fine-grained access controls.
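The objects fetched by the gcs source hold map data in the standard colon-separated passwd(5)/group(5) layout. As an illustration of what one passwd entry contains (this parser is a sketch for clarity, not nsscache's actual implementation):

```python
def parse_passwd_line(line):
    """Parse one passwd(5)-formatted line into a dict.

    Illustrative only: nsscache has its own map/entry classes.
    """
    name, pw, uid, gid, gecos, home, shell = line.strip().split(":")
    return {
        "name": name,
        "uid": int(uid),
        "gid": int(gid),
        "gecos": gecos,
        "home": home,
        "shell": shell,
    }

# A hypothetical entry as it might appear in the "passwd" GCS object:
entry = parse_passwd_line("jane:x:1001:1001:Jane Doe:/home/jane:/bin/bash")
```

Each line in the fetched object becomes one such entry in the local cache file that libnss-cache later reads.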