
memory limit not working #15

Closed

cschwem2er opened this issue Jan 10, 2017 · 21 comments
@cschwem2er

cschwem2er commented Jan 10, 2017

Hi,

I'm running Linux Mint 18 with systemd version 229, and the check_kernel script listed limiting options as available for both memory and CPU. However, I cannot get the memory limit to work.
First I created a profile and inserted the following lines:

c.JupyterHub.spawner_class = 'systemdspawner.SystemdSpawner'    

c.SystemdSpawner.mem_limit = '1G'

After that I launched jupyterhub like this:

sudo -s
systemd-run jupyterhub
jupyterhub

Then I logged in with a user and tried to open a 3GB file which worked without any problems. Am I launching the hub incorrectly?
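
One quick check, from inside a notebook cell, is to read the memory cgroup limit of the kernel process to see whether the 1G limit actually landed on the spawned unit. This is only a sketch: it assumes cgroups v1 (what systemd 229 uses for memory accounting) and the default cgroup layout, so paths may need adjusting.

# Sketch, assuming cgroups v1: find the memory cgroup this kernel process
# belongs to, e.g. "/system.slice/jupyter-cs-singleuser.service".
with open('/proc/self/cgroup') as f:
    mem_path = next(line.strip().split(':', 2)[2]
                    for line in f if ':memory:' in line)

with open('/sys/fs/cgroup/memory' + mem_path + '/memory.limit_in_bytes') as f:
    # With mem_limit = '1G' this should print roughly 1073741824; a huge
    # "unlimited" value here means the limit never reached the unit.
    print(int(f.read()))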

@yuvipanda
Collaborator

yuvipanda commented Jan 10, 2017 via email

@cschwem2er
Author

cschwem2er commented Jan 10, 2017

Hi, sure.
This is my entire config file:

# Configuration file for jupyterhub.

#------------------------------------------------------------------------------
# Application(SingletonConfigurable) configuration
#------------------------------------------------------------------------------

## This is an application.

## The date format used by logging formatters for %(asctime)s
#c.Application.log_datefmt = '%Y-%m-%d %H:%M:%S'

## The Logging format template
#c.Application.log_format = '[%(name)s]%(highlevel)s %(message)s'

## Set the log level by value or name.
#c.Application.log_level = 30

#------------------------------------------------------------------------------
# JupyterHub(Application) configuration
#------------------------------------------------------------------------------

## An Application for starting a Multi-User Jupyter Notebook server.

## Grant admin users permission to access single-user servers.
#  
#  Users should be properly informed if this is enabled.
#c.JupyterHub.admin_access = False

## DEPRECATED, use Authenticator.admin_users instead.
#c.JupyterHub.admin_users = set()

## Answer yes to any questions (e.g. confirm overwrite)
#c.JupyterHub.answer_yes = False

## PENDING DEPRECATION: consider using service_tokens
#  
#  Dict of token:username to be loaded into the database.
#  
#  Allows ahead-of-time generation of API tokens for use by externally managed
#  services, which authenticate as JupyterHub users.
#  
#  Consider using service_tokens for general services that talk to the JupyterHub
#  API.
#c.JupyterHub.api_tokens = {}

## Class for authenticating users.
#  
#  This should be a class with the following form:
#  
#  - constructor takes one kwarg: `config`, the IPython config object.
#  
#  - is a tornado.gen.coroutine
#  - returns username on success, None on failure
#  - takes two arguments: (handler, data),
#    where `handler` is the calling web.RequestHandler,
#    and `data` is the POST form data from the login page.
#c.JupyterHub.authenticator_class = 'jupyterhub.auth.PAMAuthenticator'

## The base URL of the entire application
#c.JupyterHub.base_url = '/'

## Whether to shutdown the proxy when the Hub shuts down.
#  
#  Disable if you want to be able to teardown the Hub while leaving the proxy
#  running.
#  
#  Only valid if the proxy was starting by the Hub process.
#  
#  If both this and cleanup_servers are False, sending SIGINT to the Hub will
#  only shutdown the Hub, leaving everything else running.
#  
#  The Hub should be able to resume from database state.
#c.JupyterHub.cleanup_proxy = True

## Whether to shutdown single-user servers when the Hub shuts down.
#  
#  Disable if you want to be able to teardown the Hub while leaving the single-
#  user servers running.
#  
#  If both this and cleanup_proxy are False, sending SIGINT to the Hub will only
#  shutdown the Hub, leaving everything else running.
#  
#  The Hub should be able to resume from database state.
#c.JupyterHub.cleanup_servers = True

## The config file to load
#c.JupyterHub.config_file = 'jupyterhub_config.py'

## DEPRECATED: does nothing
#c.JupyterHub.confirm_no_ssl = False

## Number of days for a login cookie to be valid. Default is two weeks.
#c.JupyterHub.cookie_max_age_days = 14

## The cookie secret to use to encrypt cookies.
#  
#  Loaded from the JPY_COOKIE_SECRET env variable by default.
#c.JupyterHub.cookie_secret = b''

## File in which to store the cookie secret.
#c.JupyterHub.cookie_secret_file = 'jupyterhub_cookie_secret'

## The location of jupyterhub data files (e.g. /usr/local/share/jupyter/hub)
#c.JupyterHub.data_files_path = '/home/cs/anaconda3/share/jupyter/hub'

## Include any kwargs to pass to the database connection. See
#  sqlalchemy.create_engine for details.
#c.JupyterHub.db_kwargs = {}

## url for the database. e.g. `sqlite:///jupyterhub.sqlite`
#c.JupyterHub.db_url = 'sqlite:///jupyterhub.sqlite'

## log all database transactions. This has A LOT of output
#c.JupyterHub.debug_db = False

## show debug output in configurable-http-proxy
#c.JupyterHub.debug_proxy = False

## Send JupyterHub's logs to this file.
#  
#  This will *only* include the logs of the Hub itself, not the logs of the proxy
#  or any single-user servers.
#c.JupyterHub.extra_log_file = ''

## Extra log handlers to set on JupyterHub logger
#c.JupyterHub.extra_log_handlers = []

## Generate default config file
#c.JupyterHub.generate_config = False

## The ip for this process
#c.JupyterHub.hub_ip = '127.0.0.1'

## The port for this process
#c.JupyterHub.hub_port = 8081

## The public facing ip of the whole application (the proxy)
#c.JupyterHub.ip = ''

## Supply extra arguments that will be passed to Jinja environment.
#c.JupyterHub.jinja_environment_options = {}

## Interval (in seconds) at which to update last-activity timestamps.
#c.JupyterHub.last_activity_interval = 300

## Dict of 'group': ['usernames'] to load at startup.
#  
#  This strictly *adds* groups and users to groups.
#  
#  Loading one set of groups, then starting JupyterHub again with a different set
#  will not remove users or groups from previous launches. That must be done
#  through the API.
#c.JupyterHub.load_groups = {}

## Specify path to a logo image to override the Jupyter logo in the banner.
#c.JupyterHub.logo_file = ''

## File to write PID Useful for daemonizing jupyterhub.
#c.JupyterHub.pid_file = ''

## The public facing port of the proxy
#c.JupyterHub.port = 8000

## The ip for the proxy API handlers
#c.JupyterHub.proxy_api_ip = '127.0.0.1'

## The port for the proxy API handlers
#c.JupyterHub.proxy_api_port = 0

## The Proxy Auth token.
#  
#  Loaded from the CONFIGPROXY_AUTH_TOKEN env variable by default.
#c.JupyterHub.proxy_auth_token = ''

## Interval (in seconds) at which to check if the proxy is running.
#c.JupyterHub.proxy_check_interval = 30

## The command to start the http proxy.
#  
#  Only override if configurable-http-proxy is not on your PATH
#c.JupyterHub.proxy_cmd = ['configurable-http-proxy']

## Purge and reset the database.
#c.JupyterHub.reset_db = False

## Dict of token:servicename to be loaded into the database.
#  
#  Allows ahead-of-time generation of API tokens for use by externally managed
#  services.
#c.JupyterHub.service_tokens = {}

## List of service specification dictionaries.
#  
#  A service
#  
#  For instance::
#  
#      services = [
#          {
#              'name': 'cull_idle',
#              'command': ['/path/to/cull_idle_servers.py'],
#          },
#          {
#              'name': 'formgrader',
#              'url': 'http://127.0.0.1:1234',
#              'token': 'super-secret',
#              'environment': 
#          }
#      ]
#c.JupyterHub.services = []

## The class to use for spawning single-user servers.
#  
#  Should be a subclass of Spawner.
#c.JupyterHub.spawner_class = 'jupyterhub.spawner.LocalProcessSpawner'
c.JupyterHub.spawner_class = 'systemdspawner.SystemdSpawner'
c.SystemdSpawner.mem_limit = '1G'

## Path to SSL certificate file for the public facing interface of the proxy
#  
#  Use with ssl_key
#c.JupyterHub.ssl_cert = ''

## Path to SSL key file for the public facing interface of the proxy
#  
#  Use with ssl_cert
#c.JupyterHub.ssl_key = ''

## Host to send statsd metrics to
#c.JupyterHub.statsd_host = ''

## Port on which to send statsd metrics about the hub
#c.JupyterHub.statsd_port = 8125

## Prefix to use for all metrics sent by jupyterhub to statsd
#c.JupyterHub.statsd_prefix = 'jupyterhub'

## Run single-user servers on subdomains of this host.
#  
#  This should be the full https://hub.domain.tld[:port]
#  
#  Provides additional cross-site protections for javascript served by single-
#  user servers.
#  
#  Requires <username>.hub.domain.tld to resolve to the same host as
#  hub.domain.tld.
#  
#  In general, this is most easily achieved with wildcard DNS.
#  
#  When using SSL (i.e. always) this also requires a wildcard SSL certificate.
#c.JupyterHub.subdomain_host = ''

## Paths to search for jinja templates.
#c.JupyterHub.template_paths = []

## Extra settings overrides to pass to the tornado application.
#c.JupyterHub.tornado_settings = {}

#------------------------------------------------------------------------------
# Spawner(LoggingConfigurable) configuration
#------------------------------------------------------------------------------

## Base class for spawning single-user notebook servers.
#  
#  Subclass this, and override the following methods:
#  
#  - load_state - get_state - start - stop - poll
#  
#  As JupyterHub supports multiple users, an instance of the Spawner subclass is
#  created for each user. If there are 20 JupyterHub users, there will be 20
#  instances of the subclass.

## Extra arguments to be passed to the single-user server.
#  
#  Some spawners allow shell-style expansion here, allowing you to use
#  environment variables here. Most, including the default, do not. Consult the
#  documentation for your spawner to verify!
#c.Spawner.args = []

## The command used for starting the single-user server.
#  
#  Provide either a string or a list containing the path to the startup script
#  command. Extra arguments, other than this path, should be provided via `args`.
#  
#  This is usually set if you want to start the single-user server in a different
#  python environment (with virtualenv/conda) than JupyterHub itself.
#  
#  Some spawners allow shell-style expansion here, allowing you to use
#  environment variables. Most, including the default, do not. Consult the
#  documentation for your spawner to verify!
#c.Spawner.cmd = ['jupyterhub-singleuser']

## Minimum number of cpu-cores a single-user notebook server is guaranteed to
#  have available.
#  
#  If this value is set to 0.5, allows use of 50% of one CPU. If this value is
#  set to 2, allows use of up to 2 CPUs.
#  
#  Note that this needs to be supported by your spawner for it to work.
#c.Spawner.cpu_guarantee = None

## Maximum number of cpu-cores a single-user notebook server is allowed to use.
#  
#  If this value is set to 0.5, allows use of 50% of one CPU. If this value is
#  set to 2, allows use of up to 2 CPUs.
#  
#  The single-user notebook server will never be scheduled by the kernel to use
#  more cpu-cores than this. There is no guarantee that it can access this many
#  cpu-cores.
#  
#  This needs to be supported by your spawner for it to work.
#c.Spawner.cpu_limit = None

## Enable debug-logging of the single-user server
#c.Spawner.debug = False

## The URL the single-user server should start in.
#  
#  `{username}` will be expanded to the user's username
#  
#  Example uses:
#  - You can set `notebook_dir` to `/` and `default_url` to `/home/{username}` to allow people to
#    navigate the whole filesystem from their notebook, but still start in their home directory.
#  - You can set this to `/lab` to have JupyterLab start by default, rather than Jupyter Notebook.
#c.Spawner.default_url = ''

## Disable per-user configuration of single-user servers.
#  
#  When starting the user's single-user server, any config file found in the
#  user's $HOME directory will be ignored.
#  
#  Note: a user could circumvent this if the user modifies their Python
#  environment, such as when they have their own conda environments / virtualenvs
#  / containers.
#c.Spawner.disable_user_config = False

## Whitelist of environment variables for the single-user server to inherit from
#  the JupyterHub process.
#  
#  This whitelist is used to ensure that sensitive information in the JupyterHub
#  process's environment (such as `CONFIGPROXY_AUTH_TOKEN`) is not passed to the
#  single-user server's process.
#c.Spawner.env_keep = ['PATH', 'PYTHONPATH', 'CONDA_ROOT', 'CONDA_DEFAULT_ENV', 'VIRTUAL_ENV', 'LANG', 'LC_ALL']

## Extra environment variables to set for the single-user server's process.
#  
#  Environment variables that end up in the single-user server's process come from 3 sources:
#    - This `environment` configurable
#    - The JupyterHub process' environment variables that are whitelisted in `env_keep`
#    - Variables to establish contact between the single-user notebook and the hub (such as JPY_API_TOKEN)
#  
#  The `enviornment` configurable should be set by JupyterHub administrators to
#  add installation specific environment variables. It is a dict where the key is
#  the name of the environment variable, and the value can be a string or a
#  callable. If it is a callable, it will be called with one parameter (the
#  spawner instance), and should return a string fairly quickly (no blocking
#  operations please!).
#  
#  Note that the spawner class' interface is not guaranteed to be exactly same
#  across upgrades, so if you are using the callable take care to verify it
#  continues to work after upgrades!
#c.Spawner.environment = {}

## Timeout (in seconds) before giving up on a spawned HTTP server
#  
#  Once a server has successfully been spawned, this is the amount of time we
#  wait before assuming that the server is unable to accept connections.
#c.Spawner.http_timeout = 30

## The IP address (or hostname) the single-user server should listen on.
#  
#  The JupyterHub proxy implementation should be able to send packets to this
#  interface.
#c.Spawner.ip = '127.0.0.1'

## Minimum number of bytes a single-user notebook server is guaranteed to have
#  available.
#  
#  Allows the following suffixes:
#    - K -> Kilobytes
#    - M -> Megabytes
#    - G -> Gigabytes
#    - T -> Terabytes
#  
#  This needs to be supported by your spawner for it to work.
#c.Spawner.mem_guarantee = None

## Maximum number of bytes a single-user notebook server is allowed to use.
#  
#  Allows the following suffixes:
#    - K -> Kilobytes
#    - M -> Megabytes
#    - G -> Gigabytes
#    - T -> Terabytes
#  
#  If the single user server tries to allocate more memory than this, it will
#  fail. There is no guarantee that the single-user notebook server will be able
#  to allocate this much memory - only that it can not allocate more than this.
#  
#  This needs to be supported by your spawner for it to work.
#c.Spawner.mem_limit = None

## Path to the notebook directory for the single-user server.
#  
#  The user sees a file listing of this directory when the notebook interface is
#  started. The current interface does not easily allow browsing beyond the
#  subdirectories in this directory's tree.
#  
#  `~` will be expanded to the home directory of the user, and {username} will be
#  replaced with the name of the user.
#  
#  Note that this does *not* prevent users from accessing files outside of this
#  path! They can do so with many other means.
#c.Spawner.notebook_dir = ''

## An HTML form for options a user can specify on launching their server.
#  
#  The surrounding `<form>` element and the submit button are already provided.
#  
#  For example:
#  
#      Set your key:
#      <input name="key" val="default_key"></input>
#      <br>
#      Choose a letter:
#      <select name="letter" multiple="true">
#        <option value="A">The letter A</option>
#        <option value="B">The letter B</option>
#      </select>
#  
#  The data from this form submission will be passed on to your spawner in
#  `self.user_options`
#c.Spawner.options_form = ''

## Interval (in seconds) on which to poll the spawner for single-user server's
#  status.
#  
#  At every poll interval, each spawner's `.poll` method is called, which checks
#  if the single-user server is still running. If it isn't running, then
#  JupyterHub modifies its own state accordingly and removes appropriate routes
#  from the configurable proxy.
#c.Spawner.poll_interval = 30

## Timeout (in seconds) before giving up on starting of single-user server.
#  
#  This is the timeout for start to return, not the timeout for the server to
#  respond. Callers of spawner.start will assume that startup has failed if it
#  takes longer than this. start should return when the server process is started
#  and its location is known.
#c.Spawner.start_timeout = 60

#------------------------------------------------------------------------------
# LocalProcessSpawner(Spawner) configuration
#------------------------------------------------------------------------------

## A Spawner that uses `subprocess.Popen` to start single-user servers as local
#  processes.
#  
#  Requires local UNIX users matching the authenticated users to exist. Does not
#  work on Windows.
#  
#  This is the default spawner for JupyterHub.

## Seconds to wait for single-user server process to halt after SIGINT.
#  
#  If the process has not exited cleanly after this many seconds, a SIGTERM is
#  sent.
#c.LocalProcessSpawner.INTERRUPT_TIMEOUT = 10

## Seconds to wait for process to halt after SIGKILL before giving up.
#  
#  If the process does not exit cleanly after this many seconds of SIGKILL, it
#  becomes a zombie process. The hub process will log a warning and then give up.
#c.LocalProcessSpawner.KILL_TIMEOUT = 5

## Seconds to wait for single-user server process to halt after SIGTERM.
#  
#  If the process does not exit cleanly after this many seconds of SIGTERM, a
#  SIGKILL is sent.
#c.LocalProcessSpawner.TERM_TIMEOUT = 5

#------------------------------------------------------------------------------
# Authenticator(LoggingConfigurable) configuration
#------------------------------------------------------------------------------

## Base class for implementing an authentication provider for JupyterHub

## Set of users that will have admin rights on this JupyterHub.
#  
#  Admin users have extra privilages:
#   - Use the admin panel to see list of users logged in
#   - Add / remove users in some authenticators
#   - Restart / halt the hub
#   - Start / stop users' single-user servers
#   - Can access each individual users' single-user server (if configured)
#  
#  Admin access should be treated the same way root access is.
#  
#  Defaults to an empty set, in which case no user has admin access.
#c.Authenticator.admin_users = set()

## Dictionary mapping authenticator usernames to JupyterHub users.
#  
#  Primarily used to normalize OAuth user names to local users.
#c.Authenticator.username_map = {}

## Regular expression pattern that all valid usernames must match.
#  
#  If a username does not match the pattern specified here, authentication will
#  not be attempted.
#  
#  If not set, allow any username.
#c.Authenticator.username_pattern = ''

## Whitelist of usernames that are allowed to log in.
#  
#  Use this with supported authenticators to restrict which users can log in.
#  This is an additional whitelist that further restricts users, beyond whatever
#  restrictions the authenticator has in place.
#  
#  If empty, does not perform any additional restriction.
#c.Authenticator.whitelist = set()

#------------------------------------------------------------------------------
# LocalAuthenticator(Authenticator) configuration
#------------------------------------------------------------------------------

## Base class for Authenticators that work with local Linux/UNIX users
#  
#  Checks for local users, and can attempt to create them if they exist.

## The command to use for creating users as a list of strings
#  
#  For each element in the list, the string USERNAME will be replaced with the
#  user's username. The username will also be appended as the final argument.
#  
#  For Linux, the default value is:
#  
#      ['adduser', '-q', '--gecos', '""', '--disabled-password']
#  
#  To specify a custom home directory, set this to:
#  
#      ['adduser', '-q', '--gecos', '""', '--home', '/customhome/USERNAME', '--
#  disabled-password']
#  
#  This will run the command:
#  
#      adduser -q --gecos "" --home /customhome/river --disabled-password river
#  
#  when the user 'river' is created.
#c.LocalAuthenticator.add_user_cmd = []

## If set to True, will attempt to create local system users if they do not exist
#  already.
#  
#  Supports Linux and BSD variants only.
#c.LocalAuthenticator.create_system_users = False

## Whitelist all users from this UNIX group.
#  
#  This makes the username whitelist ineffective.
#c.LocalAuthenticator.group_whitelist = set()

#------------------------------------------------------------------------------
# PAMAuthenticator(LocalAuthenticator) configuration
#------------------------------------------------------------------------------

## Authenticate local UNIX users with PAM

## The text encoding to use when communicating with PAM
#c.PAMAuthenticator.encoding = 'utf8'

## Whether to open a new PAM session when spawners are started.
#  
#  This may trigger things like mounting shared filsystems, loading credentials,
#  etc. depending on system configuration, but it does not always work.
#  
#  If any errors are encountered when opening/closing PAM sessions, this is
#  automatically set to False.
#c.PAMAuthenticator.open_sessions = True

## The name of the PAM service to use for authentication
#c.PAMAuthenticator.service = 'login'

As for the systemd-cgtop output, I noticed that the memory consumption for the singleuser service never went above 1023.9 MB, so this looks fine. But the >3 GB file still gets loaded into Python without any problems and does indeed use that much memory:

import sys
sys.getsizeof(data)
3625158631

I am confused =D

Edit: I think I finally understand what's going on. The object was stored entirely in swap, which explains why loading it took such a long time! Can this be prevented somehow?
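
One way to see where the object actually ended up is to compare this kernel process's resident memory with its swapped-out memory; VmRSS and VmSwap are standard fields in /proc/<pid>/status on Linux. A minimal diagnostic sketch:

# Report how much of this process's memory is resident (RSS) vs. swapped out.
def mem_status():
    with open('/proc/self/status') as f:
        fields = dict(line.split(':', 1) for line in f)
    return {key: fields[key].strip() for key in ('VmRSS', 'VmSwap') if key in fields}

print(mem_status())  # a large VmSwap would confirm the object was pushed to swap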

@yuvipanda
Collaborator

yuvipanda commented Jan 10, 2017 via email

@cschwem2er
Author

It's nothing more than:

%cd "/home/cs/Documents/tweets_berlin"

with open('tweets_berlin_2016-12-25_23-28.txt', 'rt') as f:
    data = f.read()
import sys
sys.getsizeof(data)

But I guess swap does not care about the code, right?

@yuvipanda
Collaborator

yuvipanda commented Jan 10, 2017 via email

@cschwem2er
Author

Sorry, just to be sure: you did read my edit above, right? The object was loaded into swap space instead of memory.

@yuvipanda
Collaborator

Ah, nope, I missed it (it didn't come in via email).

Can you explain what you mean by 'loaded into swap space'? My understanding of how this should work is that swap is not a per-process thing but system-wide, and things in swap would also count as memory. I think the output of systemctl show will provide more info on what's happening.

@cschwem2er
Author

Unfortunately my Linux knowledge is very limited, but I also understand swap to be system-wide. After loading the file from above, swap space was filled with 3.6 GB, exactly the size of the object.
This is the output of systemctl show:

systemctl show
Version=229
Features=+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSET
Architecture=x86-64
FirmwareTimestampMonotonic=0
LoaderTimestampMonotonic=0
KernelTimestamp=Tue 2017-01-10 09:50:27 CET
KernelTimestampMonotonic=0
InitRDTimestampMonotonic=0
UserspaceTimestamp=Tue 2017-01-10 09:50:30 CET
UserspaceTimestampMonotonic=2568634
FinishTimestamp=Tue 2017-01-10 09:50:40 CET
FinishTimestampMonotonic=12360076
SecurityStartTimestamp=Tue 2017-01-10 09:50:30 CET
SecurityStartTimestampMonotonic=2570599
SecurityFinishTimestamp=Tue 2017-01-10 09:50:30 CET
SecurityFinishTimestampMonotonic=2570931
GeneratorsStartTimestamp=Tue 2017-01-10 09:50:30 CET
GeneratorsStartTimestampMonotonic=2601220
GeneratorsFinishTimestamp=Tue 2017-01-10 09:50:30 CET
GeneratorsFinishTimestampMonotonic=2623001
UnitsLoadStartTimestamp=Tue 2017-01-10 09:50:30 CET
UnitsLoadStartTimestampMonotonic=2623480
UnitsLoadFinishTimestamp=Tue 2017-01-10 09:50:30 CET

@yuvipanda
Collaborator

yuvipanda commented Jan 10, 2017 via email

@cschwem2er
Author

Oh I see, will do :) You want me to do this after I've loaded the huge object, right?

@yuvipanda
Collaborator

yuvipanda commented Jan 11, 2017 via email

@cschwem2er
Author

systemctl show jupyter-cs-singleuser
Type=simple
Restart=no
NotifyAccess=none
RestartUSec=100ms
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestamp=Wed 2017-01-11 15:49:07 CET
WatchdogTimestampMonotonic=107908212176
FailureAction=none
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=14157
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
ExecMainStartTimestamp=Wed 2017-01-11 15:49:07 CET
ExecMainStartTimestampMonotonic=107908212143
ExecMainExitTimestampMonotonic=0
ExecMainPID=14157
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/bin/bash ; argv[]=/bin/bash -c cd /home/cs && exec jupyterhub-singleuser '--user="cs"' '--co
Slice=system.slice
ControlGroup=/system.slice/jupyter-cs-singleuser.service
MemoryCurrent=1572552704
CPUUsageNSec=18446744073709551615
TasksCurrent=18446744073709551615
Delegate=no
CPUAccounting=no
CPUShares=18446744073709551615
StartupCPUShares=18446744073709551615
CPUQuotaPerSecUSec=infinity
BlockIOAccounting=no
BlockIOWeight=18446744073709551615
StartupBlockIOWeight=18446744073709551615
MemoryAccounting=yes
MemoryLimit=1572864000
DevicePolicy=auto
TasksAccounting=no
TasksMax=18446744073709551615
Environment=PATH=/home/cs/anaconda3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
UMask=0022
LimitCPU=18446744073709551615
LimitCPUSoft=18446744073709551615
LimitFSIZE=18446744073709551615
LimitFSIZESoft=18446744073709551615
LimitDATA=18446744073709551615
LimitDATASoft=18446744073709551615
LimitSTACK=18446744073709551615
LimitSTACKSoft=8388608
LimitCORE=18446744073709551615
LimitCORESoft=0
LimitRSS=18446744073709551615
LimitRSSSoft=18446744073709551615
LimitNOFILE=4096
LimitNOFILESoft=1024
LimitAS=18446744073709551615
LimitASSoft=18446744073709551615
LimitNPROC=128087
LimitNPROCSoft=128087
LimitMEMLOCK=65536
LimitMEMLOCKSoft=65536
LimitLOCKS=18446744073709551615
LimitLOCKSSoft=18446744073709551615
LimitSIGPENDING=128087
LimitSIGPENDINGSoft=128087
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=18446744073709551615
LimitRTTIMESoft=18446744073709551615
OOMScoreAdjust=0
Nice=0
IOScheduling=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
SecureBits=0
CapabilityBoundingSet=18446744073709551615
AmbientCapabilities=0
User=1000
Group=1000
MountFlags=0
PrivateTmp=no
PrivateNetwork=no
PrivateDevices=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
RuntimeDirectoryMode=0755
KillMode=control-group
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=jupyter-cs-singleuser.service
Names=jupyter-cs-singleuser.service
Requires=sysinit.target system.slice
Conflicts=shutdown.target
Before=shutdown.target
After=systemd-journald.socket sysinit.target basic.target system.slice
Description=/bin/bash -c cd /home/cs && exec jupyterhub-singleuser '--user="cs"' '--cookie-name="jupyter-hub-t
LoadState=loaded
ActiveState=active
SubState=running
DropInPaths=/run/systemd/system/jupyter-cs-singleuser.service.d/50-Description.conf /run/systemd/system/jupyte
StateChangeTimestamp=Wed 2017-01-11 15:49:07 CET
StateChangeTimestampMonotonic=107908212177
InactiveExitTimestamp=Wed 2017-01-11 15:49:07 CET
InactiveExitTimestampMonotonic=107908212177
ActiveEnterTimestamp=Wed 2017-01-11 15:49:07 CET
ActiveEnterTimestampMonotonic=107908212177
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Wed 2017-01-11 15:49:07 CET
ConditionTimestampMonotonic=107908195172
AssertTimestamp=Wed 2017-01-11 15:49:07 CET
AssertTimestampMonotonic=107908195172
Transient=yes
StartLimitInterval=10000000
StartLimitBurst=5
StartLimitAction=none

Here you go :)

@yuvipanda
Collaborator

OK, so the Linux kernel thinks you've used 1572552704 bytes of a limit of 1572864000 bytes, so that's just below the limit. I am not sure what exactly Python is doing here, however.
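
Converting those values to MiB makes the comparison easier to read; this is just arithmetic on the numbers from the systemctl show output above:

print(1572864000 / 2**20)  # MemoryLimit   -> 1500.0 MiB
print(1572552704 / 2**20)  # MemoryCurrent -> ~1499.7 MiB, i.e. right at the cap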

@nabriis

nabriis commented Mar 14, 2017

Hi,

I just wanted to let you know I ran into the same issue here.

When allocating above the set mem_limit (say 2G), the swap space on the server takes over and stores the remainder of the allocated data.

So the user is restricted to 2G of RAM, but swap takes over, and hence no error is shown in the Jupyter notebook when allocating something that uses more than 2G of space.

Is there a way to stop this from happening and throw an error instead?

@nabriis

nabriis commented Jul 11, 2017

I am still stuck with this issue :(. I have given up for now.

Ubuntu 16.04 with JupyterHub 0.7.2.

The actual memory is limited, but the swap space gets filled up instead of giving an error when allocating in Python 3.

I think it has to do with the systemd version in Ubuntu 16.04.

@ixjlyons
Contributor

ixjlyons commented Sep 7, 2017

Seeing this issue as well. This blog post seems to indicate that this behavior isn't really a bug but is how cgroup memory limits work (i.e. when hitting the limit, swap can still be used): https://jvns.ca/blog/2017/02/17/mystery-swap/

I see there is also a MemorySwapMax option, though it seems to be somewhat recent: https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#MemorySwapMax=bytes
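
The cgroup v1 mechanics behind that blog post: MemoryLimit only sets memory.limit_in_bytes, while the combined memory+swap limit, memory.memsw.limit_in_bytes, is left at its default, so the kernel can still push pages to swap once RAM is capped. A rough way to confirm this on the host, assuming swap accounting is enabled (swapaccount=1, otherwise the memsw file does not exist) and using the unit name from this thread as a placeholder:

import os

# Hypothetical path; adjust to the actual jupyter-<user>-singleuser unit.
cg = '/sys/fs/cgroup/memory/system.slice/jupyter-cs-singleuser.service'

for name in ('memory.limit_in_bytes', 'memory.memsw.limit_in_bytes'):
    path = os.path.join(cg, name)
    if os.path.exists(path):
        with open(path) as f:
            # memsw typically still reports the huge "unlimited" sentinel even
            # when memory.limit_in_bytes shows the configured 1G/2G value.
            print(name, f.read().strip())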

@ixjlyons
Contributor

Another reference: systemd/systemd#6074

So limiting swap usage via systemd doesn't seem possible at the moment unless you're using cgroups v2 only :/
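
For later readers: on hosts where MemorySwapMax is actually enforceable (a systemd new enough to support it and running with the unified cgroup v2 hierarchy), one possible workaround is to pass it through to the spawned units. Newer systemdspawner releases expose a unit_extra_properties dict for arbitrary unit properties; if your installed version predates that option, treat this as a hypothetical sketch rather than a supported setting.

c.JupyterHub.spawner_class = 'systemdspawner.SystemdSpawner'
c.SystemdSpawner.mem_limit = '1G'
# Forbid swap for the single-user unit so allocations beyond the memory
# limit fail instead of silently spilling into swap.
c.SystemdSpawner.unit_extra_properties = {'MemorySwapMax': '0'}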

@willingc
Contributor

@yuvipanda @minrk Was this issue resolved with the release of JupyterHub 0.8?

@yuvipanda
Collaborator

@willingc nope, unfortunately - I think this is a limitation in systemd itself...

@willingc
Contributor

Perhaps it would make sense to close this issue, mark it as reference, and mention the limitation in the README until systemd provides a mechanism to address this suitably.

@willingc
Contributor

Closing as a limitation of systemd. Labeling as Reference for future documentation.
