This repository has been archived by the owner on Sep 3, 2022. It is now read-only.
This issue splits off the feature requests from #1280 regarding volume mappings for /tmp and /data.
For /tmp, we already have a volume mapping that maps that directory to /mnt/disks/datalab-pd/tmp in the host VM, but we never clean that out.
We should change our startup script to wipe out the contents of the host directory on startup. That would make the /tmp directory inside the container match the expected semantics of /tmp (that it is wiped out when the machine restarts).
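A minimal sketch of what that startup-script step could look like, assuming a POSIX shell environment on the host VM. The default path comes from the issue text; the optional-argument handling and everything else here is illustrative, not the actual Datalab startup script.

```shell
#!/bin/sh
# Clear the host directory that backs the container's /tmp, so the
# container sees the usual "wiped on reboot" semantics.
# Accepts an optional override path for testing; defaults to the
# host directory named in the issue.
TMP_DIR="${1:-/mnt/disks/datalab-pd/tmp}"

if [ -d "${TMP_DIR}" ]; then
  # Delete everything inside the directory (including dotfiles) but
  # keep the directory itself, since the container's volume mapping
  # points at it.
  find "${TMP_DIR}" -mindepth 1 -delete
fi
```

Using `find -mindepth 1 -delete` rather than `rm -rf "${TMP_DIR}"/*` avoids missing dotfiles and avoids removing the mount-point directory itself.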
The volume mapping for /data is a completely new thing. Currently, the only volume mapping we have for persisting user data requires placing it under the /contents/datalab directory (soon to change to just /content).
That allows users to save data such that it survives container restarts, but the /content directory is also automatically backed up to GCS.
That makes the existing mapping good for saving notebooks, but potentially too expensive for data that can be recreated if necessary.
The idea behind a /data directory is to fill the gap between /tmp and /content. It would hold data that the user doesn't want to be automatically discarded, but that also does not need to be persisted in their backups to GCS.
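The three mount points described above could be wired up along these lines. This is an illustrative sketch only: of the host paths, only /mnt/disks/datalab-pd/tmp is named in the issue; the other host paths and the image name are placeholders, not the actual Datalab launch command.

```shell
# /content: survives container restarts AND is backed up to GCS
#           (good for notebooks).
# /data:    survives container restarts but is excluded from GCS
#           backups (good for recreatable data).
# /tmp:     backed by a host directory that the startup script wipes
#           on each VM restart.
docker run \
  -v /mnt/disks/datalab-pd/content:/content \
  -v /mnt/disks/datalab-pd/data:/data \
  -v /mnt/disks/datalab-pd/tmp:/tmp \
  datalab-image  # placeholder image name
```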
@seankerman Sorry for not updating this issue, but we are now cleaning up the /tmp directory. That should happen every time the VM is restarted (or when you connect to it after it has been stopped).
Are you not seeing that behavior? If so, what version of Datalab do you have?
Note: if you disable the auto-stop feature and never manually stop the VM, then we will not clean up the /tmp directory (since you may still be using it).