Beat singularity-tools up to shape #177908
I would probably have more spare time after June 21. To speed up the merging of #158486, I would prefer limiting the scope of this PR to:
Further improvements can be done in successive PRs. |
I agree with limiting the scope of the PR, I'll have time to help in a couple of weeks. |
Is there a way to compute |
Regarding the singularity-tools, a significant problem is the closure size being doubled unnecessarily by
Why don't we get a list of references of all the packages directly? Here's my implementation which merges the {
  writeMultipleReferencesToFile = paths: runCommand "runtime-deps-multiple" {
    referencesFiles = map writeReferencesToFile paths;
  } ''
    touch "$out"
    # Accumulate already-seen store paths to deduplicate across files.
    declare -a paths=()
    for refFile in $referencesFiles; do
      while read -r path; do
        isPathIncluded=0
        for pathIncluded in "''${paths[@]}"; do
          if [[ "$path" == "$pathIncluded" ]]; then
            isPathIncluded=1
            break
          fi
        done
        if (( ! isPathIncluded )); then
          echo "$path" >> "$out"
          paths+=( "$path" )
        fi
      done < "$refFile"
    done
  '';
} |
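The nested membership loop above is quadratic in the number of references; `sort -u` over the concatenated files gives the same deduplicated result (at the cost of sorted rather than first-seen order). A minimal standalone sketch — the file names and store paths are illustrative stand-ins, not from the original PR:

```shell
# Sketch: merge several reference lists into one deduplicated list,
# replacing the quadratic bash loop with a single sort -u pass.
set -eu

tmpdir=$(mktemp -d)

# Stand-ins for the per-package reference files ($referencesFiles).
printf '/nix/store/aaa-glibc\n/nix/store/bbb-bash\n' > "$tmpdir/refs1"
printf '/nix/store/bbb-bash\n/nix/store/ccc-coreutils\n' > "$tmpdir/refs2"

# Concatenate all reference files and deduplicate in one pass.
merged=$(sort -u "$tmpdir/refs1" "$tmpdir/refs2")
printf '%s\n' "$merged"
```

Inside a real builder, the `sort -u "...refs..." > "$out"` line would replace the whole `declare -a paths` block.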
I cannot say what's a good way to compute it, but the trivial baseline is a derivation that takes, in EDIT: i.e. we wouldn't know |
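On computing diskSize from the contents: one baseline, sketched outside Nix, is to sum `du` over the content paths and add headroom for filesystem metadata. The directories below and the 50% headroom are illustrative assumptions, not the project's actual policy:

```shell
# Sketch: estimate a disk image size from the total size of some
# content paths plus headroom, instead of a hard-coded constant.
set -eu

root=$(mktemp -d)
mkdir -p "$root/pkg-a" "$root/pkg-b"
# Random data as stand-ins so filesystem compression can't skew du.
head -c 1048576 /dev/urandom > "$root/pkg-a/blob"   # 1 MiB
head -c 524288  /dev/urandom > "$root/pkg-b/blob"   # 512 KiB

# Total size in KiB of all content paths, summed.
total_kib=$(du -sk "$root/pkg-a" "$root/pkg-b" | awk '{s+=$1} END {print s}')

# Add ~50% headroom, rounded up to whole MiB.
disk_mib=$(( (total_kib * 3 / 2 + 1023) / 1024 ))
echo "estimated diskSize: ${disk_mib} MiB"
```

In a derivation this would run over the closure reported by writeReferencesToFile rather than ad-hoc temp directories.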
Then we would no longer be able to use vmTools.runInLinuxVM (
  runCommand "${projectName}-run" {
    preVM = vmTools.createEmptyImage {
      size = diskSize;
      fullName = "${projectName}-run-disk";
    };
  } ''
    mkfs -t ext3 -b 4096 "/dev/${vmTools.hd}"
    mkdir -p disk
    mount "/dev/${vmTools.hd}" disk
  ''
) |
I see now. It appears that nixpkgs/pkgs/build-support/vm/default.nix (line 280 at 4b31cc7) |
Great! |
For consistency with
@SomeoneSerge any other HPC pain points? |
There's another change lining up that builds the image through a Singularity definition (Apptainer recipe) file to make the image more declarative and the build process explainable. It could be a drop-in replacement for the current Singularity-sandbox-based implementation. I also went on and made a |
I'm sorry for the long absence, my priorities had shifted somewhat.
@dmadisetti At a high level I have exactly one pain point, and that is an unsolved (underinvested) use case:
I think I might give this a shot again. The issues I had were:
Shouldn't be hard to alleviate |
This now suggests another point, that we maybe want a
Another possibility is the module system with support for
@ShamrockLee, |
Not yet, but I already have the change integrated into my HEP analysis workflow. It's time to also re-think the |
Hopefully not adding to the noise. My current workflow is making a Docker tar with Nix, unpacking it, and turning it into a Singularity image. A bit of a hack, but it works? packages.docker = pkgs.dockerTools.buildNixShellImage {
name = "pre-sif-container";
tag = "latest";
drv = devShells.default;
};
packages.singularity = pkgs.stdenv.mkDerivation {
name = "container.sif";
src = ./.;
installPhase = ''
mkdir unpack
tar xzvf ${packages.docker}/image.tgz -C unpack
# Singularity can't handle .gz
tar -C unpack/ -cvf layer.tar .
# TODO: Allow for module of user defined nightly, opposed to using src
singularity build $out Singularity.nightly
'';
};
Singularity.nightly containing:
Bootstrap: docker-archive
From: layer.tar
....
Big fan of using the Singularity file to define hooks etc. |
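The unpack/repack step above exists because the docker-archive bootstrap wants an uncompressed tar. It can be sketched standalone; the archive contents below are stand-ins for a real image.tgz:

```shell
# Sketch: convert a gzipped image tarball into the plain .tar that
# Singularity's "Bootstrap: docker-archive" expects.
set -eu

work=$(mktemp -d)

# Build a stand-in for ${packages.docker}/image.tgz.
mkdir -p "$work/rootfs/bin"
echo 'hello' > "$work/rootfs/bin/tool"
tar -C "$work/rootfs" -czf "$work/image.tgz" .

# Unpack the gzipped archive, then repack without compression.
mkdir "$work/unpack"
tar -xzf "$work/image.tgz" -C "$work/unpack"
tar -C "$work/unpack" -cf "$work/layer.tar" .

# layer.tar now suits "From: layer.tar" in the definition file.
tar -tf "$work/layer.tar"
```

The real workflow then hands layer.tar to `singularity build` as in the installPhase above.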
By the way, I was meaning to ask, why do we have to
Oh, I'll just throw some bait in. Have you noticed https://discourse.nixos.org/t/working-group-member-search-module-system-for-packages/26574/8 and https://github.com/DavHau/drv-parts in particular?
I guess your post further proves there's a use case :) |
It was not until last year that the unprivileged image-building workflow started to be implemented in the Apptainer project. The program used to assert
We are close to unprivileged image generation with Apptainer. The remaining obstacle is its use of
Sylabs's Singularity fork seems to have made some progress in unprivileged image builds, but it still expects a bunch of top-level directories |
I see. So, in principle, we could have run everything except |
It's true when it comes to the definition-based build. It won't help much, since it should be trivial in terms of resources to generate the definition file from the definition attrset. As for the current, Apptainer-sandbox-based |
I was rather wondering if we could prepare the file-tree outside qemu and somehow pack the whole batch into an ext3/squashfs image without the |
I also prefer an approach that doesn't involve creating and running virtual machines. singularity/apptainer can run filesystems in squashfs, and I use this script to create containers:
|
FYI: With apptainer/apptainer#1284, Apptainer images can be built as a derivation without a VM. The code already works (tested with singularity-tools.buildImageFromDef from #224636 specifying The upstream maintainer expects something more general (such as |
It's a bit hacky but I think this achieves your goals:
|
I managed to get a CUDA-capable container built by adjusting
Running it with env vars isn't solved yet. |
apptainer has merged a PR that allows using apptainer to build containers in the Nix sandbox: With that change, it's possible to build containers with
default.nix:
make-apptainer.nix
This copies the closure of $contents to $out/r, links all bin/* to /bin/, creates dummy apptainer.conf and resolv.conf files, and finally runs apptainer build. |
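The steps just described can be sketched as a plain shell script, with the final `apptainer build` call omitted (apptainer itself isn't assumed available here). All paths are illustrative stand-ins, not the actual make-apptainer.nix code:

```shell
# Sketch of the rootfs assembly: copy the contents closure into $out/r,
# link every */bin/* entry into a top-level /bin, and create the dummy
# config files apptainer expects, stopping short of `apptainer build`.
set -eu

out=$(mktemp -d)
contents=$(mktemp -d)          # stand-in for the $contents closure
mkdir -p "$contents/pkg/bin"
printf '#!/bin/sh\necho hi\n' > "$contents/pkg/bin/hi"
chmod +x "$contents/pkg/bin/hi"

# 1. Copy the closure into the image root ($out/r in the description).
mkdir -p "$out/r"
cp -r "$contents/." "$out/r/"

# 2. Link all bin/* entries into /bin inside the image root.
mkdir -p "$out/r/bin"
for f in "$out/r"/*/bin/*; do
  ln -sf "${f#"$out/r"}" "$out/r/bin/$(basename "$f")"
done

# 3. Dummy apptainer.conf and resolv.conf so the build can proceed.
mkdir -p "$out/r/etc"
: > "$out/r/etc/resolv.conf"
: > "$out/apptainer.conf"

ls -l "$out/r/bin"
```

The real derivation would end with an `apptainer build` invocation pointed at this tree.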
By the way, maybe we should consider dropping support for choosing between apptainer and singularity for building images. For one thing, I suspect we'll have to introduce a separate attribute (like |
If the images built by one can be run by the other, and are expected to keep doing so going forward, then I don't see a problem with that. |
How does patching Apptainer and SingularityCE (the |
It doesn't; it's just: why would we patch them both separately, if we only really need the patches for
We could even package siftool separately, and that could be enough... |
The development would be a lot easier if the reproducible image build functionality could be implemented upstream.
I seem to have lost track of this. What is siftool? |
Issue description
I intend to start using nixpkgs' singularity-tools for HPC applications. What follows is a list of hindrances and minor annoyances that I've immediately encountered.
The list is mostly for myself: I'm opening the issue to make this visible and maybe motivate people to voice ideas and comments.
Cf. this read on singularity with Nix for more inspiration
- VM-free image builds: Beat singularity-tools up to shape #177908 (comment)
- Singularity needs patching to make images reproducible: singularity-tools.buildImage: non-deterministic #279250 (mkfs generates random UUIDs)
- Give users control over contents, in particular allow removing bash: currently, including bash manually results in singularity-tools.buildImage throwing obscure errors
- Annoyance: we can compute diskSize from the built contents instead of choosing an arbitrary constant
- Hindrance: failing to pack any cuda-enabled dependencies. The error says: ... Cannot allocate memory. My /tmp is on disk, and I don't seem to be running out of RAM, so this message might be just another version of "not enough space left on (squashfs) device"
- Hindrance: buildImage interface doesn't expose apphelp)
- ...
- Get this merged: singularity: fix defaultPath and reflect upstream changes #158486
CC (possibly interested) @ShamrockLee @jbedo