
[Enhancement]: Better method for sending multiple files to the container #811

Closed
dgiddins opened this issue Feb 28, 2023 · 1 comment · Fixed by #913
Labels
enhancement New feature or request

Comments

@dgiddins

Problem

.WithResourceMapping() allows us to send files to the container. When there are many files to send, calling StartAsync() on the container can time out, probably because the files are sent in parallel:

```csharp
if (configuration.ResourceMappings != null)
{
    await Task.WhenAll(configuration.ResourceMappings.Values.Select(resourceMapping => CopyResourceMapping(id, resourceMapping)))
        .ConfigureAwait(false);
}
```

Solution

Allow sending archives that are then automatically extracted in the container. Alternatively, this could be implemented by providing a list of files to the container builder, so that the builder creates the archive and extracts it into a specified target directory.
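As a usage sketch, the builder-level variant could look like the following. The directory overload of `WithResourceMapping` shown here is hypothetical and illustrative only, not the current API:

```csharp
// Hypothetical builder API: copy a whole directory as one tar archive that
// the library extracts into the target path inside the container.
var container = new ContainerBuilder()
    .WithImage("postgres:15")
    .WithResourceMapping(new DirectoryInfo("./sqitch"), "/opt/sqitch")
    .Build();
```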

```csharp
public async Task CopyFileAsync(string id, string filePath, byte[] fileContent, int accessMode, int userId, int groupId, CancellationToken ct = default)
{
    IOperatingSystem os = new Unix(dockerEndpointAuthConfig: null);
    var containerPath = os.NormalizePath(filePath);

    using (var tarOutputMemStream = new MemoryStream())
    {
        using (var tarOutputStream = new TarOutputStream(tarOutputMemStream, Encoding.Default))
        {
            tarOutputStream.IsStreamOwner = false;

            var header = new TarHeader();
            header.Name = containerPath;
            header.UserId = userId;
            header.GroupId = groupId;
            header.Mode = accessMode;
            header.Size = fileContent.Length;

            var entry = new TarEntry(header);

            await tarOutputStream.PutNextEntryAsync(entry, ct)
                .ConfigureAwait(false);

#if NETSTANDARD2_1_OR_GREATER
            await tarOutputStream.WriteAsync(fileContent, ct)
                .ConfigureAwait(false);
#else
            await tarOutputStream.WriteAsync(fileContent, 0, fileContent.Length, ct)
                .ConfigureAwait(false);
#endif

            await tarOutputStream.CloseEntryAsync(ct)
                .ConfigureAwait(false);
        }

        tarOutputMemStream.Seek(0, SeekOrigin.Begin);

        await this.containers.ExtractArchiveToContainerAsync(id, Path.AltDirectorySeparatorChar.ToString(), tarOutputMemStream, ct)
            .ConfigureAwait(false);
    }
}
```
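Building on the single-file example above, a multi-file variant could batch all entries into one tarball, so that a single `ExtractArchiveToContainerAsync` call replaces many parallel copies. This is only a sketch: the `CopyFilesAsync` name and the `files` dictionary (container path → file content) are assumptions, not existing API.

```csharp
using System.Collections.Generic;
using System.IO;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using ICSharpCode.SharpZipLib.Tar;

// Sketch: send several files in one tarball instead of one request per file.
public async Task CopyFilesAsync(string id, IDictionary<string, byte[]> files, int accessMode, int userId, int groupId, CancellationToken ct = default)
{
    using (var tarOutputMemStream = new MemoryStream())
    {
        using (var tarOutputStream = new TarOutputStream(tarOutputMemStream, Encoding.Default))
        {
            tarOutputStream.IsStreamOwner = false;

            foreach (var file in files)
            {
                // One tar entry per file; file.Key is the path inside the container.
                var header = new TarHeader();
                header.Name = file.Key;
                header.UserId = userId;
                header.GroupId = groupId;
                header.Mode = accessMode;
                header.Size = file.Value.Length;

                await tarOutputStream.PutNextEntryAsync(new TarEntry(header), ct)
                    .ConfigureAwait(false);
                await tarOutputStream.WriteAsync(file.Value, 0, file.Value.Length, ct)
                    .ConfigureAwait(false);
                await tarOutputStream.CloseEntryAsync(ct)
                    .ConfigureAwait(false);
            }
        }

        tarOutputMemStream.Seek(0, SeekOrigin.Begin);

        // A single extract call for the whole batch.
        await this.containers.ExtractArchiveToContainerAsync(id, Path.AltDirectorySeparatorChar.ToString(), tarOutputMemStream, ct)
            .ConfigureAwait(false);
    }
}
```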

Benefit

When using Sqitch to manage database schema changes, the schema repository can grow very large, resulting in many files that need to be copied.

Alternatives

Using volume mounts, which is not advised.

Would you like to help contributing this enhancement?

Yes

@dgiddins dgiddins added the enhancement New feature or request label Feb 28, 2023
@HofmeisterAn
Collaborator

To create a Docker image, a lot of similar code is necessary. I think we can reuse and restructure a lot of it. This part implements ITarArchive to create a tarball of the Docker image build context, which includes all files and subdirectories of a directory.

I believe we need to overload CopyFileAsync with source and target paths and determine whether the source path is a directory. Based on the FileSystemInfo, we can construct the tarball and pass it to the Docker endpoint.
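A rough sketch of such an overload. Everything here is hypothetical, not the final implementation: `CopyFilesAsync` is an assumed helper that sends several files as one tarball, `CopyFileAsync` is the single-file method from the issue description, and `Path.GetRelativePath` requires .NET Core / netstandard2.1:

```csharp
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

// Sketch: dispatch on whether sourcePath is a directory or a single file.
public async Task CopyAsync(string id, string sourcePath, string targetPath, int accessMode, int userId, int groupId, CancellationToken ct = default)
{
    if (Directory.Exists(sourcePath))
    {
        // Directory: collect every file relative to sourcePath and send the
        // whole set as one tarball rooted at targetPath.
        var files = Directory.GetFiles(sourcePath, "*", SearchOption.AllDirectories)
            .ToDictionary(
                path => targetPath + "/" + Path.GetRelativePath(sourcePath, path).Replace('\\', '/'),
                path => File.ReadAllBytes(path));

        await CopyFilesAsync(id, files, accessMode, userId, groupId, ct)
            .ConfigureAwait(false);
    }
    else
    {
        // Single file: reuse the existing one-file code path.
        await CopyFileAsync(id, targetPath + "/" + Path.GetFileName(sourcePath), File.ReadAllBytes(sourcePath), accessMode, userId, groupId, ct)
            .ConfigureAwait(false);
    }
}
```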
