
Write build context to a build directory instead of using birdhouse/ #362

Open
wants to merge 17 commits into master
Conversation

@mishaschwartz (Collaborator) commented Aug 2, 2023

Overview

  • Links all files needed to run docker compose commands to a build directory
  • Generates a unified docker-compose.yml file that can be easily inspected to see exactly what is being deployed.
  • Writes the generated output of template files to a new directory instead of beside the original template file
    • this will keep the repo cleaner and won't require keeping legacy files in the various .gitignore files peppered throughout the repository.
  • The build directory is in birdhouse/build by default but can be changed by setting the BUILD_DIR variable in env.local.
  • No longer requires that custom component config files be placed relative to ./birdhouse/docker-compose.yml since these files are copied to the relevant location in the build directory.
    • previously, file paths in docker-compose-extra.yml files were either absolute or relative paths from the
      birdhouse/ directory.
    • now, they are absolute or relative paths from the parent of the component's directory (this resolves to the build
      directory when the stack is started up).

Changes

Non-breaking changes

  • Changes deployment strategy to create a build directory

Breaking changes

  • Any custom components used in deployment need to update their docker-compose-extra.yml files to specify the location of a bind-mounted directory on the host relative to the component's directory, not relative to the birdhouse/ directory.

For example, a custom component with files at birdhouse/optional-components/custom_wps/:

before:

services:
   custom_wps:
      volumes:
         - './optional-components/custom_wps/wps.cfg:/wps.cfg'

after:

services:
   custom_wps:
      volumes:
         - './custom_wps/wps.cfg:/wps.cfg'

Related Issue / Discussion

Additional Information

  • This change will greatly simplify the code moving forward and will make the components more flexible and modular (since they can now be hosted anywhere on disk)
  • This will also hopefully simplify all future changes to components in the stack, since we no longer have to worry about breaking autodeploy if we miss adding a file generated from a template to .gitignore
  • This change is the first step in enabling different deployment options in the future (swarm, kubernetes, etc.). This separates the source code from the deployment configuration so that we can support different configurations in the future without having to make too many major changes to the existing source code.

@github-actions bot added labels on Aug 2, 2023: ci/deployment, ci/operations, ci/tests, component/cowbird, component/geoserver, component/jupyterhub, component/magpie, component/THREDDS, component/twitcher, component/weaver, documentation, feature/WPS
@crim-jenkins-bot (Collaborator)

E2E Test Results

DACCS-iac Pipeline Results

Build URL : http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1909/
Result : failure

BIRDHOUSE_DEPLOY_BRANCH : build-to-build-dir
DACCS_CONFIGS_BRANCH : master
PAVICS_E2E_WORKFLOW_TESTS_BRANCH : master
PAVICS_SDI_BRANCH : master

DESTROY_INFRA_ON_EXIT : true
PAVICS_HOST : https://host-140-216.rdext.crim.ca

Infrastructure deployment failed. Instance has not been destroyed. @matprov

@github-actions bot added the feature/node-registry label on Aug 2, 2023
Collaborator Author

@fmigneault please have a look at the changes in this file and let me know if I'm missing something.

Because this file is called by pavics-compose.sh, we end up building the build directory multiple times. I thought about making the "build" step only trigger when the first argument is "up", but it also looks like just calling these exec commands with docker directly is sufficient (since the container already exists).
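
A minimal sketch of that gating (create_build_dir is a hypothetical helper name, not an existing function in this repo):

case "$1" in
    up) create_build_dir ;;  # only assemble ${BUILD_DIR} when bringing the stack up
esac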

@crim-jenkins-bot (Collaborator)

E2E Test Results

DACCS-iac Pipeline Results

Build URL : http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1910/
Result : failure

BIRDHOUSE_DEPLOY_BRANCH : build-to-build-dir
DACCS_CONFIGS_BRANCH : master
PAVICS_E2E_WORKFLOW_TESTS_BRANCH : master
PAVICS_SDI_BRANCH : master

DESTROY_INFRA_ON_EXIT : true
PAVICS_HOST : https://host-140-133.rdext.crim.ca

Infrastructure deployment failed. Instance has not been destroyed. @matprov

@fmigneault (Collaborator)

@mishaschwartz

Before doing an in-depth review, I would like to better understand the reasoning, or the exact needs, for these changes.

Copies all files needed to run docker compose commands to a build directory

Why create a copy?
If one of my custom components uses a large directory, this will just be a massive overhead each time I run any docker compose command.

Generates a unified docker-compose.yml file that can be easily inspected to see exactly what is being deployed.

Running docker compose config or pavics-compose config with -o produces the unified config file. This can be done without any change, and preserves the original volume file paths, making it easier to debug and develop upon.
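
For example, something along these lines (per the above, assuming pavics-compose.sh forwards its arguments to docker-compose; the output filename is arbitrary):

# write the fully-resolved compose configuration to a single file for inspection
./pavics-compose.sh config -o unified-docker-compose.yml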

Writes the generated output of template files to a new directory instead of beside the original template file
this will keep the repo cleaner and won't require keeping legacy files in the various .gitignore files peppered throughout the repository.

This could be a more personal preference, but I find it much easier to switch between the instantiated file and its template when they are next to each other. Resolving the equivalent file hierarchy between the build directory and the source (especially with similar component/<service>/config/<service> variants) is a bigger cognitive burden IMO.

The explicit .gitignore under each component keeps the scope simple and easier to maintain. It also helps quickly flag invalid files under the right location during an incorrect auto-deploy with leftover files. If anything, I believe writing beside the original template file is an advantage.

This will also hopefully simplify all future changes to components in the stack, since we no longer have to worry about breaking autodeploy if we miss adding a file generated from a template to .gitignore

I believe having autodeploy break in such cases is intentional. This is to avoid files being silently included or generated from incorrectly configured volume mount configs (e.g.: from a typo in the path), which can take longer to debug or identify.

No longer requires that custom component config files be placed relative to ./birdhouse/docker-compose.yml since these files are copied to the relevant location in the build directory.
[...]
This change will greatly simplify the code moving forward and will make the components more flexible and modular (since they can now be hosted anywhere on disk)

Not sure if I'm misinterpreting this, but I have custom files and components loaded from other locations that are not under ./birdhouse, nor even inside the birdhouse-deploy repository at all. This already works.

The birdhouse directory in this case is equivalent to a Python package, or the src layout often used as an alternative. It represents the root of the source code. I'm not sure what was considered problematic about having files under birdhouse to distinguish it from tests, docs, etc.

@tlvu
I'd like to hear your impression as well regarding those changes.
I find it hard to justify all the configs these changes could break relative to what they accomplish.

@mishaschwartz (Collaborator Author)

I would like to better understand the reasoning, or the exact needs, for these changes.

That's understandable; I'm happy to clarify.

The main reasons for this change are described in the "Additional Information" section in the PR description above.
My main goal is to lay the groundwork for future changes that will allow for other deployment strategies (see my third point above).

Please see below for some additional clarifications:

Why create a copy?
If one of my custom components uses a large directory, this will just be a massive overhead each time I run any docker compose command.

Creating a copy is the simplest way of ensuring that the output of template files is created outside the birdhouse directory, and it requires the fewest changes to the pavics-compose.sh code. However, there are other options that would allow us to do the same thing with fewer copies:

  • we could write the output of template files to the build directory only (without copying the original template file)
  • we could translate the path of large files that need to be mounted to containers to an absolute path so we don't need to copy them.

I'm happy to make pavics-compose.sh smarter to avoid extra copies; a rough sketch of the first option is shown below.
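
For instance, assuming COMPOSE_DIR, BUILD_DIR and VARS are set as in pavics-compose.sh:

# render every template from the source tree directly into the build directory,
# mirroring its relative path and dropping the .template suffix
find "${COMPOSE_DIR}" -name '*.template' | while read -r FILE; do
    DEST="${BUILD_DIR}/${FILE#"${COMPOSE_DIR}/"}"
    DEST="${DEST%.template}"
    mkdir -p "$(dirname "${DEST}")"
    envsubst "${VARS}" < "${FILE}" > "${DEST}"
done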

Running docker compose config or pavics-compose config with -o produces the unified config file. This can be done without any change, and preserves the original volume file paths, making it easier to debug and develop upon.

Correct, that's the strategy we're going for here. The only difference is that we then use that generated config file to actually deploy the stack.

This could be a more personal preference, but I find it much easier to switch between the instantiated file and its template when they are next to each other.

Ok, that is interesting. I actually have the opposite preference, because it makes searching for a term in my IDE easier when I can specify whether I expect it in the build directory or the birdhouse directory. Searching becomes very cluttered when you have a lot of very similar files side by side.

The explicit .gitignore under each component keeps the scope simple and easier to maintain.

I'm interested in why you think that. To me, this strategy creates a lot of additional files that are not easy to maintain at all, especially since we need to keep references to files that were generated by old versions and aren't there anymore.

In my proposed changes, we no longer need to keep these files at all. The big .gitignore file added here is simply so that you don't have to clean up all the old files right away. It's a courtesy, but it is not required.

Not sure if I'm misinterpreting this, but I have custom files and components loaded from other locations that are not under ./birdhouse, nor even inside the birdhouse-deploy repository at all. This already works.

Right, and do all of those have to specify the paths in the docker-compose-extra.yml files as relative to the birdhouse directory or as absolute paths? Doesn't that make the code less portable and harder to share with others? If you move that directory, don't you have to update all the paths in the files?

I believe having autodeploy break in such cases is intentional.

I'm not sure I understand how this prevents this error:

This is to avoid files being silently included

Do you mean included in a git commit? The changes proposed here also prevent that, without the need to maintain lots of .gitignore files.

or generated from incorrectly configured volume mount configs (e.g.: from a typo in the path), which can take longer to debug or identify.

I don't understand how this is affected by whether a file is in the .gitignore file or not. Maybe an example would help me understand.

The birdhouse directory in this case is equivalent to a Python package.

Ok, that makes sense. To continue the analogy, the change I'm proposing here is similar to installing a Python package, where the files get written to a site-packages directory, or the strategy a lot of software employs of taking the code from a src directory and writing a version that is ready for deployment to a build or dist directory.

I find it hard to justify all the configs these changes could break relative to what they accomplish.

I don't think it breaks much at all. In fact, it should be largely backwards compatible.

@crim-jenkins-bot (Collaborator)

E2E Test Results

DACCS-iac Pipeline Results

Build URL : http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1949/
Result : failure

BIRDHOUSE_DEPLOY_BRANCH : build-to-build-dir
DACCS_CONFIGS_BRANCH : master
PAVICS_E2E_WORKFLOW_TESTS_BRANCH : master
PAVICS_SDI_BRANCH : master

DESTROY_INFRA_ON_EXIT : true
PAVICS_HOST : https://host-140-67.rdext.crim.ca

Infrastructure deployment failed. Instance has not been destroyed. @matprov

@crim-jenkins-bot (Collaborator)

E2E Test Results

DACCS-iac Pipeline Results

Build URL : http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1951/
Result : failure

BIRDHOUSE_DEPLOY_BRANCH : build-to-build-dir
DACCS_CONFIGS_BRANCH : master
PAVICS_E2E_WORKFLOW_TESTS_BRANCH : master
PAVICS_SDI_BRANCH : master

DESTROY_INFRA_ON_EXIT : true
PAVICS_HOST : https://

Infrastructure deployment failed. Instance has not been destroyed. @matprov

@crim-jenkins-bot (Collaborator)

E2E Test Results

DACCS-iac Pipeline Results

Build URL : http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1952/
Result : failure

BIRDHOUSE_DEPLOY_BRANCH : build-to-build-dir
DACCS_CONFIGS_BRANCH : master
PAVICS_E2E_WORKFLOW_TESTS_BRANCH : master
PAVICS_SDI_BRANCH : master

DESTROY_INFRA_ON_EXIT : true
PAVICS_HOST : https://

Infrastructure deployment failed. Instance has not been destroyed. @matprov

@crim-jenkins-bot (Collaborator)

E2E Test Results

DACCS-iac Pipeline Results

Build URL : http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1953/
Result : failure

BIRDHOUSE_DEPLOY_BRANCH : build-to-build-dir
DACCS_CONFIGS_BRANCH : master
PAVICS_E2E_WORKFLOW_TESTS_BRANCH : master
PAVICS_SDI_BRANCH : master

DESTROY_INFRA_ON_EXIT : true
PAVICS_HOST : https://

Infrastructure deployment failed. Instance has not been destroyed. @matprov

@crim-jenkins-bot (Collaborator)

E2E Test Results

DACCS-iac Pipeline Results

Build URL : http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1954/
Result : failure

BIRDHOUSE_DEPLOY_BRANCH : build-to-build-dir
DACCS_CONFIGS_BRANCH : master
PAVICS_E2E_WORKFLOW_TESTS_BRANCH : master
PAVICS_SDI_BRANCH : master

DESTROY_INFRA_ON_EXIT : true
PAVICS_HOST : https://

Infrastructure deployment failed. Instance has not been destroyed. @matprov

@tlvu (Collaborator) left a comment

Need to revert to keep the old docker-compose; otherwise looks good. Please allow me to test on my end before merging this.

 PAVICS_LOG_DIR="${PAVICS_LOG_DIR:-/tmp/pavics-compose}"
 CELERY_HEALTHCHECK="/opt/local/bin/weaver/celery-healthcheck"
 mkdir -p "${PAVICS_LOG_DIR}"
 # note: use 'tee' instead of capturing in variable to allow displaying results directly when running command
-${PAVICS_COMPOSE} exec weaver bash "${CELERY_HEALTHCHECK}" | tee "${PAVICS_LOG_DIR}/weaver.log"
+docker exec weaver bash "${CELERY_HEALTHCHECK}" | tee "${PAVICS_LOG_DIR}/weaver.log"
Collaborator

Just curious: so after this PR we cannot use pavics-compose.sh as a drop-in replacement for docker-compose, and we have to use docker directly?

Collaborator Author

We can still use pavics-compose.sh, especially now that the build directory is only re-created on "up". But I still think this is a good change, since it's no longer necessary to call pavics-compose.sh and this is simpler (and more consistent with what we're doing in other post-docker-up scripts).

Collaborator

I didn't even notice this change in my previous review, but I think it can actually produce quite different behaviour.
Using docker exec directly (instead of docker compose exec) will not apply all the other configuration in docker compose, such as mounted volumes, configs, networks, etc. Since the database network is not the default here, it is possible this change actually breaks the healthcheck.

Collaborator Author

docker exec runs a command in a container that is already created and already running. At the point where docker exec is called here, all volumes, networks, configs, etc. are already associated with the container. Even docker compose exec wouldn't apply any of the configurations you describe.

You can think of this as more or less the same as using ssh to execute a command on a machine that is already running.
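
For illustration, both of the following run the healthcheck inside the already-created container (assuming the default container name weaver); neither re-applies volumes, networks, or other compose configuration, since those are fixed when the container is created:

docker exec weaver bash /opt/local/bin/weaver/celery-healthcheck
docker compose exec weaver bash /opt/local/bin/weaver/celery-healthcheck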

Collaborator

This assumes that container_name was not overridden, while docker compose exec works with the service name instead of the container name.

# strip the search prefix to get the file's path relative to ${adir}
RELATIVE_FILE_PATH=${FILE#${adir}}
DEST="${BUILD_DIR}/${CONF_NAME}/${RELATIVE_FILE_PATH#/}"
mkdir -p "$(dirname "${DEST}")"
# hardlink the source file into the build directory
ln "${FILE}" "${DEST}"
Collaborator

Using hardlinks assumes all external repo checkouts are on the same filesystem as this checkout. This is a sensible assumption, but it should be documented nonetheless. Before, the external repo checkouts could be anywhere and it would still work.

Collaborator Author

@tlvu What if we try the hardlink with a fallback strategy to copy the files? That way, we don't lose the ability to host these on other filesystems.

Collaborator

Oh yes, that would be best. Test the ln exit code; if it errors out, use a regular cp instead.
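
A minimal sketch of that fallback, reusing FILE and DEST from the snippet above:

# hardlink when source and destination are on the same filesystem, copy otherwise
ln "${FILE}" "${DEST}" 2>/dev/null || cp "${FILE}" "${DEST}"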

Collaborator

We could also consider rsync instead of cp, to skip copies where no deltas are detected.
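
For example, a sketch only (SRC_DIR is a hypothetical per-component source directory):

# -a preserves attributes and skips unchanged files; --delete prunes files
# that no longer exist in the source
rsync -a --delete "${SRC_DIR}/" "${BUILD_DIR}/${CONF_NAME}/"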

Collaborator Author

The build directory is emptied before this step, so there would never be a case where there are no deltas. If we didn't clear the build directory first, this could be useful. But I worry that we'd end up with a lot of leftover files between builds if the configuration changes.

Collaborator

Could that be an option? Like allowing CLEAN_BUILD_DIR=true pavics-compose up -d to take advantage of quicker rsync? I think cleaning the build dir is valid to start from a fresh setup, but wiping it each time while developing the same feature is overkill.
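
A sketch of what that opt-out could look like in pavics-compose.sh (the CLEAN_BUILD_DIR name and its default are assumptions based on this comment; defaulting to true preserves the current wipe-on-every-build behaviour):

if [ "${CLEAN_BUILD_DIR:-true}" = 'true' ]; then
    rm -rf "${BUILD_DIR}"  # start from a fresh build directory
fi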

done

# we apply all the templates
find "$BUILD_DIR" -name '*.template' |
Collaborator

Ah nice, so in this scenario we still have the original template next to the instantiated file, for easy template-expansion verification.

Collaborator Author

Yeah! I didn't even think of that but yes that's a good point.

@fmigneault (Collaborator) Aug 15, 2023

So instead of the double search matches between a .template and its instantiated file, we will now have three, because the template is duplicated between the build and source dirs?

Collaborator Author

Yes

Collaborator Author

Actually, there is a fourth, because the birdhouse directory has a symlink in docs.

Collaborator

For our dev machines, I think we can set BUILD_DIR outside of the birdhouse-deploy checkout to avoid those duplicate files when searching.

Collaborator

If BUILD_DIR=birdhouse is supported, then this is a viable option for a dev machine.

Collaborator Author

Currently, this is not supported since the arrangement of files in the BUILD_DIR is not the same as the one in birdhouse.

But if we moved all of the components from components, config, optional-components, etc. into one single location, that could be possible. If we did that, though, you'd also have to put any additional components into that directory.

For me, I would just adjust my search filters to not read from the BUILD_DIR (or make it outside of the project folder, like @tlvu suggests). Then, if I explicitly want to search in the BUILD_DIR, I can select that folder to search. This is what I currently do to avoid duplicates from the docs/source/birdhouse/ directory.

I mostly use PyCharm and VSCode, and it's pretty easy to set up these filters; I can demo it for you if you'd like. I'm sure most other IDEs support similar search filtering.

echo ${COMPOSE_CONF_LIST} | tr ' ' '\n' | grep -v '^-f'

# the PROXY_SECURE_PORT is a little trick to make the compose file invalid without the usage of this wrapper script
PROXY_SECURE_PORT=443 HOSTNAME=${PAVICS_FQDN} docker compose --project-directory "${BUILD_DIR}" ${COMPOSE_CONF_LIST} config -o "${COMPOSE_FILE}"
Collaborator

You have to keep the old docker-compose, otherwise it won't work in the autodeploy container. Or you'll have to update the autodeploy container and ensure all existing deployments are up to date as well.

Or make pavics-compose.sh not rely on the locally installed docker-compose, but use the same image as the autodeploy. This would also allow catching incompatibility bugs earlier, rather than having to trigger autodeploy to catch them.

I think this docker upgrade should be in a separate PR to not burden this one. So just keep docker-compose for now.

@crim-jenkins-bot (Collaborator)

E2E Test Results

DACCS-iac Pipeline Results

Build URL : http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1950/
Result : failure

BIRDHOUSE_DEPLOY_BRANCH : build-to-build-dir
DACCS_CONFIGS_BRANCH : master
PAVICS_E2E_WORKFLOW_TESTS_BRANCH : master
PAVICS_SDI_BRANCH : master

DESTROY_INFRA_ON_EXIT : true
PAVICS_HOST : https://host-140-166.rdext.crim.ca

PAVICS-e2e-workflow-tests Pipeline Results

Tests URL : http://daccs-jenkins.crim.ca:80/job/PAVICS-e2e-workflow-tests/job/master/1261/

NOTEBOOK TEST RESULTS
[2023-08-15T15:25:29.840Z] ============================= test session starts ==============================
[2023-08-15T15:25:29.840Z] platform linux -- Python 3.9.16, pytest-7.3.1, pluggy-1.0.0
[2023-08-15T15:25:29.840Z] rootdir: /home/jenkins/agent/workspace/PAVICS-e2e-workflow-tests_master
[2023-08-15T15:25:29.840Z] plugins: anyio-3.6.1, dash-2.10.0, nbval-0.9.6, tornasync-0.6.0.post2, xdist-3.3.1
[2023-08-15T15:25:29.840Z] collected 236 items
[2023-08-15T15:25:29.840Z] 
[2023-08-15T15:25:37.993Z] notebooks-auth/test_thredds.ipynb ...........                            [  4%]
[2023-08-15T15:25:47.025Z] pavics-sdi-master/docs/source/notebooks/WCS_example.ipynb .......        [  7%]
[2023-08-15T15:25:57.372Z] pavics-sdi-master/docs/source/notebooks/WFS_example.ipynb ......         [ 10%]
[2023-08-15T15:26:04.890Z] pavics-sdi-master/docs/source/notebooks/WMS_example.ipynb ........       [ 13%]
[2023-08-15T15:33:38.455Z] pavics-sdi-master/docs/source/notebooks/climex.ipynb ............        [ 18%]
[2023-08-15T15:33:38.455Z] pavics-sdi-master/docs/source/notebooks/eccc-geoapi-climate-stations.ipynb . [ 19%]
[2023-08-15T15:33:47.545Z] ...............                                                          [ 25%]
[2023-08-15T15:33:58.288Z] pavics-sdi-master/docs/source/notebooks/eccc-geoapi-xclim.ipynb .....    [ 27%]
[2023-08-15T15:34:06.257Z] pavics-sdi-master/docs/source/notebooks/esgf-dap.ipynb ......            [ 30%]
[2023-08-15T15:34:23.659Z] pavics-sdi-master/docs/source/notebooks/forecasts.ipynb ......           [ 32%]
[2023-08-15T15:34:25.042Z] pavics-sdi-master/docs/source/notebooks/jupyter_extensions.ipynb .       [ 33%]
[2023-08-15T15:34:30.399Z] pavics-sdi-master/docs/source/notebooks/opendap.ipynb .......            [ 36%]
[2023-08-15T15:34:34.914Z] pavics-sdi-master/docs/source/notebooks/pavics_thredds.ipynb .....       [ 38%]
[2023-08-15T15:38:05.059Z] pavics-sdi-master/docs/source/notebooks/regridding.ipynb ............... [ 44%]
[2023-08-15T15:39:25.316Z] .............                                                            [ 50%]
[2023-08-15T15:39:30.533Z] pavics-sdi-master/docs/source/notebooks/rendering.ipynb ....             [ 51%]
[2023-08-15T15:39:33.118Z] pavics-sdi-master/docs/source/notebooks/subset-user-input.ipynb ........ [ 55%]
[2023-08-15T15:39:50.645Z] .................                                                        [ 62%]
[2023-08-15T15:39:58.543Z] pavics-sdi-master/docs/source/notebooks/subsetting.ipynb ......          [ 64%]
[2023-08-15T15:40:00.462Z] pavics-sdi-master/docs/source/notebook-components/weaver_example.ipynb . [ 65%]
[2023-08-15T15:40:15.460Z] ..F......                                                                [ 69%]
[2023-08-15T15:40:25.245Z] finch-master/docs/source/notebooks/dap_subset.ipynb ...........          [ 73%]
[2023-08-15T15:40:34.875Z] finch-master/docs/source/notebooks/finch-usage.ipynb ......              [ 76%]
[2023-08-15T15:40:36.301Z] PAVICS-landing-master/content/notebooks/climate_indicators/PAVICStutorial_ClimateDataAnalysis-1DataAccess.ipynb . [ 76%]
[2023-08-15T15:40:39.606Z] ......                                                                   [ 79%]
[2023-08-15T15:40:47.743Z] PAVICS-landing-master/content/notebooks/climate_indicators/PAVICStutorial_ClimateDataAnalysis-2Subsetting.ipynb . [ 79%]
[2023-08-15T15:41:02.590Z] .............                                                            [ 85%]
[2023-08-15T15:41:14.826Z] PAVICS-landing-master/content/notebooks/climate_indicators/PAVICStutorial_ClimateDataAnalysis-3Climate-Indicators.ipynb . [ 85%]
[2023-08-15T15:41:51.419Z] ....s.                                                                   [ 88%]
[2023-08-15T15:41:59.539Z] PAVICS-landing-master/content/notebooks/climate_indicators/PAVICStutorial_ClimateDataAnalysis-4Ensembles.ipynb . [ 88%]
[2023-08-15T15:42:15.123Z] ...                                                                      [ 89%]
[2023-08-15T15:42:33.244Z] PAVICS-landing-master/content/notebooks/climate_indicators/PAVICStutorial_ClimateDataAnalysis-5Visualization.ipynb . [ 90%]
[2023-08-15T15:42:56.683Z] ......                                                                   [ 92%]
[2023-08-15T15:42:59.404Z] notebooks/hummingbird.ipynb ............                                 [ 97%]
[2023-08-15T15:46:03.850Z] notebooks/stress-tests.ipynb .....                                       [100%]
[2023-08-15T15:46:03.850Z] 
[2023-08-15T15:46:03.850Z] =================================== FAILURES ===================================

@mishaschwartz (Collaborator Author)

Ok I've made a few updates in response to the comments here:

  • I took inspiration from cowbird and used links from the various component directories to the build directory instead of copying. This should remove the overhead of copying large files.
  • I only rebuild the build directory on "up".
  • I considered rewriting the compose files to serve large files from their original location, but that started getting really complex, and as @tlvu pointed out:

I am just afraid we are heading for a poor-man setup.py or makefile level of complexity to ensure we copy just enough to have a working deployment tree. Especially we have to handle external repos as well.

DEST=${FILE%.template}
cat ${FILE} | envsubst "$VARS" | envsubst "$OPTIONAL_VARS" > ${DEST}
done
BUILD_DIR="${BUILD_DIR:-"${COMPOSE_DIR}/build"}"
Collaborator

Sorry for nitpicking, but I'd prefer this default value to appear in birdhouse/default.env rather than here, so it is more easily discoverable by our users.

I know you already documented it in env.local.example, but the easier any config can be found, the better.
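
For example, the default could be declared in birdhouse/default.env with something like this sketch (assumes COMPOSE_DIR is set before default.env is sourced):

# default build directory location, overridable from env.local
export BUILD_DIR="${BUILD_DIR:-${COMPOSE_DIR}/build}"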

@crim-jenkins-bot (Collaborator)

E2E Test Results

DACCS-iac Pipeline Results

Build URL : http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1955/
Result : failure

BIRDHOUSE_DEPLOY_BRANCH : build-to-build-dir
DACCS_CONFIGS_BRANCH : master
PAVICS_E2E_WORKFLOW_TESTS_BRANCH : master
PAVICS_SDI_BRANCH : master

DESTROY_INFRA_ON_EXIT : true
PAVICS_HOST : https://host-140-166.rdext.crim.ca

Infrastructure deployment failed. Instance has not been destroyed. @matprov

@crim-jenkins-bot (Collaborator)

E2E Test Results

DACCS-iac Pipeline Results

Build URL : http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/1957/
Result : failure

BIRDHOUSE_DEPLOY_BRANCH : build-to-build-dir
DACCS_CONFIGS_BRANCH : master
PAVICS_E2E_WORKFLOW_TESTS_BRANCH : master
PAVICS_SDI_BRANCH : master

DESTROY_INFRA_ON_EXIT : true
PAVICS_HOST : https://

Infrastructure deployment failed. Instance has not been destroyed. @matprov

@tlvu self-requested a review on September 20, 2023 at 18:55
@crim-jenkins-bot (Collaborator)

E2E Test Results

DACCS-iac Pipeline Results

Build URL : http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/2096/
Result : failure

BIRDHOUSE_DEPLOY_BRANCH : build-to-build-dir
DACCS_CONFIGS_BRANCH : master
PAVICS_E2E_WORKFLOW_TESTS_BRANCH : master
PAVICS_SDI_BRANCH : master

DESTROY_INFRA_ON_EXIT : true
PAVICS_HOST : https://host-140-90.rdext.crim.ca

Infrastructure deployment failed. Instance has not been destroyed. @matprov

@crim-jenkins-bot (Collaborator)

E2E Test Results

DACCS-iac Pipeline Results

Build URL : http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/2097/
Result : failure

BIRDHOUSE_DEPLOY_BRANCH : build-to-build-dir
DACCS_CONFIGS_BRANCH : master
PAVICS_E2E_WORKFLOW_TESTS_BRANCH : master
PAVICS_SDI_BRANCH : master

DESTROY_INFRA_ON_EXIT : true
PAVICS_HOST : https://host-140-67.rdext.crim.ca

Infrastructure deployment failed. Instance has not been destroyed. @matprov

@crim-jenkins-bot (Collaborator)

E2E Test Results

DACCS-iac Pipeline Results

Build URL : http://daccs-jenkins.crim.ca:80/job/DACCS-iac-birdhouse/2128/
Result : failure

BIRDHOUSE_DEPLOY_BRANCH : build-to-build-dir
DACCS_CONFIGS_BRANCH : master
PAVICS_E2E_WORKFLOW_TESTS_BRANCH : master
PAVICS_SDI_BRANCH : master

DESTROY_INFRA_ON_EXIT : true
PAVICS_HOST : https://

Infrastructure deployment failed. Instance has not been destroyed. @matprov
