docker: avoid re-entering XDG aliasing commands #444
Conversation
Perhaps we should also add some checks to ensure this never causes undetected problems again? For example, a simple …
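For instance (a minimal sketch only; the concrete check proposed above is elided, and the target name, image tag and commands are assumptions):

```makefile
# hypothetical smoke test against the freshly built image:
# fail the build if the CLI or the core Python modules are broken
DOCKER_TAG ?= ocrd/all:maximum
smoke-test:
	docker run --rm $(DOCKER_TAG) ocrd --version
	docker run --rm $(DOCKER_TAG) python3 -c "import ocrd, ocrd_utils, ocrd_models"
```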
Today I tried to pull …
I don't know. I can still pull and run the broken images.
done: a0937b6. (I manually cancelled the job to save us unnecessary credits.)
That should be done in the CI file, but to test unprivileged users, we first have to set some up in the CI environment, right?
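A rough sketch of what setting that up could look like (the user name, target names and the sudo-based approach are assumptions, not the actual CI configuration):

```makefile
# hypothetical: create an unprivileged user in the CI environment and
# re-run the (equally hypothetical) smoke test as that user
test-unprivileged:
	sudo useradd --create-home ciuser
	sudo -u ciuser $(MAKE) -C $(CURDIR) smoke-test
```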
@MehmedGIT @kba here's the current …
So there's plenty of room to reduce the number of layers/steps...
@bertsky we have briefly discussed that with @kba after the tech call. Some of the commands could be abstracted away into separate script(s) and called with a single RUN command. The registry layer error is from that. Pulling the …
I'm already in the middle of that change – coming up ... here.
Note: ffefb8f was not possible with earlier versions of Docker. The original idea of using multiple …
So reducing the number of steps actually brought down build time by 4 min. How about trying to reactivate our parallel build again? Also, in the same experiment: let's try storing test results.
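A minimal sketch of what re-enabling parallelism could look like (the variable name, job count and target are assumptions, not the actual ocrd_all rules):

```makefile
# hypothetical: build the independent module targets with several make jobs
# instead of strictly one after the other
JOBS ?= 4
modules:
	$(MAKE) -j $(JOBS) $(OCRD_MODULES)
```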
Please also consider the case in which the build is interrupted and left broken. Rerunning the build should then remove the files used for the parallel build. I remember facing such issues in the past.
I don't remember, sorry. Can you please elaborate? So rerunning the CI becomes impossible if we use …? (BTW, the current version is still sequential, because I confused …)
AFAIR, there were some files created under the …
Oh that! Yes, that can happen. Not sure how to fix this (I guess it's really a GNU parallel issue). But this should not affect CI/CD, since this will spawn a fresh environment each time. (Also, setting …)
Just deleting the … Right, it does not affect the CI/CD pipeline.
Oops! Seems we do have a test failure when running … So ocrd_network complains that it cannot import …
Ouch! I am seeing: …
Inconsistent core version, as you also found out: 2.64.0 instead of 2.66.1. @joschrew also had his setup failing due to that error.
So IIUC there are multiple things going on: …
You can never be too paranoid.
(because it will be needed for sub-venvs anyway)
For native installations, that's not recommendable. First of all, it is unexpected to remove stuff in the user's home directory. Second, there might be other use cases for that, even some running at the same time. Third, this is also the place where …
Ok, we are still failing, but this time ocr-fileformat is the culprit: …
I'll update to OCR-D/ocrd_fileformat#186. The good news is that with -j4 we reduced build time by another 10 min.
Next CI failure almost feels great! I think I've seen those already. IINM, OCR-D/core#1243 should fix them. But ocrd_all is not the place for this. So from my side, this is ready.
Note: I've cancelled the CircleCI workflow for the latest change, which only affects the GH Actions workflow for the Docker build (in similar fashion). I manually triggered that here.
Well done!
Note: 60c7f7a is in preparation for OCR-D/core#1250 (but perhaps we should also do something similar in native installations...)
That would be ideal. I was wondering whether that would not produce some inconsistencies between the single …
So perhaps we should embed it there (cf. line 725 in 909fdaf). For example:

```makefile
# already depend on OCRD_MODULES and OCRD_EXECUTABLES:
all: ocrd-all-tool.json ocrd-all-module-dir.json
	. $(ACTIVATE_VENV) && cp -f $^ `python -c "import ocrd; print(ocrd.__path__[0])"`
```
Notice that the recipe which creates ocrd-all-tool.json (via ocrd-all-tool.py) merely concatenates all individual ocrd-tool.json files. Therefore, as long as these are up to date, and the recipe is run after each change to the installation, the combined file will also be up to date. But if someone merely updates a single tool without running the ocrd-all-tool.json recipe afterwards, then indeed there will be an inconsistency.
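To illustrate the dependency logic described above (a sketch only, not the actual ocrd_all rule; the script invocation and the per-module ocrd-tool.json paths are assumptions):

```makefile
# sketch: if the aggregate file depends on every module's ocrd-tool.json,
# make regenerates it whenever any single tool description changes
ocrd-all-tool.json: $(OCRD_MODULES:%=%/ocrd-tool.json)
	ocrd-all-tool.py $^ > $@
```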
We could rewrite our documentation to say that even for individual tool updates, the recommended method is always via … BTW, the problem of not being up to date also arises in the network implementation (or other long-running instances of ocrd_utils): because the runtime …
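If the recommendation ends up being "always go through make", the documented workflow for a single-module update might look roughly like this (the module name and target names are just examples, not the actual documentation):

```makefile
# hypothetical example for the docs: update one module, then regenerate
# the aggregate files so they stay consistent with the installation
update-ocrd_tesserocr:
	$(MAKE) ocrd_tesserocr
	$(MAKE) ocrd-all-tool.json ocrd-all-module-dir.json
```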
CI still fails with 909fdaf – I wonder what's the matter with the additional …
Ah, not obscured, but disguised due to out-of-order output in the parallel build. Actual error:

```
#12 31.65 ERROR: Error checking for conflicts.
#12 31.65 Traceback (most recent call last):
#12 31.65   File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 3021, in _dep_map
#12 31.65     return self.__dep_map
#12 31.65   File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 2815, in __getattr__
#12 31.65     raise AttributeError(attr)
#12 31.65 AttributeError: _DistInfoDistribution__dep_map
#12 31.65
#12 31.65 During handling of the above exception, another exception occurred:
#12 31.65
#12 31.65 Traceback (most recent call last):
#12 31.65   File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 3012, in _parsed_pkg_info
#12 31.65     return self._pkg_info
#12 31.65   File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 2815, in __getattr__
#12 31.65     raise AttributeError(attr)
#12 31.65 AttributeError: _pkg_info
#12 31.65
#12 31.65 During handling of the above exception, another exception occurred:
#12 31.65
#12 31.65 Traceback (most recent call last):
#12 31.65   File "/usr/local/sub-venv/headless-tf1/lib/python3.8/site-packages/pip/_internal/commands/install.py", line 543, in _warn_about_conflicts
#12 31.65   File "/usr/local/sub-venv/headless-tf1/lib/python3.8/site-packages/pip/_internal/operations/check.py", line 114, in check_install_conflicts
#12 31.65   File "/usr/local/sub-venv/headless-tf1/lib/python3.8/site-packages/pip/_internal/operations/check.py", line 53, in create_package_set_from_installed
#12 31.65   File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 2736, in requires
#12 31.65     dm = self._dep_map
#12 31.65   File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 3023, in _dep_map
#12 31.65     self.__dep_map = self._compute_dependencies()
#12 31.65   File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 3032, in _compute_dependencies
#12 31.65     for req in self._parsed_pkg_info.get_all('Requires-Dist') or []:
#12 31.65   File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 3014, in _parsed_pkg_info
#12 31.65     metadata = self.get_metadata(self.PKG_INFO)
#12 31.65   File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 1420, in get_metadata
#12 31.65     value = self._get(path)
#12 31.65   File "/usr/share/python-wheels/pkg_resources-0.0.0-py2.py3-none-any.whl/pkg_resources/__init__.py", line 1616, in _get
#12 31.65     with open(path, 'rb') as stream:
#12 31.65 FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/sub-venv/headless-tf1/lib/python3.8/site-packages/pip-20.0.2.dist-info/METADATA'
#12 31.65 Installing collected packages: pip, setuptools, wheel
#12 31.65   Attempting uninstall: pip
#12 31.66     Found existing installation: pip 20.0.2
#12 31.66     Can't uninstall 'pip'. No files were found to uninstall.
#12 32.50   Attempting uninstall: setuptools
#12 32.50     Found existing installation: setuptools 44.0.0
#12 32.50   Attempting uninstall: setuptools
#12 32.50     Found existing installation: setuptools 44.0.0
#12 32.55     Uninstalling setuptools-44.0.0:
#12 32.55       Successfully uninstalled setuptools-44.0.0
#12 32.56     Uninstalling setuptools-44.0.0:
#12 32.56 ERROR: Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: '/usr/local/sub-venv/headless-tf1/bin/easy_install'
```

So perhaps just copying to core/src/ocrd was not such a bright idea after all – it may cause problems next time we want to install. Also, we forgot to install in the sub-venvs as well.
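If "install in the sub-venvs as well" refers to the aggregate JSON files, a sketch of that step could look like this (the target name, the loop and the venv paths are assumptions for illustration, not the actual change):

```makefile
# hypothetical: copy the aggregate files into the ocrd package directory of
# the top-level venv and of every sub-venv (paths are assumptions)
install-all-tool-json: ocrd-all-tool.json ocrd-all-module-dir.json
	for venv in $(VIRTUAL_ENV) $(VIRTUAL_ENV)/sub-venv/*; do \
	  cp -f $^ `$$venv/bin/python -c "import ocrd; print(ocrd.__path__[0])"`; \
	done
```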
The new recipe seems to work now. (We can build successfully; only the core test is running into the 4 unrelated assertion failures discussed earlier.) I also tested native installation in a GHA build again.
So to sum up (esp. for the changelog), this PR brings: …
fixes #394