Merge develop into verdi #1791
Conversation
This issue was raised by @mottetm in an environment where the user did not have access to the command line. See `https://github.com/DropD/reentry/issues/29`
I'm trying a way to add nodes to a group that doesn't require an explicit check to see if the node is already there. There are several parts to this:

1) A UNIQUE constraint needs to be added to the association table, as we have in Django. Something like:

       CREATE UNIQUE INDEX db_dbgroup_dbnodes_dbgroup_id_dbnode_id_key ON public.db_dbgroup_dbnodes (dbgroup_id, dbnode_id);

2) Add the nodes one by one and flush after each one, which will cause an IntegrityError to appear if the constraint is violated.
3) Wrap the append/flush in a nested session so that it is automatically rolled back in the case of a constraint violation.
4) Finally, after everything, do the final commit (because flush just puts the changes in SQL's buffers).

NOTE: Before this gets considered for a merge we have to find a way to put the constraint in the model. I didn't see how to do this.
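The steps above can be sketched as follows. This is an illustrative sketch only, using SQLite instead of PostgreSQL/SQLAlchemy to stay self-contained; `add_nodes_to_group` and the inline SAVEPOINT handling are assumptions, not aiida's actual implementation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE db_dbgroup_dbnodes (dbgroup_id INTEGER, dbnode_id INTEGER)"
)
# Step 1: the UNIQUE constraint on the association table (shown here in SQLite)
conn.execute(
    "CREATE UNIQUE INDEX db_dbgroup_dbnodes_dbgroup_id_dbnode_id_key "
    "ON db_dbgroup_dbnodes (dbgroup_id, dbnode_id)"
)

def add_nodes_to_group(conn, group_id, node_ids):
    """Add nodes one by one, letting the constraint skip duplicates (steps 2-4)."""
    for node_id in node_ids:
        conn.execute("SAVEPOINT add_node")  # step 3: nested transaction
        try:
            # step 2: the insert triggers IntegrityError on a duplicate
            conn.execute(
                "INSERT INTO db_dbgroup_dbnodes VALUES (?, ?)", (group_id, node_id)
            )
        except sqlite3.IntegrityError:
            # only this one insert is rolled back, not the whole batch
            conn.execute("ROLLBACK TO SAVEPOINT add_node")
        conn.execute("RELEASE SAVEPOINT add_node")
    conn.commit()  # step 4: the final commit persists everything

add_nodes_to_group(conn, 1, [10, 11, 10])  # the duplicate 10 is silently skipped
```

The point of the SAVEPOINT is that a constraint violation only invalidates the single offending insert rather than aborting the surrounding transaction.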
When exposing the ports of a process, one would expect that the receiving port namespace would not just inherit the ports but also the mutable properties of the PortNamespace, such as the default, valid_type, help, validator and required attributes. The method responsible for creating the new namespace and exposing the ports, ProcessSpec._expose_ports, failed to do this. However, plumpy already defined a method that did just this: PortNamespace.absorb(). To fix this problem, we decided to move the expose functionality to plumpy and leverage the absorb method, which already properly overloads the mutable properties of the donor namespace onto the host namespace.

The only other addition is that the implementation of exposing ports in plumpy did not yet have a memory, as required by the implementation in aiida-core. This implementation, along with the proper namespace property inheritance, has now been released with plumpy v0.10.3, which is the new dependency. The expose methods in plumpy now also take an optional dictionary namespace_options, which can be used to override even the properties of the source namespace that would be inherited. If the ports are exposed into a namespace that does not yet exist in the target namespace, it will be constructed using this namespace_options dictionary as constructor keyword arguments. Note that these namespace options will *not* be used for any sub namespaces that have to be created along the way, nor will they be used to override any of the properties of already existing sub namespaces.
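A toy illustration of the absorb semantics described above (these are not plumpy's real classes; the `Namespace` class and its attributes are invented for the example):

```python
# Toy sketch: absorbing a donor namespace copies both its ports and its
# mutable properties (default, required, help), which is what the old
# _expose_ports implementation failed to do.
class Namespace:
    def __init__(self, ports=None, default=None, required=True, help=None):
        self.ports = dict(ports or {})
        self.default = default
        self.required = required
        self.help = help

    def absorb(self, donor):
        """Copy the ports *and* the mutable properties of the donor."""
        self.ports.update(donor.ports)
        self.default = donor.default
        self.required = donor.required
        self.help = donor.help

source = Namespace(ports={'x': int}, default=5, required=False, help='the x input')
target = Namespace()
target.absorb(source)  # target now carries default=5 and required=False too
```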
…1640) Currently it is possible to return a non-zero integer from a WorkChain outline step to abort its execution and assign the exit status to the node; however, there is no official API yet to make this easier. Here we define the concept of an ExitCode, a named tuple that takes an integer exit status and a descriptive message. Through the ProcessSpec, a WorkChain developer can add exit codes that correspond to errors that may crop up during the execution of the workchain. For example, the following spec definition:

```python
@classmethod
def define(cls, spec):
    super(CifCleanWorkChain, cls).define(spec)
    spec.exit_code(418, 'ERROR_I_AM_A_TEAPOT',
        message='workchain found itself in an identity crisis')
```

In one of the outline steps, the exit code can then be retrieved through either the string label or the integer exit status and returned:

```python
return self.exit_codes('ERROR_I_AM_A_TEAPOT')
```

or

```python
return self.exit_codes(418)
```

Returning an instance of ExitCode triggers the exact same mechanism as returning an integer from an outline step. The engine will detect the ExitCode and return its exit status as the result, triggering the abort and causing the exit status to be set on the node. Note that this addition required name changes in parts of the existing API. For example, the Calculation attribute `finish_status` was renamed to `exit_status`, because it can now be accompanied by a string `exit_message`. The `exit_status` and `exit_message` together form the `ExitCode`.
There was a bug in the execute method of the Waiting state of the JobProcess: in the case of a TransportException, which would occur for a SUBMISSIONFAILED or RETRIEVALFAILED calculation, the returned exit code was a JobCalculationExitStatus enum instance and not the required integer or ExitCode. This would except the process due to the check in the Process.on_finish call. A similar problem existed in the execmanager.parse_results method, which also did not guarantee the return of an exit code of a valid type. We now make sure that the bool, integer or ExitCode that can be returned by parse_results is converted to an ExitCode before it is returned.
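The conversion described in the last sentence can be sketched as follows. This is an illustrative sketch, not aiida's actual code; `ensure_exit_code` is an invented name and ExitCode is modelled as a plain named tuple:

```python
# Sketch: normalize the bool/int/ExitCode returned by parse_results into an
# ExitCode instance before handing it back to the engine.
from collections import namedtuple

ExitCode = namedtuple('ExitCode', ['status', 'message'])

def ensure_exit_code(result):
    """Convert a bool, int or ExitCode into an ExitCode instance."""
    if isinstance(result, ExitCode):
        return result
    if isinstance(result, bool):
        # bool must be checked before int, since bool is an int subclass:
        # success maps to exit status 0, failure to a generic non-zero status
        return ExitCode(0 if result else 1, None)
    if isinstance(result, int):
        return ExitCode(result, None)
    raise TypeError('result must be a bool, int or ExitCode')
```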
…g_slow [BLOCKED] Suggested performance improvement for adding nodes to group
The utility function `aiida.get_strict_version` will return a StrictVersion instance from `distutils.version` with the current version of the package.
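The point of returning a StrictVersion rather than a plain string is that versions then compare numerically. Illustrated here with `distutils.version` directly (the `aiida.get_strict_version` call itself is not reproduced, only the type it returns):

```python
# StrictVersion instances compare component-wise, whereas plain strings
# compare lexicographically and would order '0.9.0' after '0.12.0'.
from distutils.version import StrictVersion

v = StrictVersion('0.12.0')
assert StrictVersion('0.9.0') < v      # correct numeric ordering
assert '0.9.0' > '0.12.0'              # the string comparison gets it wrong
```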
…iidateam#1688) In 0.12.0 the code (correctly) avoided also asking for a list of job ids if the scheduler supported querying by user. Now it was instead asking for both the user and the job ids, which is not supported in general by the schedulers (see e.g. the docstring of Scheduler._get_joblist_command).
…m#1704) In the case where the return value of the stepper was not of type None, int or ExitCode, yet the stepper considered itself to be finished, for example when the last step was executed, the return value of the stepper would be returned, which could have any type. However, the result of _do_step should only be an exit status, i.e. int or ExitCode. To fix this, we explicitly check the return value of the stepper and convert any type other than int or ExitCode to None.
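An illustrative sketch of the fix (not aiida's exact code; `sanitize_stepper_result` is an invented name and ExitCode is modelled as a named tuple):

```python
# Sketch: only None, an int or an ExitCode may leave _do_step, so any other
# stepper return value (e.g. the result of the last executed step) becomes None.
from collections import namedtuple

ExitCode = namedtuple('ExitCode', ['status', 'message'])

def sanitize_stepper_result(result):
    """Pass through None, int and ExitCode; map anything else to None."""
    if result is None or isinstance(result, (int, ExitCode)):
        return result
    return None
```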
…aiidateam#1718) The documentation about the new workflow system has been completely rewritten in a single concise piece in concepts/workflows.rst. The old system had some duplication, which was removed. The leading docs for the legacy system are in old_workflows/index.rst
…#1705) We explicitly disallow calling the functions copy and deepcopy from the python copy module on a Node instance. The reason is that the behavior of this for a Calculation node, with respect to what should be returned and how this would affect the graph, is not clear. Rather, to clone a Calculation, the caching mechanism should be used, or a new ProcessBuilder could be generated from a completed Process that would recreate the necessary inputs, which could then easily be relaunched. For Data nodes, the behavior can be defined a bit better. Therefore we allow calling deepcopy on a Data node, which will call the internal clone method and return an identical, but unstored, clone of the original Data node. The clone will have no links and a newly generated UUID. Additionally, we raise on __deepcopy__ for a stored Data node. Deep copying an unstored node is fine, however, and just pipes through to the clone method.
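A hypothetical minimal sketch of the described behavior (class and attribute names are illustrative, not aiida's actual API): deepcopy of an unstored Data node pipes through `clone()`, while deepcopy of a stored one raises:

```python
# Sketch: route deepcopy through clone() for unstored Data nodes and
# forbid it entirely once the node has been stored.
import copy
import uuid

class Data(object):
    def __init__(self):
        self.uuid = str(uuid.uuid4())  # clones get a newly generated UUID
        self.is_stored = False

    def clone(self):
        """Return an identical but unstored copy with no links and a new UUID."""
        return Data()

    def __deepcopy__(self, memo):
        if self.is_stored:
            raise RuntimeError('deep copying a stored Data node is not supported')
        return self.clone()

node = Data()
clone = copy.deepcopy(node)  # allowed: the node is unstored
node.is_stored = True
# copy.deepcopy(node) would now raise RuntimeError
```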
* travis: explicitly install postgresql-server-dev-9.5. Based on travis-ci/travis-ci#9011 it seems we now have to specify the dependency on the postgresql-dev package explicitly.
* Revert "skip pgtest related testcases until aiidateam#1722 is fixed (aiidateam#1724)". This reverts commit 777c16c and fixes aiidateam#1723.
* travis: add proper postgresql bin-dir to path. Fixes aiidateam#1722.
* travis: fix ssh-keygen for private instances
SQLA/psycopg2 allows connecting via Unix sockets instead of TCP/IP. This allows for secure password-less authentication via PostgreSQL's socket peer authentication, which is the default on many distros. Example `pg_hba.conf`:

    local all all peer

This would allow a user `test` access to the PostgreSQL cluster if a user `test` exists in PostgreSQL, without any password (implicitly assuming that it is sufficient that the user was already authenticated by the OS). Using `pg_ident.conf` one can also map local users to PostgreSQL users with a different name. Example `pg_hba.conf`:

    local aiida aiida peer map=aiida

Example `pg_ident.conf`:

    aiida test aiida

This would allow the user `aiida` access to the database `aiida`, and the map allows the system user `test` to impersonate the database user `aiida`. psycopg2 automatically tries the local socket connection if no port is specified, but for that the connection string must not contain the colon character otherwise required for the host:port separation.
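The last point about the colon can be illustrated with two hypothetical connection URLs (the user and database names are placeholders):

```python
# No host and no port between '@' and the database name: psycopg2 falls
# back to the local Unix socket, enabling peer authentication.
socket_url = 'postgresql://aiida@/aiida'

# A host:port pair forces a TCP/IP connection instead.
tcp_url = 'postgresql://aiida@localhost:5432/aiida'

def netloc_after_user(url):
    """Return the host[:port] part between the '@' and the database name."""
    return url.split('@', 1)[1].rsplit('/', 1)[0]
```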
* Fix issue with JobCalculation inputs caching by copying the whole folder of the cached node.
* Add a test that checks that cached calculations have the 'raw_input' folder.
* Use abspath directly instead of via the 'get_abs_path' method.
* Set move=False explicitly.
* Add a test that the original calculation doesn't lose its raw_input folder.
* Fixing the missing headers
* Upping aiida version number, and reinstating the version for the sphinx extension (needed)
* Updated CHANGELOG
This was broken by running the command in a new process group
* adding minor bugfix for the direct parser to append the memory request to the job template
* style update
Looks like you lost options.EXIT_STATUS in the merge somehow... edit: more like forgot to rename options.FINISH_STATUS -> options.EXIT_STATUS
Force-pushed from dc5c9d5 to d2f8a3e
Codecov Report

```diff
@@            Coverage Diff             @@
##            verdi    #1791      +/-   ##
==========================================
+ Coverage   65.53%   65.64%    +0.11%
==========================================
  Files         312      312
  Lines       33004    33006        +2
==========================================
+ Hits        21628    21667       +39
+ Misses      11376    11339       -37
```

Continue to review full report at Codecov.
This is in preparation of merging the `verdi` branch, which was used to migrate the `verdi` CLI to the new `click` infrastructure.