diff --git a/branch/azamat/baselines/update-perf-info/html/.buildinfo b/branch/azamat/baselines/update-perf-info/html/.buildinfo new file mode 100644 index 00000000000..52dedce8075 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/.buildinfo @@ -0,0 +1,4 @@ +# Sphinx build info version 1 +# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done. +config: ac5b0bf17170fe40ba6b3c3a0b1ccacd +tags: 645f666f9bcd5a90fca523b33c5a78b7 diff --git a/branch/azamat/baselines/update-perf-info/html/.nojekyll b/branch/azamat/baselines/update-perf-info/html/.nojekyll new file mode 100644 index 00000000000..e69de29bb2d diff --git a/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.BuildTools.html b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.BuildTools.html new file mode 100644 index 00000000000..f4b628e632a --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.BuildTools.html @@ -0,0 +1,266 @@ + + + + + + + CIME.BuildTools package — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

CIME.BuildTools package

+
+

Submodules

+
+
+

CIME.BuildTools.configure module

+

This script writes CIME build information to a directory.

+

The pieces of information that will be written include:

+
  1. Machine-specific build settings (i.e. the “Macros” file).
  2. File-specific build settings (i.e. “Depends” files).
  3. Environment variable loads (i.e. the env_mach_specific files).
+

The .env_mach_specific.sh and .env_mach_specific.csh files are specific to a +given compiler, MPI library, and DEBUG setting. By default, these will be the +machine’s default compiler, the machine’s default MPI library, and FALSE, +respectively. These can be changed by setting the environment variables +COMPILER, MPILIB, and DEBUG, respectively.
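The override behavior described above can be sketched as follows. This is a simplified illustration only; the real resolution logic lives inside CIME.BuildTools.configure, and the machine defaults shown here ("gnu", "openmpi") are hypothetical values:

```python
def resolve_build_settings(environ, machine_defaults):
    """Apply COMPILER/MPILIB/DEBUG environment overrides to machine defaults."""
    return {
        "compiler": environ.get("COMPILER", machine_defaults["compiler"]),
        "mpilib": environ.get("MPILIB", machine_defaults["mpilib"]),
        # DEBUG defaults to FALSE unless the environment says otherwise
        "debug": environ.get("DEBUG", "FALSE").upper() == "TRUE",
    }

# Hypothetical machine defaults, for illustration only.
defaults = {"compiler": "gnu", "mpilib": "openmpi"}
print(resolve_build_settings({}, defaults))
print(resolve_build_settings({"COMPILER": "intel", "DEBUG": "TRUE"}, defaults))
```

With no overrides set, the machine defaults and DEBUG=False are returned; setting COMPILER or DEBUG in the environment replaces only those entries.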

+
+
+class CIME.BuildTools.configure.FakeCase(compiler, mpilib, debug, comp_interface, threading=False)[source]
+

Bases: object

+
+
+get_build_threaded()[source]
+
+ +
+
+get_case_root()[source]
+

Returns the root directory for this case.

+
+ +
+
+get_value(attrib)[source]
+
+ +
+
+set_value(attrib, value)[source]
+

Sets a given variable value for the case

+
+ +
+ +
+
+CIME.BuildTools.configure.configure(machobj, output_dir, macros_format, compiler, mpilib, debug, comp_interface, sysos, unit_testing=False, noenv=False, threaded=False, extra_machines_dir=None)[source]
+

Add Macros, Depends, and env_mach_specific files to a directory.

+

Arguments:
machobj - Machines argument for this machine.
output_dir - Directory in which to place output.
macros_format - Container containing the string ‘Makefile’ to produce
Makefile Macros output, and/or ‘CMake’ for CMake output.
compiler - String containing the compiler vendor to configure for.
mpilib - String containing the MPI implementation to configure for.
debug - Boolean specifying whether debugging options are enabled.
unit_testing - Boolean specifying whether we’re running unit tests (as
opposed to a system run).
extra_machines_dir - String giving the path to an additional directory that will be
searched for cmake_macros.

+
+
+
+ +
+
+CIME.BuildTools.configure.copy_depends_files(machine_name, machines_dir, output_dir, compiler)[source]
+

Copy any system or compiler Depends files if they do not exist in the output directory. +If there is a match for Depends.machine_name.compiler, copy that and ignore the others.
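The precedence described above can be sketched as follows. This is a simplified illustration with hypothetical file names; the actual copy logic lives in copy_depends_files:

```python
def pick_depends_files(available, machine_name, compiler):
    """If Depends.<machine>.<compiler> exists, take only that file;
    otherwise fall back to machine- or compiler-specific Depends files."""
    exact = f"Depends.{machine_name}.{compiler}"
    if exact in available:
        return [exact]
    return [n for n in available
            if n in (f"Depends.{machine_name}", f"Depends.{compiler}")]

# Hypothetical file listing, for illustration only.
files = ["Depends.mymach", "Depends.gnu", "Depends.mymach.gnu"]
print(pick_depends_files(files, "mymach", "gnu"))    # exact match wins
print(pick_depends_files(files, "mymach", "intel"))  # falls back to machine file
```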

+
+ +
+
+CIME.BuildTools.configure.generate_env_mach_specific(output_dir, machobj, compiler, mpilib, debug, comp_interface, sysos, unit_testing, threaded, noenv=False)[source]
+

env_mach_specific generation.

+
+ +
+
+

Module contents

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.Servers.html b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.Servers.html new file mode 100644 index 00000000000..14eef41a6f2 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.Servers.html @@ -0,0 +1,333 @@ + + + + + + + CIME.Servers package — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

CIME.Servers package

+
+

Submodules

+
+
+

CIME.Servers.ftp module

+

FTP Server class. Interact with a server using FTP protocol

+
+
+class CIME.Servers.ftp.FTP(address, user='', passwd='', server=None)[source]
+

Bases: GenericServer

+
+
+fileexists(rel_path)[source]
+

Returns True if rel_path exists on server

+
+ +
+
+classmethod ftp_login(address, user='', passwd='')[source]
+
+ +
+
+getdirectory(rel_path, full_path)[source]
+
+ +
+
+getfile(rel_path, full_path)[source]
+

Get file from rel_path on server and place in location full_path on client; +fail if full_path already exists on client. Return True if successful.
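The getfile contract above (fail if the destination already exists on the client, return True on success) can be illustrated with a self-contained sketch; the fetch callable below is a stand-in for the actual FTP transfer, not CIME code:

```python
import os
import tempfile

def getfile_contract(fetch, rel_path, full_path):
    """Refuse to overwrite an existing client file; return True on success."""
    if os.path.exists(full_path):
        return False
    with open(full_path, "wb") as f:
        f.write(fetch(rel_path))
    return True

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "input.nc")
    fake_fetch = lambda rel_path: b"bytes"   # placeholder for the real transfer
    print(getfile_contract(fake_fetch, "pub/input.nc", target))  # True
    print(getfile_contract(fake_fetch, "pub/input.nc", target))  # False: exists
```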

+
+ +
+ +
+
+

CIME.Servers.generic_server module

+

Generic Server class. There should be little or no functionality in this class; it serves only +to make sure that specific server classes maintain a consistent argument list and functionality +so that they are interchangeable objects.
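The interchangeability described above can be sketched with a minimal stand-in class; DemoServer and its pretend remote listing are hypothetical, and the real implementations are CIME.Servers.ftp.FTP, CIME.Servers.svn.SVN, and friends:

```python
class DemoServer:
    """Stand-in obeying the common server interface (fileexists/getfile)."""
    def __init__(self, address, user='', passwd=''):
        self.address = address
        self._contents = {"known.nc"}     # pretend remote listing
    def fileexists(self, rel_path):
        return rel_path in self._contents
    def getfile(self, rel_path, full_path):
        return self.fileexists(rel_path)  # actual transfer elided

def fetch_if_present(server, rel_path, full_path):
    """Caller code can use any server class interchangeably."""
    return server.fileexists(rel_path) and server.getfile(rel_path, full_path)

print(fetch_if_present(DemoServer("host.example"), "known.nc", "/tmp/known.nc"))
print(fetch_if_present(DemoServer("host.example"), "missing.nc", "/tmp/m.nc"))
```

Because every server class keeps the same argument list and methods, fetch_if_present never needs to know which protocol it is talking to.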

+
+
+class CIME.Servers.generic_server.GenericServer(host=' ', user=' ', passwd=' ', acct=' ', timeout=<object object>)[source]
+

Bases: object

+
+
+fileexists(rel_path)[source]
+

Returns True if rel_path exists on server

+
+ +
+
+getfile(rel_path, full_path)[source]
+

Get file from rel_path on server and place in location full_path on client; +fail if full_path already exists on client. Return True if successful.

+
+ +
+ +
+
+

CIME.Servers.gftp module

+

GridFTP Server class. Interact with a server using GridFTP protocol

+
+
+class CIME.Servers.gftp.GridFTP(address, user='', passwd='')[source]
+

Bases: GenericServer

+
+
+fileexists(rel_path)[source]
+

Returns True if rel_path exists on server

+
+ +
+
+getdirectory(rel_path, full_path)[source]
+
+ +
+
+getfile(rel_path, full_path)[source]
+

Get file from rel_path on server and place in location full_path on client; +fail if full_path already exists on client. Return True if successful.

+
+ +
+ +
+
+

CIME.Servers.svn module

+

SVN Server class. Interact with a server using SVN protocol

+
+
+class CIME.Servers.svn.SVN(address, user='', passwd='')[source]
+

Bases: GenericServer

+
+
+fileexists(rel_path)[source]
+

Returns True if rel_path exists on server

+
+ +
+
+getdirectory(rel_path, full_path)[source]
+
+ +
+
+getfile(rel_path, full_path)[source]
+

Get file from rel_path on server and place in location full_path on client; +fail if full_path already exists on client. Return True if successful.

+
+ +
+ +
+
+

CIME.Servers.wget module

+

WGET Server class. Interact with a server using WGET protocol

+
+
+class CIME.Servers.wget.WGET(address, user='', passwd='')[source]
+

Bases: GenericServer

+
+
+fileexists(rel_path)[source]
+

Returns True if rel_path exists on server

+
+ +
+
+getdirectory(rel_path, full_path)[source]
+
+ +
+
+getfile(rel_path, full_path)[source]
+

Get file from rel_path on server and place in location full_path on client; +fail if full_path already exists on client. Return True if successful.

+
+ +
+
+classmethod wget_login(address, user='', passwd='')[source]
+
+ +
+ +
+
+

Module contents

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.SystemTests.html b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.SystemTests.html new file mode 100644 index 00000000000..8ce044099c5 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.SystemTests.html @@ -0,0 +1,1277 @@ + + + + + + + CIME.SystemTests package — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

CIME.SystemTests package

+
+

Subpackages

+ +
+
+

Submodules

+
+
+

CIME.SystemTests.dae module

+

Implementation of the CIME data assimilation test: +Compares a standard run with a run broken into two data assimilation cycles. +Runs a simple DA script on each cycle which performs checks but does not +change any model state (restart files). Compares answers of the two runs.

+
+
+class CIME.SystemTests.dae.DAE(case, **kwargs)[source]
+

Bases: SystemTestsCompareTwo

+

Implementation of the CIME data assimilation test: +Compares standard run with a run broken into two data assimilation cycles. +Runs a simple DA script on each cycle which performs checks but does not +change any model state (restart files). Compares answers of two runs. +Refers to a faux data assimilation script in the +cime/scripts/data_assimilation directory

+
+
+run_phase()[source]
+

Runs both phases of the two-phase test and compares their results +If success_change is True, success requires some files to be different

+
+ +
+ +
+
+

CIME.SystemTests.eri module

+

CIME ERI test. This class inherits from SystemTestsCommon.

+
+
+class CIME.SystemTests.eri.ERI(case, **kwargs)[source]
+

Bases: SystemTestsCommon

+
+
+run_phase()[source]
+

This is the default run phase implementation, it just does an individual run. +This is the subclass’ extension point if they need to define a custom run phase.

+

PLEASE THROW AN EXCEPTION ON FAIL

+
+ +
+ +
+
+

CIME.SystemTests.erio module

+

ERIO tests restart with different PIO methods

+

This class inherits from SystemTestsCommon

+
+
+class CIME.SystemTests.erio.ERIO(case, **kwargs)[source]
+

Bases: SystemTestsCommon

+
+
+run_phase()[source]
+

This is the default run phase implementation, it just does an individual run. +This is the subclass’ extension point if they need to define a custom run phase.

+

PLEASE THROW AN EXCEPTION ON FAIL

+
+ +
+ +
+
+

CIME.SystemTests.erp module

+

CIME ERP test. This class inherits from RestartTest

+

This is a pe-counts hybrid (OpenMP/MPI) restart bfb test from +startup. This is just like an ERS test, but the pe-counts/thread +counts are modified on restart. +(1) Do an initial run with pes set up out of the box (suffix base) +(2) Do a restart test with half the number of tasks and threads (suffix rest)

+
+
+class CIME.SystemTests.erp.ERP(case, **kwargs)[source]
+

Bases: RestartTest

+
+ +
+
+

CIME.SystemTests.err module

+

CIME ERR test. This class inherits from ERS. +ERR tests short-term archiving and restart capabilities.

+
+
+class CIME.SystemTests.err.ERR(case, **kwargs)[source]
+

Bases: RestartTest

+
+ +
+
+

CIME.SystemTests.erri module

+

CIME ERRI test. This class inherits from ERR. +ERRI tests short-term archiving and restart capabilities with “incomplete” (unzipped) log files.

+
+
+class CIME.SystemTests.erri.ERRI(case, **kwargs)[source]
+

Bases: ERR

+
+ +
+
+

CIME.SystemTests.ers module

+

CIME restart test. This class inherits from SystemTestsCommon.

+
+
+class CIME.SystemTests.ers.ERS(case, **kwargs)[source]
+

Bases: SystemTestsCommon

+
+
+run_phase()[source]
+

This is the default run phase implementation, it just does an individual run. +This is the subclass’ extension point if they need to define a custom run phase.

+

PLEASE THROW AN EXCEPTION ON FAIL

+
+ +
+ +
+
+

CIME.SystemTests.ers2 module

+

CIME restart test 2. This class inherits from SystemTestsCommon.

+
+
+class CIME.SystemTests.ers2.ERS2(case, **kwargs)[source]
+

Bases: SystemTestsCommon

+
+
+run_phase()[source]
+

This is the default run phase implementation, it just does an individual run. +This is the subclass’ extension point if they need to define a custom run phase.

+

PLEASE THROW AN EXCEPTION ON FAIL

+
+ +
+ +
+
+

CIME.SystemTests.ert module

+

CIME production restart test. This class inherits from SystemTestsCommon. +Exact restart from startup, default 2 months + 1 month.

+
+
+class CIME.SystemTests.ert.ERT(case, **kwargs)[source]
+

Bases: SystemTestsCommon

+
+
+run_phase()[source]
+

This is the default run phase implementation, it just does an individual run. +This is the subclass’ extension point if they need to define a custom run phase.

+

PLEASE THROW AN EXCEPTION ON FAIL

+
+ +
+ +
+
+

CIME.SystemTests.funit module

+

CIME FUNIT test. This class inherits from SystemTestsCommon. It runs +the fortran unit tests; grid and compset are ignored.

+
+
+class CIME.SystemTests.funit.FUNIT(case, **kwargs)[source]
+

Bases: SystemTestsCommon

+
+
+build_phase(sharedlib_only=False, model_only=False)[source]
+

This is the default build phase implementation, it just does an individual build. +This is the subclass’ extension point if they need to define a custom build +phase.

+

PLEASE THROW EXCEPTION ON FAIL

+
+ +
+
+get_test_spec_dir()[source]
+

Override this to change what gets tested.

+
+ +
+
+run_phase()[source]
+

This is the default run phase implementation, it just does an individual run. +This is the subclass’ extension point if they need to define a custom run phase.

+

PLEASE THROW AN EXCEPTION ON FAIL

+
+ +
+ +
+
+

CIME.SystemTests.homme module

+
+
+class CIME.SystemTests.homme.HOMME(case, **kwargs)[source]
+

Bases: HommeBase

+
+ +
+
+

CIME.SystemTests.hommebaseclass module

+

CIME HOMME test. This class inherits from SystemTestsCommon

+
+
+class CIME.SystemTests.hommebaseclass.HommeBase(case, **kwargs)[source]
+

Bases: SystemTestsCommon

+
+
+build_phase(sharedlib_only=False, model_only=False)[source]
+

This is the default build phase implementation, it just does an individual build. +This is the subclass’ extension point if they need to define a custom build +phase.

+

PLEASE THROW EXCEPTION ON FAIL

+
+ +
+
+run_phase()[source]
+

This is the default run phase implementation, it just does an individual run. +This is the subclass’ extension point if they need to define a custom run phase.

+

PLEASE THROW AN EXCEPTION ON FAIL

+
+ +
+ +
+
+

CIME.SystemTests.hommebfb module

+
+
+class CIME.SystemTests.hommebfb.HOMMEBFB(case, **kwargs)[source]
+

Bases: HommeBase

+
+ +
+
+

CIME.SystemTests.icp module

+

CIME ICP test. This class inherits from SystemTestsCommon.

+
+
+class CIME.SystemTests.icp.ICP(case, **kwargs)[source]
+

Bases: SystemTestsCommon

+
+
+build_phase(sharedlib_only=False, model_only=False)[source]
+

This is the default build phase implementation, it just does an individual build. +This is the subclass’ extension point if they need to define a custom build +phase.

+

PLEASE THROW EXCEPTION ON FAIL

+
+ +
+
+run_phase()[source]
+

This is the default run phase implementation, it just does an individual run. +This is the subclass’ extension point if they need to define a custom run phase.

+

PLEASE THROW AN EXCEPTION ON FAIL

+
+ +
+ +
+
+

CIME.SystemTests.irt module

+

Implementation of the CIME IRT (Interim Restart Test). +This tests the model’s restart capability as well as the short-term archiver’s interim restart capability.

+
  1. Do a Run of length N with restart at N/2 and DOUT_S_SAVE_INTERIM_RESTART set to TRUE
  2. Archive Run using ST archive tools
  3. Recover first interim restart to the case2 run directory
  4. Start case2 from restart and run to the end of case1
  5. Compare results.
  6. This test does not save or compare history files in baselines.
+
+
+class CIME.SystemTests.irt.IRT(case, **kwargs)[source]
+

Bases: RestartTest

+
+ +
+
+

CIME.SystemTests.ldsta module

+

CIME last date short term archiver test. This class inherits from SystemTestsCommon. +It does a run without restarting, then runs the archiver with various last-date parameters. +The test verifies the archive directory contains the expected files.

+
+
+class CIME.SystemTests.ldsta.LDSTA(case, **kwargs)[source]
+

Bases: SystemTestsCommon

+
+
+run_phase()[source]
+

This is the default run phase implementation, it just does an individual run. +This is the subclass’ extension point if they need to define a custom run phase.

+

PLEASE THROW AN EXCEPTION ON FAIL

+
+ +
+ +
+
+

CIME.SystemTests.mcc module

+

Implementation of the CIME MCC test: Compares ensemble methods.

+
+
This does two runs: In the first, we run a three-member ensemble using the
MULTI_DRIVER capability; then we run a second, single-instance case and compare the two.

+
+
+
+
+class CIME.SystemTests.mcc.MCC(case, **kwargs)[source]
+

Bases: SystemTestsCompareTwo

+
+ +
+
+

CIME.SystemTests.mvk module

+

Multivariate test for climate reproducibility using the Kolmogorov-Smirnov (K-S) +test. The CESM/E3SM model’s multi-instance capability is used to +conduct an ensemble of simulations starting from different initial conditions.

+

This class inherits from SystemTestsCommon.

+
+
+class CIME.SystemTests.mvk.MVK(case, **kwargs)[source]
+

Bases: SystemTestsCommon

+
+
+build_phase(sharedlib_only=False, model_only=False)[source]
+

This is the default build phase implementation, it just does an individual build. +This is the subclass’ extension point if they need to define a custom build +phase.

+

PLEASE THROW EXCEPTION ON FAIL

+
+ +
+ +
+
+

CIME.SystemTests.nck module

+

Implementation of the CIME NCK test: Tests multi-instance

+

This does two runs: In the first, we use one instance per component; in the +second, we use two instances per component. NTASKS is changed in each run so +that the number of tasks per instance is the same for both runs.

+

Lay all of the components out sequentially

+
+
+class CIME.SystemTests.nck.NCK(case, **kwargs)[source]
+

Bases: SystemTestsCompareTwo

+
+ +
+
+

CIME.SystemTests.ncr module

+

Implementation of the CIME NCR test. This class inherits from SystemTestsCommon

+

Build two executables for this test: +The first runs two instances of each component with the same total number of tasks, +and runs each of them concurrently. +The second is a default build.

+

NOTE: This is currently untested, and may not be working properly

+
+
+class CIME.SystemTests.ncr.NCR(case, **kwargs)[source]
+

Bases: SystemTestsCompareTwo

+
+ +
+
+

CIME.SystemTests.nodefail module

+

CIME restart upon failed node test.

+
+
+class CIME.SystemTests.nodefail.NODEFAIL(case, **kwargs)[source]
+

Bases: ERS

+
+
+run_phase()[source]
+

This is the default run phase implementation, it just does an individual run. +This is the subclass’ extension point if they need to define a custom run phase.

+

PLEASE THROW AN EXCEPTION ON FAIL

+
+ +
+ +
+
+

CIME.SystemTests.pea module

+

Implementation of the CIME PEA test.

+

Builds, runs, and compares a single-processor MPI model to a model built using mpi-serial. +(1) Do a run with the default MPI library (suffix base) +(2) Do a run with mpi-serial (suffix mpi-serial)

+
+
+class CIME.SystemTests.pea.PEA(case, **kwargs)[source]
+

Bases: SystemTestsCompareTwo

+
+ +
+
+

CIME.SystemTests.pem module

+

Implementation of the CIME PEM test: Tests bfb with different MPI +processor counts

+

This is just like running a smoke test twice - but the pe-counts +are modified the second time. +(1) Run with pes set up out of the box (suffix base) +(2) Run with half the number of tasks (suffix modpes)

+
+
+class CIME.SystemTests.pem.PEM(case, **kwargs)[source]
+

Bases: SystemTestsCompareTwo

+
+ +
+
+

CIME.SystemTests.pet module

+

Implementation of the CIME PET test. This class inherits from SystemTestsCommon

+

This is an openmp test to determine that changing thread counts does not change answers. +(1) do an initial run where all components are threaded by default (suffix: base) +(2) do another initial run with nthrds=1 for all components (suffix: single_thread)

+
+
+class CIME.SystemTests.pet.PET(case, **kwargs)[source]
+

Bases: SystemTestsCompareTwo

+
+ +
+
+

CIME.SystemTests.pfs module

+

CIME performance test. This class inherits from SystemTestsCommon.

+

20-day performance test; no restart files written.

+
+
+class CIME.SystemTests.pfs.PFS(case, **kwargs)[source]
+

Bases: SystemTestsCommon

+
+
+run_phase()[source]
+

This is the default run phase implementation, it just does an individual run. +This is the subclass’ extension point if they need to define a custom run phase.

+

PLEASE THROW AN EXCEPTION ON FAIL

+
+ +
+ +
+
+

CIME.SystemTests.pgn module

+

Perturbation Growth New (PGN) - The CESM/ACME model’s +multi-instance capability is used to conduct an ensemble +of simulations starting from different initial conditions.

+

This class inherits from SystemTestsCommon.

+
+
+class CIME.SystemTests.pgn.PGN(case, **kwargs)[source]
+

Bases: SystemTestsCommon

+
+
+build_phase(sharedlib_only=False, model_only=False)[source]
+

This is the default build phase implementation, it just does an individual build. +This is the subclass’ extension point if they need to define a custom build +phase.

+

PLEASE THROW EXCEPTION ON FAIL

+
+ +
+
+get_var_list()[source]
+

Get variable list for pergro specific output vars

+
+ +
+
+run_phase()[source]
+

This is the default run phase implementation, it just does an individual run. +This is the subclass’ extension point if they need to define a custom run phase.

+

PLEASE THROW AN EXCEPTION ON FAIL

+
+ +
+ +
+
+

CIME.SystemTests.pre module

+

Implementation of the CIME pause/resume test: Tests having driver +‘pause’ (write cpl restart file) and ‘resume’ (read cpl restart file) +possibly changing the restart file. Compared to non-pause/resume run. +Test can also be run with other component combinations. +Test requires DESP component to function correctly.

+
+
+class CIME.SystemTests.pre.PRE(case, **kwargs)[source]
+

Bases: SystemTestsCompareTwo

+

Implementation of the CIME pause/resume test: Tests having driver +‘pause’ (write cpl and/or other restart file(s)) and ‘resume’ +(read cpl and/or other restart file(s)) possibly changing restart +file. Compare to non-pause/resume run.

+
+
+run_phase()[source]
+

Runs both phases of the two-phase test and compares their results +If success_change is True, success requires some files to be different

+
+ +
+ +
+
+

CIME.SystemTests.rep module

+

Implementation of the CIME REP test

+

This test verifies that two identical runs give bit-for-bit results

+
+
+class CIME.SystemTests.rep.REP(case, **kwargs)[source]
+

Bases: SystemTestsCompareTwo

+
+ +
+
+

CIME.SystemTests.restart_tests module

+

Abstract class for restart tests

+
+
+class CIME.SystemTests.restart_tests.RestartTest(case, separate_builds, run_two_suffix='restart', run_one_description='initial', run_two_description='restart', multisubmit=False, **kwargs)[source]
+

Bases: SystemTestsCompareTwo

+
+ +
+
+

CIME.SystemTests.reuseinitfiles module

+

Implementation of the CIME REUSEINITFILES test

+

This test does two runs:

+
  1. A standard initial run
  2. A run that reuses the init-generated files from run (1).
+

This verifies that it works to reuse these init-generated files, and that you can get +bit-for-bit results by doing so. This is important because these files are typically +reused whenever a user reruns an initial case.

+
+
+class CIME.SystemTests.reuseinitfiles.REUSEINITFILES(case, **kwargs)[source]
+

Bases: SystemTestsCompareTwo

+
+ +
+
+

CIME.SystemTests.seq module

+

Sequencing bfb test (10-day seq, conc tests).

+
+
+class CIME.SystemTests.seq.SEQ(case, **kwargs)[source]
+

Bases: SystemTestsCompareTwo

+
+ +
+
+

CIME.SystemTests.sms module

+

CIME smoke test. This class inherits from SystemTestsCommon. +It does a startup run with restarts off and optionally compares to or generates baselines.

+
+
+class CIME.SystemTests.sms.SMS(case, **kwargs)[source]
+

Bases: SystemTestsCommon

+
+ +
+
+

CIME.SystemTests.system_tests_common module

+

Base class for CIME system tests

+
+
+class CIME.SystemTests.system_tests_common.FakeTest(case, expected=None, **kwargs)[source]
+

Bases: SystemTestsCommon

+

Inheritors of the FakeTest class are intended to test the code.

+

All members of the FakeTest class must +have names beginning with “TEST”; this is so that the find_system_test +in utils.py will work with these classes.
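The naming convention above drives a name-based lookup. A minimal sketch of that idea follows; the stub classes and the lookup function here are illustrative stand-ins, and the real find_system_test in CIME's utils.py does considerably more:

```python
class FakeTest:               # minimal stand-in for the real base class
    pass

class TESTRUNPASS(FakeTest):  # discoverable: name begins with "TEST"
    pass

def find_system_test_sketch(name, namespace):
    """Name-based lookup sketch: only classes whose names begin with
    "TEST" are discoverable."""
    cls = namespace.get(name)
    if cls is None or not name.startswith("TEST"):
        raise ValueError(f"unknown system test: {name}")
    return cls

print(find_system_test_sketch("TESTRUNPASS", globals()).__name__)
```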

+
+
+build_phase(sharedlib_only=False, model_only=False)[source]
+

This is the default build phase implementation, it just does an individual build. +This is the subclass’ extension point if they need to define a custom build +phase.

+

PLEASE THROW EXCEPTION ON FAIL

+
+ +
+
+run_indv(suffix='base', st_archive=False, submit_resubmits=None, keep_init_generated_files=False)[source]
+

Perform an individual run. Raises an EXCEPTION on fail.

+

keep_init_generated_files: If False (the default), we remove the +init_generated_files subdirectory of the run directory before running the case. +This is usually what we want for tests, but some specific tests may want to leave +this directory in place, so can set this variable to True to do so.

+
+ +
+ +
+
+class CIME.SystemTests.system_tests_common.SystemTestsCommon(case, expected=None, **kwargs)[source]
+

Bases: object

+
+
+build(sharedlib_only=False, model_only=False, ninja=False, dry_run=False, separate_builds=False, skip_submit=False)[source]
+

Do NOT override this method, this method is the framework that +controls the build phase. build_phase is the extension point +that subclasses should use.

+
+ +
+
+build_indv(sharedlib_only=False, model_only=False)[source]
+

Perform an individual build

+
+ +
+
+build_phase(sharedlib_only=False, model_only=False)[source]
+

This is the default build phase implementation, it just does an individual build. +This is the subclass’ extension point if they need to define a custom build +phase.

+

PLEASE THROW EXCEPTION ON FAIL

+
+ +
+
+clean_build(comps=None)[source]
+
+ +
+
+compare_env_run(expected=None)[source]
+

Compare env_run file to original and warn about differences

+
+ +
+
+run(skip_pnl=False)[source]
+

Do NOT override this method, this method is the framework that controls +the run phase. run_phase is the extension point that subclasses should use.

+
+ +
+
+run_indv(suffix='base', st_archive=False, submit_resubmits=None, keep_init_generated_files=False)[source]
+

Perform an individual run. Raises an EXCEPTION on fail.

+

keep_init_generated_files: If False (the default), we remove the +init_generated_files subdirectory of the run directory before running the case. +This is usually what we want for tests, but some specific tests may want to leave +this directory in place, so can set this variable to True to do so.
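The cleanup behavior described for keep_init_generated_files can be sketched as follows. This is an illustration of the documented semantics only, not CIME's implementation; prepare_run_dir is a hypothetical helper name:

```python
import os
import shutil
import tempfile

def prepare_run_dir(rundir, keep_init_generated_files=False):
    """Unless told to keep it, remove the init_generated_files
    subdirectory of the run directory before the run."""
    sub = os.path.join(rundir, "init_generated_files")
    if not keep_init_generated_files and os.path.isdir(sub):
        shutil.rmtree(sub)
    return os.path.isdir(sub)   # True if the subdirectory survived

with tempfile.TemporaryDirectory() as rundir:
    os.makedirs(os.path.join(rundir, "init_generated_files"))
    print(prepare_run_dir(rundir, keep_init_generated_files=True))  # kept
    print(prepare_run_dir(rundir))                                  # removed
```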

+
+ +
+
+run_phase()[source]
+

This is the default run phase implementation, it just does an individual run. +This is the subclass’ extension point if they need to define a custom run phase.

+

PLEASE THROW AN EXCEPTION ON FAIL

+
+ +
+ +
+
+class CIME.SystemTests.system_tests_common.TESTBUILDFAIL(case, expected=None, **kwargs)[source]
+

Bases: TESTRUNPASS

+
+
+build_phase(sharedlib_only=False, model_only=False)[source]
+

This is the default build phase implementation, it just does an individual build. +This is the subclass’ extension point if they need to define a custom build +phase.

+

PLEASE THROW EXCEPTION ON FAIL

+
+ +
+ +
+
+class CIME.SystemTests.system_tests_common.TESTBUILDFAILEXC(case, **kwargs)[source]
+

Bases: FakeTest

+
+ +
+
+class CIME.SystemTests.system_tests_common.TESTMEMLEAKFAIL(case, expected=None, **kwargs)[source]
+

Bases: FakeTest

+
+
+build_phase(sharedlib_only=False, model_only=False)[source]
+

This is the default build phase implementation, it just does an individual build. +This is the subclass’ extension point if they need to define a custom build +phase.

+

PLEASE THROW EXCEPTION ON FAIL

+
+ +
+ +
+
+class CIME.SystemTests.system_tests_common.TESTMEMLEAKPASS(case, expected=None, **kwargs)[source]
+

Bases: FakeTest

+
+
+build_phase(sharedlib_only=False, model_only=False)[source]
+

This is the default build phase implementation, it just does an individual build. +This is the subclass’ extension point if they need to define a custom build +phase.

+

PLEASE THROW EXCEPTION ON FAIL

+
+ +
+ +
+
+class CIME.SystemTests.system_tests_common.TESTRUNDIFF(case, expected=None, **kwargs)[source]
+

Bases: FakeTest

+

You can generate a diff with this test as follows: +1) Run the test and generate a baseline +2) Set the TESTRUNDIFF_ALTERNATE environment variable to TRUE +3) Re-run the same test from step 1 but do a baseline comparison instead of generation

+
+

3.a) This should give you a DIFF

+
+
+
+build_phase(sharedlib_only=False, model_only=False)[source]
+

This is the default build phase implementation, it just does an individual build. +This is the subclass’ extension point if they need to define a custom build +phase.

+

PLEASE THROW EXCEPTION ON FAIL

+
+ +
+ +
+
+class CIME.SystemTests.system_tests_common.TESTRUNDIFFRESUBMIT(case, expected=None, **kwargs)[source]
+

Bases: TESTRUNDIFF

+
+ +
+
+class CIME.SystemTests.system_tests_common.TESTRUNFAIL(case, expected=None, **kwargs)[source]
+

Bases: FakeTest

+
+
+build_phase(sharedlib_only=False, model_only=False)[source]
+

This is the default build phase implementation, it just does an individual build. +This is the subclass’ extension point if they need to define a custom build +phase.

+

PLEASE THROW EXCEPTION ON FAIL

+
+ +
+ +
+
+class CIME.SystemTests.system_tests_common.TESTRUNFAILEXC(case, expected=None, **kwargs)[source]
+

Bases: TESTRUNPASS

+
+
+run_phase()[source]
+

This is the default run phase implementation, it just does an individual run. +This is the subclass’ extension point if they need to define a custom run phase.

+

PLEASE THROW AN EXCEPTION ON FAIL

+
+ +
+ +
+
+class CIME.SystemTests.system_tests_common.TESTRUNFAILRESET(case, expected=None, **kwargs)[source]
+

Bases: TESTRUNFAIL

+

This fake test can fail for two reasons: +1. As in the TESTRUNFAIL test: If the environment variable TESTRUNFAIL_PASS is not set +2. Even if that environment variable is set, it will fail if STOP_N differs from the

+
+

original value

+
+

The purpose of (2) is to ensure that the test’s values get properly reset if the test is +rerun after an initial failure.

+
+
+run_indv(suffix='base', st_archive=False, submit_resubmits=None, keep_init_generated_files=False)[source]
+

Perform an individual run. Raises an EXCEPTION on fail.

+

keep_init_generated_files: If False (the default), we remove the +init_generated_files subdirectory of the run directory before running the case. +This is usually what we want for tests, but some specific tests may want to leave +this directory in place, so can set this variable to True to do so.

+
+ +
+ +
+
+class CIME.SystemTests.system_tests_common.TESTRUNPASS(case, expected=None, **kwargs)[source]
+

Bases: FakeTest

+
+
+build_phase(sharedlib_only=False, model_only=False)[source]
+

This is the default build phase implementation, it just does an individual build. +This is the subclass’ extension point if they need to define a custom build +phase.

+

PLEASE THROW EXCEPTION ON FAIL

+
+ +
+ +
+
+class CIME.SystemTests.system_tests_common.TESTRUNSLOWPASS(case, expected=None, **kwargs)[source]
+

Bases: FakeTest

+
+
+build_phase(sharedlib_only=False, model_only=False)[source]
+

This is the default build phase implementation, it just does an individual build. +This is the subclass’ extension point if they need to define a custom build +phase.

+

PLEASE THROW EXCEPTION ON FAIL

+
+ +
+ +
+
+class CIME.SystemTests.system_tests_common.TESTRUNSTARCFAIL(case, expected=None, **kwargs)[source]
+

Bases: TESTRUNPASS

+
+ +
+
+class CIME.SystemTests.system_tests_common.TESTRUNUSERXMLCHANGE(case, expected=None, **kwargs)[source]
+

Bases: FakeTest

+
+
+build_phase(sharedlib_only=False, model_only=False)[source]
+

This is the default build phase implementation, it just does an individual build. +This is the subclass’ extension point if they need to define a custom build +phase.

+

PLEASE THROW EXCEPTION ON FAIL

+
+ +
+
+run_phase()[source]
+

This is the default run phase implementation, it just does an individual run. +This is the subclass’ extension point if they need to define a custom run phase.

+

PLEASE THROW AN EXCEPTION ON FAIL

+
+ +
+ +
+
+class CIME.SystemTests.system_tests_common.TESTTESTDIFF(case, expected=None, **kwargs)[source]
+

Bases: FakeTest

+
+
+build_phase(sharedlib_only=False, model_only=False)[source]
+

This is the default build phase implementation, it just does an individual build. +This is the subclass’ extension point if they need to define a custom build +phase.

+

PLEASE THROW EXCEPTION ON FAIL

+
+ +
+
+run_phase()[source]
+

This is the default run phase implementation, it just does an individual run. +This is the subclass’ extension point if they need to define a custom run phase.

+

PLEASE THROW AN EXCEPTION ON FAIL

+
+ +
+ +
+
+CIME.SystemTests.system_tests_common.fix_single_exe_case(case)[source]
+

Fixes cases created with --single-exe.

+

When tests are created using --single-exe, the test_scheduler will set +BUILD_COMPLETE to True, but some tests require calls to case.case_setup +which can reset BUILD_COMPLETE to False. This function will check if a +case was created with --single-exe and ensure BUILD_COMPLETE is True.

+
+
Returns:

True if the case required modification, otherwise False.

+
+
+
+ +
+
+CIME.SystemTests.system_tests_common.is_single_exe_case(case)[source]
+

Determines if the case was created with the --single-exe option.

+

If CASEROOT is not part of EXEROOT and the TEST variable is True, +then it’s safe to assume the case was created with ./create_test +and the --single-exe option.

+
+
Returns:

True if the case was created with --single-exe, otherwise False.

+
+
+
+ +
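The CASEROOT/EXEROOT check described above can be sketched as a small standalone function. This is a hypothetical illustration of the documented rule, not the actual CIME implementation; `StubCase` and its `get_value` accessor are stand-ins invented for the demo.

```python
class StubCase:
    """Hypothetical stand-in for a CIME case object."""
    def __init__(self, values):
        self._values = values

    def get_value(self, name):
        return self._values.get(name)


def is_single_exe_case_sketch(case):
    # The documented rule: if CASEROOT is not part of EXEROOT and TEST is
    # True, it is safe to assume the case came from ./create_test --single-exe.
    caseroot = case.get_value("CASEROOT")
    exeroot = case.get_value("EXEROOT")
    return caseroot not in exeroot and bool(case.get_value("TEST"))


# A single-exe test case: EXEROOT lives outside CASEROOT.
single = StubCase({"CASEROOT": "/scratch/case1",
                   "EXEROOT": "/scratch/shared/bld", "TEST": True})
# A regular case: EXEROOT nested under CASEROOT.
regular = StubCase({"CASEROOT": "/scratch/case2",
                    "EXEROOT": "/scratch/case2/bld", "TEST": True})
```

The substring test mirrors the docstring's "CASEROOT is not part of EXEROOT" wording; a path-aware comparison would be more robust in practice.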
+
+CIME.SystemTests.system_tests_common.perf_check_for_memory_leak(case, tolerance)[source]
+
+ +
+
+

CIME.SystemTests.system_tests_compare_n module

+

Base class for CIME system tests that involve doing multiple runs and comparing the base run (index=0) +with the subsequent runs (indices=1..N-1).

+

NOTE: Below is the flow of a multisubmit test. +Non-batch: +case_submit -> case_run # PHASE 1

+
+

-> case_run # PHASE 2 +… +-> case_run # PHASE N

+
+

batch: +case_submit -> case_run # PHASE 1 +case_run -> case_submit +case_submit -> case_run # PHASE 2 +… +case_submit -> case_run # PHASE N

+
+
In the __init__ method for your test, you MUST call

SystemTestsCompareN.__init__

+
+
+

See the documentation of that method for details.

+

Classes that inherit from this are REQUIRED to implement the following method:

+
    +
  1. _case_setup +This method will be called to set up case i, where i==0 corresponds to the base case +and i=={1,..N-1} corresponds to subsequent runs to be compared with the base.
+

In addition, they MAY require the following methods:

+
    +
  1. _common_setup +This method will be called to set up all cases. It should contain any setup +that’s needed in all cases. This is called before _case_setup_config

  2. _case_custom_prerun_action(self, i): +Use this to do arbitrary actions immediately before running case i

  3. _case_custom_postrun_action(self, i): +Use this to do arbitrary actions immediately after running case i
+
+
+class CIME.SystemTests.system_tests_compare_n.SystemTestsCompareN(case, N=2, separate_builds=False, run_suffixes=None, run_descriptions=None, multisubmit=False, ignore_fieldlist_diffs=False, dry_run=False, **kwargs)[source]
+

Bases: SystemTestsCommon

+
+
+build_phase(sharedlib_only=False, model_only=False)[source]
+

This is the default build phase implementation, it just does an individual build. +This is the subclass’ extension point if they need to define a custom build +phase.

+

PLEASE THROW EXCEPTION ON FAIL

+
+ +
+
+run_phase(success_change=False)[source]
+

Runs all phases of the N-phase test and compares base results with subsequent ones +If success_change is True, success requires some files to be different

+
+ +
+ +
+
+

CIME.SystemTests.system_tests_compare_two module

+

Base class for CIME system tests that involve doing two runs and comparing their +output.

+

NOTE: Below is the flow of a multisubmit test. +Non-batch: +case_submit -> case_run # PHASE 1

+
+

-> case_run # PHASE 2

+
+

batch: +case_submit -> case_run # PHASE 1 +case_run -> case_submit +case_submit -> case_run # PHASE 2

+
+
In the __init__ method for your test, you MUST call

SystemTestsCompareTwo.__init__

+
+
+

See the documentation of that method for details.

+

Classes that inherit from this are REQUIRED to implement the following methods:

+
    +
  1. _case_one_setup +This method will be called to set up case 1, the “base” case

  2. _case_two_setup +This method will be called to set up case 2, the “test” case
+

In addition, they MAY require the following methods:

+
    +
  1. _common_setup +This method will be called to set up both cases. It should contain any setup +that’s needed in both cases. This is called before _case_one_setup or +_case_two_setup.

  2. _case_one_custom_prerun_action(self): +Use this to do arbitrary actions immediately before running case one

  3. _case_two_custom_prerun_action(self): +Use this to do arbitrary actions immediately before running case two

  4. _case_one_custom_postrun_action(self): +Use this to do arbitrary actions immediately after running case one

  5. _case_two_custom_postrun_action(self): +Use this to do arbitrary actions immediately after running case two
+
+
+class CIME.SystemTests.system_tests_compare_two.SystemTestsCompareTwo(case, separate_builds=False, run_two_suffix='test', run_one_description='', run_two_description='', multisubmit=False, ignore_fieldlist_diffs=False, case_two_keep_init_generated_files=False, dry_run=False, **kwargs)[source]
+

Bases: SystemTestsCommon

+
+
+build_phase(sharedlib_only=False, model_only=False)[source]
+

This is the default build phase implementation, it just does an individual build. +This is the subclass’ extension point if they need to define a custom build +phase.

+

PLEASE THROW EXCEPTION ON FAIL

+
+ +
+
+copy_case1_restarts_to_case2()[source]
+

Makes a copy (or symlink) of restart files and related files +(necessary history files, rpointer files) from case1 to case2.

+

This is not done automatically, but can be called by individual +tests where case2 does a continue_run using case1’s restart +files.

+
+ +
+
+run_phase(success_change=False)[source]
+

Runs both phases of the two-phase test and compares their results +If success_change is True, success requires some files to be different

+
+ +
+ +
+
+

CIME.SystemTests.tsc module

+

Solution reproducibility test based on time-step convergence +The CESM/ACME model’s +multi-instance capability is used to conduct an ensemble +of simulations starting from different initial conditions.

+

This class inherits from SystemTestsCommon.

+
+
+class CIME.SystemTests.tsc.TSC(case, **kwargs)[source]
+

Bases: SystemTestsCommon

+
+
+build_phase(sharedlib_only=False, model_only=False)[source]
+

This is the default build phase implementation, it just does an individual build. +This is the subclass’ extension point if they need to define a custom build +phase.

+

PLEASE THROW EXCEPTION ON FAIL

+
+ +
+
+run_phase()[source]
+

This is the default run phase implementation, it just does an individual run. +This is the subclass’ extension point if they need to define a custom run phase.

+

PLEASE THROW AN EXCEPTION ON FAIL

+
+ +
+ +
+
+

Module contents

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.SystemTests.test_utils.html b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.SystemTests.test_utils.html new file mode 100644 index 00000000000..af6038dc379 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.SystemTests.test_utils.html @@ -0,0 +1,215 @@ + + + + + + + CIME.SystemTests.test_utils package — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

CIME.SystemTests.test_utils package

+
+

Submodules

+
+
+

CIME.SystemTests.test_utils.user_nl_utils module

+

This module contains functions for working with user_nl files in system tests.

+
+
+CIME.SystemTests.test_utils.user_nl_utils.append_to_user_nl_files(caseroot, component, contents)[source]
+

Append the string(s) given by ‘contents’ to the end of each user_nl file for +the given component (there may be multiple such user_nl files in the case of +a multi-instance test).

+

Also puts new lines before and after the appended text - so ‘contents’ +does not need to contain a trailing new line (but it’s also okay if it +does).

+
+
Args:

caseroot (str): Full path to the case directory

+
+
component (str): Name of component (e.g., ‘clm’). This is used to

determine which user_nl files are appended to. For example, for +component=’clm’, this function will operate on all user_nl files +matching the pattern ‘user_nl_clm*’. (We do a wildcard match to +handle multi-instance tests.)

+
+
contents (str or list-like): Contents to append to the end of each user_nl

file. If list-like, each item will be appended on its own line.

+
+
+
+
+
+ +
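The wildcard-append behaviour described above can be sketched as a standalone helper. This is a hedged illustration of the documented semantics, not the actual CIME function; the demo case directory and instance file names are invented for the example.

```python
import glob
import os
import tempfile


def append_to_user_nl_files_sketch(caseroot, component, contents):
    """Append contents (str or list of str) to every user_nl_<component>* file.

    Hypothetical stand-in for CIME's helper; the wildcard match is what lets
    multi-instance tests (user_nl_clm_0001, user_nl_clm_0002, ...) all be hit.
    """
    if isinstance(contents, str):
        contents = [contents]
    paths = glob.glob(os.path.join(caseroot, "user_nl_%s*" % component))
    for path in paths:
        with open(path, "a") as fh:
            # Newlines before and after the appended text, as documented,
            # so callers need not supply a trailing newline themselves.
            fh.write("\n" + "\n".join(contents) + "\n")
    return sorted(paths)


# Demo against a throwaway case directory with two clm instances.
caseroot = tempfile.mkdtemp()
for inst in ("user_nl_clm_0001", "user_nl_clm_0002"):
    with open(os.path.join(caseroot, inst), "w") as fh:
        fh.write("! defaults\n")
touched = append_to_user_nl_files_sketch(caseroot, "clm", "hist_nhtfrq = -24")
```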
+
+

Module contents

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.Tools.html b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.Tools.html new file mode 100644 index 00000000000..73a004ca684 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.Tools.html @@ -0,0 +1,244 @@ + + + + + + + CIME.Tools package — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

CIME.Tools package

+
+

Submodules

+
+
+

CIME.Tools.generate_cylc_workflow module

+

Generates a cylc workflow file for the case. See https://cylc.github.io for details about cylc

+
+
+CIME.Tools.generate_cylc_workflow.cylc_batch_job_template(job, jobname, case, ensemble)[source]
+
+ +
+
+CIME.Tools.generate_cylc_workflow.cylc_get_case_path_string(case, ensemble)[source]
+
+ +
+
+CIME.Tools.generate_cylc_workflow.cylc_get_ensemble_first_and_last(case, ensemble)[source]
+
+ +
+
+CIME.Tools.generate_cylc_workflow.cylc_script_job_template(job, case, ensemble)[source]
+
+ +
+
+CIME.Tools.generate_cylc_workflow.parse_command_line(args, description)[source]
+
+ +
+
+

CIME.Tools.standard_script_setup module

+

Encapsulate the importing of python utils and logging setup, things +that every script should do.

+
+
+CIME.Tools.standard_script_setup.check_minimum_python_version(major, minor)[source]
+

Check your python version.

+
>>> check_minimum_python_version(sys.version_info[0], sys.version_info[1])
+>>>
+
+
+
+ +
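A plausible implementation sketch of such a version guard, exiting with a clear message on an interpreter that is too old. This is not necessarily the exact CIME code; it only illustrates the documented contract, where checking against the running version (as in the doctest above) succeeds silently.

```python
import sys


def check_minimum_python_version(major, minor):
    """Exit with a helpful message if the running Python is older than (major, minor)."""
    actual = sys.version_info
    if (actual[0], actual[1]) < (major, minor):
        sys.exit("Python %d.%d or newer is required, found %d.%d"
                 % (major, minor, actual[0], actual[1]))


# Mirrors the doctest above: checking against the running version passes.
check_minimum_python_version(sys.version_info[0], sys.version_info[1])
```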
+
+

CIME.Tools.testreporter module

+

Simple script to populate CESM test database with test results.

+
+
+CIME.Tools.testreporter.get_testreporter_xml(testroot, testid, tagname, testtype)[source]
+
+ +
+
+CIME.Tools.testreporter.parse_command_line(args)[source]
+
+ +
+
+

Module contents

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.XML.html b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.XML.html new file mode 100644 index 00000000000..43e218853b1 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.XML.html @@ -0,0 +1,2110 @@ + + + + + + + CIME.XML package — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

CIME.XML package

+
+

Submodules

+
+
+

CIME.XML.archive module

+

Interface to the archive.xml file. This class inherits from GenericXML.py

+
+
+class CIME.XML.archive.Archive(infile=None, files=None)[source]
+

Bases: ArchiveBase

+
+
+get_all_config_archive_files(files)[source]
+

Returns the list of ARCHIVE_SPEC_FILES that exist on disk as defined in config_files.xml

+
+ +
+
+setup(env_archive, components, files=None)[source]
+
+ +
+ +
+
+

CIME.XML.archive_base module

+

Base class for archive files. This class inherits from generic_xml.py

+
+
+class CIME.XML.archive_base.ArchiveBase(infile=None, schema=None, root_name_override=None, root_attrib_override=None, read_only=True)[source]
+

Bases: GenericXML

+
+
+exclude_testing(compname)[source]
+

Checks if component should be excluded from testing.

+
+ +
+
+get_all_hist_files(casename, model, from_dir, suffix='', ref_case=None)[source]
+

Gets all history files in directory from_dir with suffix (if provided). +Ignores files with ref_case in the name if ref_case is provided.

+
+ +
+
+get_entry(compname)[source]
+

Returns an xml node corresponding to compname in comp_archive_spec

+
+ +
+
+get_entry_attributes(compname)[source]
+
+ +
+
+get_entry_value(name, archive_entry)[source]
+

get the xml text associated with name under root archive_entry +returns None if no entry is found, expects only one entry

+
+ +
+
+get_hist_file_ext_regexes(archive_entry)[source]
+

get the xml text associated with each of the hist_file_ext_regex entries +based at root archive_entry (root is based on component name) +returns a list of text entries or +an empty list if no entries are found

+
+ +
+
+get_hist_file_extensions(archive_entry)[source]
+

get the xml text associated with each of the hist_file_extensions +based at root archive_entry (root is based on component name) +returns a list of text entries or +an empty list if no entries are found

+
+ +
+
+get_latest_hist_files(casename, model, from_dir, suffix='', ref_case=None)[source]
+

get the most recent history files in directory from_dir with suffix if provided

+
+ +
+
+get_rest_file_extensions(archive_entry)[source]
+

get the xml text associated with each of the rest_file_extensions +based at root archive_entry (root is based on component name) +returns a list of text entries or +an empty list if no entries are found

+
+ +
+ +
+
+

CIME.XML.batch module

+

Interface to the config_batch.xml file. This class inherits from GenericXML.py

+

The batch_system type=”foo” blocks define most things. Machine-specific overrides +can be defined by providing a batch_system MACH=”mach” block.

+
+
+class CIME.XML.batch.Batch(batch_system=None, machine=None, infile=None, files=None, extra_machines_dir=None)[source]
+

Bases: GenericXML

+
+
+get_batch_jobs()[source]
+

Return a list of jobs with the first element the name of the case script +and the second a dict of qualifiers for the job

+
+ +
+
+get_batch_system()[source]
+

Return the name of the batch system

+
+ +
+
+get_optional_batch_node(nodename, attributes=None)[source]
+

Return data on a node for a batch system

+
+ +
+
+get_value(name, attribute=None, resolved=True, subgroup=None)[source]
+

Get value of fields in the config_batch.xml file

+
+ +
+
+set_batch_system(batch_system, machine=None)[source]
+

Sets the batch system block in the Batch object

+
+ +
+ +
+
+

CIME.XML.component module

+

Interface to the config_component.xml files. This class inherits from EntryID.py

+
+
+class CIME.XML.component.Component(infile, comp_class)[source]
+

Bases: EntryID

+
+
+get_description(compsetname)[source]
+
+ +
+
+get_forcing_description(compsetname)[source]
+
+ +
+
+get_valid_model_components()[source]
+

return a list of all possible valid generic (e.g. atm, clm, …) model components +from the entries in the model CONFIG_CPL_FILE

+
+ +
+
+get_value(name, attribute=None, resolved=False, subgroup=None)[source]
+

Get a value for entry with id attribute vid. +or from the values field if the attribute argument is provided +and matches

+
+ +
+
+print_values()[source]
+

print values for help and description in target config_component.xml file

+
+ +
+
+return_values()[source]
+

return a list of hashes from target config_component.xml file +This routine is used by external tools in https://github.com/NCAR/CESM_xml2html

+
+ +
+ +
+
+

CIME.XML.compsets module

+

Common interface to XML files which follow the compsets format.

+
+
+class CIME.XML.compsets.Compsets(infile=None, files=None)[source]
+

Bases: GenericXML

+
+
+get_compset_longnames()[source]
+
+ +
+
+get_compset_match(name)[source]
+

Science support is used in CESM to determine if this compset and grid +is scientifically supported. science_support is returned as an array of grids for this compset

+
+ +
+
+get_compset_var_settings(compset, grid)[source]
+

Variables can be set in config_compsets.xml in entry id settings with compset and grid attributes +find and return id value pairs here

+
+ +
+
+get_value(name, attribute=None, resolved=False, subgroup=None)[source]
+

get_value is expected to be defined by the derived classes; if you get here, +the value was not found in the class.

+
+ +
+
+print_values(arg_help=True)[source]
+
+ +
+ +
+
+

CIME.XML.entry_id module

+

Common interface to XML files which follow the entry id format, +this is an abstract class and is expected to +be used by other XML interface modules and not directly.

+
+
+class CIME.XML.entry_id.EntryID(infile=None, schema=None, read_only=True)[source]
+

Bases: GenericXML

+
+
+add_elements_by_group(srcobj, attributes=None, infile=None)[source]
+

Add elements from srcobj to self under the appropriate +group element, entries to be added must have a child element +<file> with value “infile”

+
+ +
+
+check_if_comp_var(vid, attribute=None, node=None)[source]
+
+ +
+
+cleanupnode(node)[source]
+

in env_base.py, not expected to get here

+
+ +
+
+compare_xml(other, root=None, otherroot=None)[source]
+
+ +
+
+get_child_content(vid, childname)[source]
+
+ +
+
+get_default_value(node, attributes=None)[source]
+

Set the value of an entry to the default value for that entry

+
+ +
+
+get_description(node)[source]
+
+ +
+
+get_elements_from_child_content(childname, childcontent)[source]
+
+ +
+
+get_groups(node)[source]
+
+ +
+
+get_node_element_info(vid, element_name)[source]
+
+ +
+
+get_nodes_by_id(vid)[source]
+
+ +
+
+get_type_info(vid)[source]
+
+ +
+
+get_valid_value_string(node, value, vid=None, ignore_type=False)[source]
+
+ +
+
+get_valid_values(vid)[source]
+
+ +
+
+get_value(vid, attribute=None, resolved=True, subgroup=None)[source]
+

Get a value for entry with id attribute vid. +or from the values field if the attribute argument is provided +and matches

+
+ +
+
+get_value_match(vid, attributes=None, exact_match=False, entry_node=None, replacement_for_none=None)[source]
+

Handle this case: +<entry id …>

+
+
+
<values>

<value A=”a1”>X</value> +<value A=”a2”>Y</value> +<value A=”a3” B=”b1”>Z</value>

+
+
+

</values>

+
+

</entry>

+

If replacement_for_none is provided, then: if the found text value would give a +None value, instead replace it with the value given by the replacement_for_none +argument. (However, still return None if no match is found.) This may or may not +be needed, but is in place to maintain some old logic.

+
+ +
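The attribute-matching selection illustrated by the `<values>` example above can be sketched as follows. This is a simplified, hypothetical model of the documented behaviour (not the CIME method): each candidate is a pair of its attribute dict and its text, a candidate is rejected if any of its attributes contradicts the query, and the most specific surviving match wins.

```python
def match_value(values, attributes, exact_match=False):
    """Pick the <value> text whose attributes best match the query.

    `values` is a list of (attrib_dict, text) pairs; `attributes` is the
    query. With exact_match, the candidate's attribute set must equal the
    query's. Returns None when nothing matches.
    """
    best, best_score = None, -1
    for attribs, text in values:
        # Reject if any attribute on the candidate contradicts the query.
        if any(attributes.get(k) != v for k, v in attribs.items()):
            continue
        if exact_match and set(attribs) != set(attributes):
            continue
        score = len(attribs)  # prefer the most specific match
        if score > best_score:
            best, best_score = text, score
    return best


# The three <value> entries from the example above.
values = [({"A": "a1"}, "X"), ({"A": "a2"}, "Y"), ({"A": "a3", "B": "b1"}, "Z")]
```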
+
+get_values(vid, attribute=None, resolved=True, subgroup=None)[source]
+

Same functionality as get_value, but it returns a list; if the +value in xml contains commas, the list has multiple elements split on +commas

+
+ +
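The comma-splitting behaviour described for get_values can be sketched with a small helper. This is a hypothetical illustration of the documented splitting, not the CIME method itself.

```python
def split_listified_value(raw):
    """Return a list: split on commas when present, else a one-element list."""
    if raw is None:
        return []
    # Strip surrounding whitespace from each element after splitting.
    return [part.strip() for part in str(raw).split(",")]


vals = split_listified_value("gx1v7, gx3v7 ,tx0.66v1")
single = split_listified_value("gx1v7")
```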
+
+overwrite_existing_entries()[source]
+
+ +
+
+set_default_value(vid, val)[source]
+
+ +
+
+set_valid_values(vid, new_valid_values)[source]
+
+ +
+
+set_value(vid, value, subgroup=None, ignore_type=False)[source]
+

Set the value of an entry-id field to value +Returns the value or None if not found +subgroup is ignored in the general routine and applied in specific methods

+
+ +
+ +
+
+

CIME.XML.env_archive module

+

Interface to the env_archive.xml file. This class inherits from EnvBase

+
+
+class CIME.XML.env_archive.EnvArchive(case_root=None, infile='env_archive.xml', read_only=False)[source]
+

Bases: ArchiveBase, EnvBase

+
+
+get_entries()[source]
+
+ +
+
+get_entry_info(archive_entry)[source]
+
+ +
+
+get_rpointer_contents(archive_entry)[source]
+
+ +
+
+get_type_info(vid)[source]
+
+ +
+ +
+
+

CIME.XML.env_base module

+

Base class for env files. This class inherits from EntryID.py

+
+
+class CIME.XML.env_base.EnvBase(case_root, infile, schema=None, read_only=False)[source]
+

Bases: EntryID

+
+
+change_file(newfile, copy=False)[source]
+
+ +
+
+check_if_comp_var(vid, attribute=None, node=None)[source]
+
+ +
+
+cleanupnode(node)[source]
+

Remove the <group>, <file>, <values> and <value> childnodes from node

+
+ +
+
+get_children(name=None, attributes=None, root=None)[source]
+

This is the critical function; its interface and performance are crucial.

+

You can specify attributes={key:None} if you want to select children +with the key attribute but you don’t care what its value is.

+
+ +
+
+get_nodes_by_id(varid)[source]
+
+ +
+
+get_value(vid, attribute=None, resolved=True, subgroup=None)[source]
+

Get a value for entry with id attribute vid. +or from the values field if the attribute argument is provided +and matches

+
+ +
+
+scan_children(nodename, attributes=None, root=None)[source]
+
+ +
+
+set_components(components)[source]
+
+ +
+
+set_value(vid, value, subgroup=None, ignore_type=False)[source]
+

Set the value of an entry-id field to value +Returns the value or None if not found +subgroup is ignored in the general routine and applied in specific methods

+
+ +
+ +
+
+

CIME.XML.env_batch module

+

Interface to the env_batch.xml file. This class inherits from EnvBase

+
+
+class CIME.XML.env_batch.EnvBatch(case_root=None, infile='env_batch.xml', read_only=False)[source]
+

Bases: EnvBase

+
+
+cancel_job(jobid)[source]
+
+ +
+
+cleanupnode(node)[source]
+

Remove the <group>, <file>, <values> and <value> childnodes from node

+
+ +
+
+compare_xml(other)[source]
+
+ +
+
+create_job_groups(batch_jobs, is_test)[source]
+
+ +
+
+get_all_queues(name=None)[source]
+
+ +
+
+get_batch_directives(case, job, overrides=None, output_format='default')[source]
+
+ +
+
+get_batch_mail_type(mail_type)[source]
+
+ +
+
+get_batch_system_type()[source]
+
+ +
+
+get_children(name=None, attributes=None, root=None)[source]
+

This is the critical function; its interface and performance are crucial.

+

You can specify attributes={key:None} if you want to select children +with the key attribute but you don’t care what its value is.

+
+ +
+
+get_default_queue()[source]
+
+ +
+
+get_job_id(output)[source]
+
+ +
+
+get_job_overrides(job, case)[source]
+
+ +
+
+get_jobs()[source]
+
+ +
+
+get_queue_specs(qnode)[source]
+

Get queue specifications from node.

+

Returns (nodemin, nodemax, jobname, walltimemax, jobmin, jobmax, is_strict)

+
+ +
+
+get_status(jobid)[source]
+
+ +
+
+get_submit_args(case, job, resolve=True)[source]
+

Return a list of tuples (flag, name)

+
+ +
+
+get_type_info(vid)[source]
+
+ +
+
+get_value(item, attribute=None, resolved=True, subgroup=None)[source]
+

Must default subgroup to something in order to provide single return value

+
+ +
+
+make_all_batch_files(case)[source]
+
+ +
+
+make_batch_script(input_template, job, case, outfile=None)[source]
+
+ +
+
+queue_meets_spec(queue, num_nodes, num_tasks, walltime=None, job=None)[source]
+
+ +
+
+select_best_queue(num_nodes, num_tasks, name=None, walltime=None, job=None)[source]
+
+ +
+
+set_batch_system(batchobj, batch_system_type=None)[source]
+
+ +
+
+set_batch_system_type(batchtype)[source]
+
+ +
+
+set_job_defaults(batch_jobs, case)[source]
+
+ +
+
+set_value(item, value, subgroup=None, ignore_type=False)[source]
+

Override the entry_id set_value function with some special cases for this class

+
+ +
+
+submit_jobs(case, no_batch=False, job=None, user_prereq=None, skip_pnl=False, allow_fail=False, resubmit_immediate=False, mail_user=None, mail_type=None, batch_args=None, dry_run=False, workflow=True)[source]
+

no_batch indicates that the jobs should be run directly rather than submitted to a queueing system +job is the first job in the workflow sequence to start +user_prereq is a batch system prerequisite as requested by the user +skip_pnl indicates that the preview_namelist should not be run by this job +allow_fail indicates that the prereq job need only complete, not necessarily successfully, to start the next job +resubmit_immediate indicates that all jobs indicated by the RESUBMIT option should be submitted at the same time instead of

+
+

waiting to resubmit at the end of the first sequence

+
+

workflow is a logical indicating whether only “job” is submitted or the workflow sequence starting with “job” is submitted

+
+ +
+ +
+
+CIME.XML.env_batch.get_job_deps(dependency, depid, prev_job=None, user_prereq=None)[source]
+

Gather list of job batch ids that a job depends on.

+
+

Parameters

+
+
dependencystr

List of dependent job names.

+
+
depiddict

Lookup where keys are job names and values are the batch id.

+
+
user_prereqstr

User requested dependency.

+
+
+
+
+

Returns

+
+
list

List of batch ids that job depends on.

+
+
+
+
+ +
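The name-to-batch-id lookup that get_job_deps documents can be sketched as below. This is a hedged approximation under simplifying assumptions (dependency names separated by whitespace, unknown names skipped); the real parsing in CIME may differ.

```python
def get_job_deps_sketch(dependency, depid, prev_job=None, user_prereq=None):
    """Map dependent job names to their batch ids.

    dependency : str of job names; depid : dict of job name -> batch id.
    Names without a known batch id are skipped; the previous job and any
    user-requested prerequisite are appended when given.
    """
    deps = []
    for name in dependency.split():
        jobid = depid.get(name)
        if jobid is not None:
            deps.append(str(jobid))
    if prev_job is not None:
        deps.append(str(prev_job))
    if user_prereq is not None:
        deps.append(str(user_prereq))
    return deps


# "case.st_archive" has no batch id yet, so only case.run's id is returned.
ids = get_job_deps_sketch("case.run case.st_archive",
                          {"case.run": 100, "case.test": 101})
```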
+
+

CIME.XML.env_build module

+

Interface to the env_build.xml file. This class inherits from EnvBase

+
+
+class CIME.XML.env_build.EnvBuild(case_root=None, infile='env_build.xml', components=None, read_only=False)[source]
+

Bases: EnvBase

+
+
+set_value(vid, value, subgroup=None, ignore_type=False)[source]
+

Set the value of an entry-id field to value +Returns the value or None if not found +subgroup is ignored in the general routine and applied in specific methods

+
+ +
+ +
+
+

CIME.XML.env_case module

+

Interface to the env_case.xml file. This class inherits from EnvBase

+
+
+class CIME.XML.env_case.EnvCase(case_root=None, infile='env_case.xml', components=None, read_only=False)[source]
+

Bases: EnvBase

+
+ +
+
+

CIME.XML.env_mach_pes module

+

Interface to the env_mach_pes.xml file. This class inherits from EntryID

+
+
+class CIME.XML.env_mach_pes.EnvMachPes(case_root=None, infile='env_mach_pes.xml', components=None, read_only=False, comp_interface='mct')[source]
+

Bases: EnvBase

+
+
+add_comment(comment)[source]
+
+ +
+
+get_max_thread_count(comp_classes)[source]
+

Find the maximum number of openmp threads for any component in the case

+
+ +
+
+get_spare_nodes(num_nodes)[source]
+
+ +
+
+get_tasks_per_node(total_tasks, max_thread_count)[source]
+
+ +
+
+get_total_nodes(total_tasks, max_thread_count)[source]
+

Return (num_active_nodes, num_spare_nodes)

+
+ +
+
+get_total_tasks(comp_classes, async_interface=False)[source]
+
+ +
+
+get_value(vid, attribute=None, resolved=True, subgroup=None, max_mpitasks_per_node=None, max_cputasks_per_gpu_node=None, ngpus_per_node=None)[source]
+

Get a value for entry with id attribute vid. +or from the values field if the attribute argument is provided +and matches

+
+ +
+
+set_value(vid, value, subgroup=None, ignore_type=False)[source]
+

Set the value of an entry-id field to value +Returns the value or None if not found +subgroup is ignored in the general routine and applied in specific methods

+
+ +
+ +
+
+

CIME.XML.env_mach_specific module

+

Interface to the env_mach_specific.xml file. This class inherits from EnvBase

+
+
+class CIME.XML.env_mach_specific.EnvMachSpecific(caseroot=None, infile='env_mach_specific.xml', components=None, unit_testing=False, read_only=False, standalone_configure=False, comp_interface=None)[source]
+

Bases: EnvBase

+
+
+allow_error()[source]
+

Return True if stderr output from module commands should be assumed +to be an error. Default False. This is necessary since implementations +of environment modules are highly variable and some systems produce +stderr output even when things are working fine.

+
+ +
+
+get_aprun_args(case, attribs, job, overrides=None)[source]
+
+ +
+
+get_aprun_mode(attribs)[source]
+
+ +
+
+get_module_system_cmd_path(lang)[source]
+
+ +
+
+get_module_system_init_path(lang)[source]
+
+ +
+
+get_module_system_type()[source]
+

Return the module system used on this machine

+
+ +
+
+get_mpirun(case, attribs, job, exe_only=False, overrides=None)[source]
+

Find best match, return (executable, {arg_name : text})

+
+ +
+
+get_overrides_nodes(case)[source]
+
+ +
+
+get_type_info(vid)[source]
+
+ +
+
+list_modules()[source]
+
+ +
+
+load_env(case, force_method=None, job=None, verbose=False)[source]
+

Should only be called by case.load_env

+
+ +
+
+make_env_mach_specific_file(shell, case, output_dir='')[source]
+

Writes .env_mach_specific.sh or .env_mach_specific.csh

+

Args: +shell: string - ‘sh’ or ‘csh’ +case: case object +output_dir: string - path to output directory (if empty string, uses current directory)

+
+ +
+
+populate(machobj, attributes=None)[source]
+

Add entries to the file using information from a Machines object. +mpilib must match attributes if set

+
+ +
+
+save_all_env_info(filename)[source]
+

Get a string representation of all current environment info and +save it to file.

+
+ +
+ +
+
+

CIME.XML.env_run module

+

Interface to the env_run.xml file. This class inherits from EnvBase

+
+
+class CIME.XML.env_run.EnvRun(case_root=None, infile='env_run.xml', components=None, read_only=False)[source]
+

Bases: EnvBase

+
+
+get_value(vid, attribute=None, resolved=True, subgroup=None)[source]
+

Get a value for entry with id attribute vid. +or from the values field if the attribute argument is provided +and matches. Special case for pio variables when PIO_ASYNC_INTERFACE is True.

+
+ +
+
+set_value(vid, value, subgroup=None, ignore_type=False)[source]
+

Set the value of an entry-id field to value +Returns the value or None if not found +subgroup is ignored in the general routine and applied in specific methods

+
+ +
+ +
+
+

CIME.XML.env_test module

+

Interface to the env_test.xml file. This class inherits from EnvBase

+
+
+class CIME.XML.env_test.EnvTest(case_root=None, infile='env_test.xml', components=None, read_only=False)[source]
+

Bases: EnvBase

+
+
+add_test(testnode)[source]
+
+ +
+
+cleanupnode(node)[source]
+

keep the values component set

+
+ +
+
+get_settings_for_phase(name, cnt)[source]
+
+ +
+
+get_step_phase_cnt(step)[source]
+
+ +
+
+get_test_parameter(name)[source]
+
+ +
+
+get_value(vid, attribute=None, resolved=True, subgroup=None)[source]
+

Get a value for entry with id attribute vid. +or from the values field if the attribute argument is provided +and matches

+
+ +
+
+run_phase_get_clone_name(phase)[source]
+
+ +
+
+set_initial_values(case)[source]
+

The values to initialize a test are defined in env_test.xml +copy them to the appropriate case env files to initialize a test +ignore fields set in the BUILD and RUN clauses, they are set in +the appropriate build and run phases.

+
+ +
+
+set_test_parameter(name, value)[source]
+

If a node already exists update the value +otherwise create a node and initialize it to value

+
+ +
+
+set_value(vid, value, subgroup=None, ignore_type=False)[source]
+

check if vid is in test section of file

+
+ +
+ +
+
+

CIME.XML.env_workflow module

+

Interface to the env_workflow.xml file. This class inherits from EnvBase

+
+
+class CIME.XML.env_workflow.EnvWorkflow(case_root=None, infile='env_workflow.xml', read_only=False)[source]
+

Bases: EnvBase

+
+
+create_job_groups(batch_jobs, is_test)[source]
+
+ +
+
+get_children(name=None, attributes=None, root=None)[source]
+

This is the critical function; its interface and performance are crucial.

+

You can specify attributes={key:None} if you want to select children +with the key attribute but you don’t care what its value is.

+
+ +
+
+get_job_specs(case, job)[source]
+
+ +
+
+get_jobs()[source]
+
+ +
+
+get_type_info(vid)[source]
+
+ +
+
+get_value(item, attribute=None, resolved=True, subgroup='PRIMARY')[source]
+

Must default subgroup to something in order to provide single return value

+
+ +
+
+set_value(item, value, subgroup=None, ignore_type=False)[source]
+

Override the entry_id set_value function with some special cases for this class

+
+ +
+ +
+
+

CIME.XML.expected_fails_file module

+

Interface to an expected failure xml file

+

Here is an example:

+

<?xml version= “1.0”?>

+
+
<expectedFails version=”1.1”>
+
<test name=”ERP_D_Ld10_P36x2.f10_f10_musgs.IHistClm50BgcCrop.cheyenne_intel.clm-ciso_decStart”>
+
<phase name=”RUN”>

<status>FAIL</status> +<issue>#404</issue>

+
+
+

</phase> +<phase name=”COMPARE_base_rest”>

+
+

<status>PEND</status> +<issue>#404</issue> +<comment>Because of the RUN failure, this phase is listed as PEND</comment>

+
+

</phase>

+
+
+

</test> +<test name=”PFS_Ld20.f09_g17.I2000Clm50BgcCrop.cheyenne_intel”>

+
+
+
<phase name=”GENERATE”>

<status>FAIL</status> +<issue>ESMCI/cime#2917</issue>

+
+
+

</phase> +<phase name=”BASELINE”>

+
+

<status>FAIL</status> +<issue>ESMCI/cime#2917</issue>

+
+

</phase>

+
+

</test>

+
+
+

</expectedFails>

+

However, many of the above elements are optional, for human consumption only (i.e., not +parsed here). The only required elements are given by this example:

+

<?xml version="1.0"?>

+
+
<expectedFails version=”1.1”>
+
<test name=”…”>
+
<phase name=”…”>

<status>…</status>

+
+
+

</phase>

+
+
+

</test>

+
+
+

</expectedFails>

+
+
+class CIME.XML.expected_fails_file.ExpectedFailsFile(infile)[source]
+

Bases: GenericXML

+
+
+get_expected_fails()[source]
+

Returns a dictionary of ExpectedFails objects, where the keys are test names

+
+ +
+ +
+
+

CIME.XML.files module

+

Interface to the config_files.xml file. This class inherits from EntryID.py

+
+
+class CIME.XML.files.Files(comp_interface=None)[source]
+

Bases: EntryID

+
+
+get_components(nodename)[source]
+
+ +
+
+get_schema(nodename, attributes=None)[source]
+
+ +
+
+get_value(vid, attribute=None, resolved=True, subgroup=None)[source]
+

Get a value for entry with id attribute vid, +or from the values field if the attribute argument is provided +and matches.

+
+ +
+
+set_value(vid, value, subgroup=None, ignore_type=False)[source]
+

Set the value of an entry-id field to value. +Returns the value or None if not found. +subgroup is ignored in the general routine and applied in specific methods.

+
+ +
+ +
+
+

CIME.XML.generic_xml module

+

Common interface to XML files, this is an abstract class and is expected to +be used by other XML interface modules and not directly.

+
+
+class CIME.XML.generic_xml.GenericXML(infile=None, schema=None, root_name_override=None, root_attrib_override=None, read_only=True)[source]
+

Bases: object

+
+
+class CacheEntry(tree, root, modtime)
+

Bases: tuple

+
+
+modtime
+

Alias for field number 2

+
+ +
+
+root
+

Alias for field number 1

+
+ +
+
+tree
+

Alias for field number 0

+
+ +
+ +
+
+DISABLE_CACHING = False
+
+ +
+
+add_child(node, root=None, position=None)[source]
+

Add element node to self at root

+
+ +
+
+attrib(node)[source]
+
+ +
+
+change_file(newfile, copy=False)[source]
+
+ +
+
+check_timestamp()[source]
+

Returns True if timestamp matches what is expected

+
+ +
+
+copy(node)[source]
+
+ +
+
+get(node, attrib_name, default=None)[source]
+
+ +
+
+get_child(name=None, attributes=None, root=None, err_msg=None)[source]
+
+ +
+
+get_children(name=None, attributes=None, root=None)[source]
+

This is the critical function; its interface and performance are crucial.

+

You can specify attributes={key:None} if you want to select children +with the key attribute but you don’t care what its value is.

+
+ +
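The attributes={key: None} convention amounts to a per-attribute filter. A hypothetical matcher (not CIME's code) illustrating the selection rule:

```python
def attrs_match(node_attrib, wanted):
    """Attribute filter for selecting children.

    wanted maps attribute name -> required value, or None to require
    only that the attribute be present (its value is ignored).
    """
    for key, value in wanted.items():
        if key not in node_attrib:
            return False
        if value is not None and node_attrib[key] != value:
            return False
    return True

print(attrs_match({"compiler": "intel"}, {"compiler": None}))   # True
print(attrs_match({"compiler": "intel"}, {"compiler": "gnu"}))  # False
```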
+
+get_element_text(element_name, attributes=None, root=None)[source]
+
+ +
+
+get_id()[source]
+
+ +
+
+get_optional_child(name=None, attributes=None, root=None, err_msg=None)[source]
+
+ +
+
+get_raw_record(root=None)[source]
+
+ +
+
+get_resolved_value(raw_value, allow_unresolved_envvars=False)[source]
+

A value in the xml file may contain references to other xml +variables or to environment variables. These are referred to in +the Perl style with $name and $ENV{name}.

+
>>> obj = GenericXML()
+>>> os.environ["FOO"] = "BAR"
+>>> os.environ["BAZ"] = "BARF"
+>>> obj.get_resolved_value("one $ENV{FOO} two $ENV{BAZ} three")
+'one BAR two BARF three'
+>>> obj.get_resolved_value("2 + 3 - 1")
+'4'
+>>> obj.get_resolved_value("0001-01-01")
+'0001-01-01'
+>>> obj.get_resolved_value("$SHELL{echo hi}") == 'hi'
+True
+
+
+
+ +
+
+get_value(item, attribute=None, resolved=True, subgroup=None)[source]
+

get_value is expected to be defined by the derived classes, if you get here +the value was not found in the class.

+
+ +
+
+get_values(vid, attribute=None, resolved=True, subgroup=None)[source]
+
+ +
+
+get_version()[source]
+
+ +
+
+has(node, attrib_name)[source]
+
+ +
+
+classmethod invalidate(filename)[source]
+
+ +
+
+lock()[source]
+

A subclass is doing caching; we need to lock the tree structure +in order to avoid invalidating the cache.

+
+ +
+
+make_child(name, attributes=None, root=None, text=None)[source]
+
+ +
+
+make_child_comment(root=None, text=None)[source]
+
+ +
+
+name(node)[source]
+
+ +
+
+pop(node, attrib_name)[source]
+
+ +
+
+read(infile, schema=None)[source]
+

Read and parse an xml file into the object

+
+ +
+
+read_fd(fd)[source]
+
+ +
+
+remove_child(node, root=None)[source]
+
+ +
+
+scan_child(nodename, attributes=None, root=None)[source]
+

Get an xml element matching nodename with optional attributes.

+

Error unless exactly one match.

+
+ +
+
+scan_children(nodename, attributes=None, root=None)[source]
+
+ +
+
+scan_optional_child(nodename, attributes=None, root=None)[source]
+

Get an xml element matching nodename with optional attributes.

+

Return None if no match.

+
+ +
+
+set(node, attrib_name, value)[source]
+
+ +
+
+set_element_text(element_name, new_text, attributes=None, root=None)[source]
+
+ +
+
+set_name(node, name)[source]
+
+ +
+
+set_text(node, text)[source]
+
+ +
+
+set_value(vid, value, subgroup=None, ignore_type=True)[source]
+

ignore_type is not used in this flavor

+
+ +
+
+text(node)[source]
+
+ +
+
+to_string(node, method='xml', encoding='us-ascii')[source]
+
+ +
+
+unlock()[source]
+
+ +
+
+validate_timestamp()[source]
+
+ +
+
+validate_xml_file(filename, schema)[source]
+

Validate an XML file against a provided schema file using xmllint.

+
+ +
+
+write(outfile=None, force_write=False)[source]
+

Write an xml file from data in self

+
+ +
+ +
+
+

CIME.XML.grids module

+

Common interface to XML files which follow the grids format. +This is not an abstract class, but inherits from the abstract class GenericXML.

+
+
+class CIME.XML.grids.Grids(infile=None, files=None, comp_interface=None)[source]
+

Bases: GenericXML

+
+
+get_grid_info(name, compset, driver)[source]
+

Find the matching grid node

+

Returns a dictionary containing relevant grid variables: domains, gridmaps, etc.

+
+ +
+
+print_values(long_output=None)[source]
+
+ +
+ +
+
+

CIME.XML.headers module

+

Interface to the config_headers.xml file. This class inherits from EntryID.py

+
+
+class CIME.XML.headers.Headers(infile=None)[source]
+

Bases: GenericXML

+
+
+get_header_node(fname)[source]
+
+ +
+ +
+
+

CIME.XML.inputdata module

+

Interface to the config_inputdata.xml file. This class inherits from GenericXML.py

+
+
+class CIME.XML.inputdata.Inputdata(infile=None, files=None)[source]
+

Bases: GenericXML

+
+
+get_next_server(attributes=None)[source]
+
+ +
+ +
+
+

CIME.XML.machines module

+

Interface to the config_machines.xml file. This class inherits from GenericXML.py

+
+
+class CIME.XML.machines.Machines(infile=None, files=None, machine=None, extra_machines_dir=None)[source]
+

Bases: GenericXML

+
+
+get_child(name=None, attributes=None, root=None, err_msg=None)[source]
+
+ +
+
+get_default_MPIlib(attributes=None)[source]
+

Get the MPILIB to use from the list of MPILIBS

+
+ +
+
+get_default_compiler()[source]
+

Get the compiler to use from the list of COMPILERS

+
+ +
+
+get_extra_machines_dir()[source]
+
+ +
+
+get_field_from_list(listname, reqval=None, attributes=None)[source]
+

Some of the fields have lists of valid values in the xml; parse these +lists and return the first value if reqval is not provided, or reqval +if it is a valid setting for the machine.

+
+ +
+
+get_first_child_nodes(nodename)[source]
+

Return the names of all the child nodes for the target machine

+
+ +
+
+get_machine_name()[source]
+

Return the name of the machine

+
+ +
+
+get_machines_dir()[source]
+

Return the directory of the machines file

+
+ +
+
+get_node_names()[source]
+

Return the names of all the child nodes for the target machine

+
+ +
+
+get_suffix(suffix_type)[source]
+
+ +
+
+get_value(name, attributes=None, resolved=True, subgroup=None)[source]
+

Get Value of fields in the config_machines.xml file

+
+ +
+
+has_batch_system()[source]
+

Return whether this machine has a batch system.

+
+ +
+
+is_valid_MPIlib(mpilib, attributes=None)[source]
+

Check the MPILIB is valid for the current machine

+
+ +
+
+is_valid_compiler(compiler)[source]
+

Check the compiler is valid for the current machine

+
+ +
+
+list_available_machines()[source]
+

Return a list of machines defined for a given CIME_MODEL

+
+ +
+
+print_values()[source]
+
+ +
+
+probe_machine_name(warn=True)[source]
+

Find a matching regular expression for hostname +in the NODENAME_REGEX field in the file. First match wins.

+
+ +
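The first-match-wins probing described above can be sketched as follows (the machine names and regexes here are hypothetical, not entries from a real config_machines.xml):

```python
import re

def probe_machine_name(hostname, nodename_regexes):
    """Return the first machine whose NODENAME_REGEX matches hostname.

    nodename_regexes is a list of (machine_name, regex) pairs;
    the first match wins, mirroring the behavior described above.
    """
    for machine, regex in nodename_regexes:
        if re.match(regex, hostname):
            return machine
    return None

regexes = [("cheyenne", r"cheyenne\d+"), ("melvin", r"melvin.*")]
print(probe_machine_name("cheyenne42", regexes))  # cheyenne
```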
+
+return_values()[source]
+

Return a dictionary of machine info. +This routine is used by external tools in https://github.com/NCAR/CESM_xml2html

+
+ +
+
+set_machine(machine)[source]
+

Sets the machine block in the Machines object

+
>>> machobj = Machines(machine="melvin")
+>>> machobj.get_machine_name()
+'melvin'
+>>> machobj.set_machine("trump") 
+Traceback (most recent call last):
+...
+CIMEError: ERROR: No machine trump found
+
+
+
+ +
+
+set_value(vid, value, subgroup=None, ignore_type=True)[source]
+

ignore_type is not used in this flavor

+
+ +
+ +
+
+

CIME.XML.namelist_definition module

+

Interface to namelist_definition.xml.

+

This module contains only one class, NamelistDefinition, inheriting from +EntryID.

+
+
+class CIME.XML.namelist_definition.CaseInsensitiveDict(data)[source]
+

Bases: dict

+

Basic case-insensitive dict with string-only keys. +From https://stackoverflow.com/a/27890005

+
+
+get(k, default=None)[source]
+

Return the value for key if key is in the dictionary, else default.

+
+ +
+
+proxy = {}
+
+ +
+ +
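The case-insensitive lookup can be sketched as a small dict subclass (illustrative only; CIME's actual class follows the Stack Overflow recipe linked above, and the key name in the demo is made up):

```python
class LowerKeyDict(dict):
    """Minimal case-insensitive dict: keys are normalized to lowercase."""

    def __init__(self, data):
        # store every key lowercased so lookups can normalize the same way
        super().__init__({k.lower(): v for k, v in data.items()})

    def __getitem__(self, key):
        return super().__getitem__(key.lower())

    def __contains__(self, key):
        return super().__contains__(key.lower())

    def get(self, key, default=None):
        """Return the value for key if present, else default."""
        return super().get(key.lower(), default)


d = LowerKeyDict({"Atm_NcpL": 48})
print(d["ATM_NCPL"])  # 48
```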
+
+class CIME.XML.namelist_definition.NamelistDefinition(infile, files=None)[source]
+

Bases: EntryID

+

Class representing variable definitions for a namelist. +This class inherits from EntryID, and supports most inherited methods; +however, set_value is unsupported.

+

Additional public methods: +- dict_to_namelist +- is_valid_value +- validate

+
+
+add_attributes(attributes)[source]
+
+ +
+
+dict_to_namelist(dict_, filename=None)[source]
+

Converts a dictionary of name-value pairs to a Namelist.

+

The input is assumed to be similar to the output of parse when +groupless=True is set. This function uses the namelist definition file +to look up the namelist group associated with each variable, and uses +this information to create a true Namelist object.

+

The optional filename argument can be used to assist in error +reporting when the namelist comes from a specific, known file.

+
+ +
+
+get_attributes()[source]
+

Return this object’s attributes dictionary

+
+ +
+
+get_default_value(item, attribute=None)[source]
+

Return the default value for the variable named item.

+

The return value is a list of strings corresponding to the +comma-separated list of entries for the value (length 1 for scalars). If +there is no default value in the file, this returns None.

+
+ +
+
+get_entry_nodes()[source]
+
+ +
+
+get_group(name)[source]
+
+ +
+
+get_group_name(node=None)[source]
+
+ +
+
+get_input_pathname(name)[source]
+
+ +
+
+get_per_stream_entries()[source]
+
+ +
+
+get_value_match(vid, attributes=None, exact_match=True, entry_node=None)[source]
+

Return the default value for the variable named vid.

+

The return value is a list of strings corresponding to the +comma-separated list of entries for the value (length 1 for scalars). If +there is no default value in the file, this returns None.

+
+ +
+
+is_valid_value(name, value)[source]
+

Determine whether a value is valid for the named variable.

+

The value argument must be a list of strings formatted as they would +appear in the namelist (even for scalar variables, in which case the +length of the list is always 1).

+
+ +
+
+rename_group(oldgroup, newgroup)[source]
+
+ +
+
+set_node_values(name, node)[source]
+
+ +
+
+set_nodes(skip_groups=None)[source]
+

Populates the object data types for all nodes that are not part of the skip_groups array. +Returns nodes that do not have attributes of skip_default_entry or per_stream_entry.

+
+ +
+
+set_value(vid, value, subgroup=None, ignore_type=True)[source]
+

This function is not implemented.

+
+ +
+
+split_type_string(name)[source]
+

Split a ‘type’ attribute string into its component parts.

+

The name argument is the variable name. +This is used for error reporting purposes.

+

The return value is a tuple consisting of the type itself, a length +(which is an integer for character variables, otherwise None), and the +size of the array (which is 1 for scalar variables).

+
+ +
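Assuming Fortran-style type strings such as char*256(3) (base type, optional *LEN for character variables, optional (SIZE) for arrays), the described split can be sketched like this; the format is an assumption, not CIME's implementation:

```python
import re

def split_type_string(type_string):
    """Split a type string into (type, length, array size).

    Hypothetical format: "char*256(3)" -> ("char", 256, 3);
    length is None for non-character types, size defaults to 1.
    """
    m = re.match(r"^(\w+)(?:\*(\d+))?(?:\((\d+)\))?$", type_string)
    if m is None:
        raise ValueError("malformed type string: " + type_string)
    base, length, size = m.groups()
    return (base,
            int(length) if length is not None else None,
            int(size) if size is not None else 1)

print(split_type_string("char*256(3)"))  # ('char', 256, 3)
print(split_type_string("integer"))      # ('integer', None, 1)
```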
+
+validate(namelist, filename=None)[source]
+

Validate a namelist object against this definition.

+

The optional filename argument can be used to assist in error +reporting when the namelist comes from a specific, known file.

+
+ +
+ +
+
+

CIME.XML.pes module

+

Interface to the config_pes.xml file. This class inherits from GenericXML.py

+
+
+class CIME.XML.pes.Pes(infile, files=None)[source]
+

Bases: GenericXML

+
+
+find_pes_layout(grid, compset, machine, pesize_opts='M', mpilib=None)[source]
+
+ +
+ +
+
+

CIME.XML.pio module

+

Class for config_pio files . This class inherits from EntryID.py

+
+
+class CIME.XML.pio.PIO(comp_classes, infile=None, files=None)[source]
+

Bases: EntryID

+
+
+check_if_comp_var(vid, attribute=None, node=None)[source]
+
+ +
+
+get_defaults(grid=None, compset=None, mach=None, compiler=None, mpilib=None)[source]
+
+ +
+ +
+
+

CIME.XML.standard_module_setup module

+
+
+

CIME.XML.stream module

+

Interface to the streams.xml style files. This class inherits from GenericXML.py

+

stream files predate cime and so do not conform to entry id format

+
+
+class CIME.XML.stream.Stream(infile=None, files=None)[source]
+

Bases: GenericXML

+
+
+get_value(item, attribute=None, resolved=True, subgroup=None)[source]
+

Get Value of fields in a stream.xml file

+
+ +
+ +
+
+

CIME.XML.test_reporter module

+

Interface to the testreporter xml. This class inherits from GenericXML.py

+
+
+class CIME.XML.test_reporter.TestReporter[source]
+

Bases: GenericXML

+
+
+add_result(test_name, test_status)[source]
+
+ +
+
+push2testdb()[source]
+
+ +
+
+setup_header(tagname, machine, compiler, mpilib, testroot, testtype, baseline)[source]
+
+ +
+ +
+
+

CIME.XML.testlist module

+

Interface to the testlist.xml file. This class inherits from generic_xml.py. +It supports version 2.0 of the testlist.xml file.

+

In version 2 of the file, options can be specified to further refine a test or +set of tests. They can be specified either at the top level, in which case they +apply to all machines/compilers for this test:

+
+
<test …>
+
<options>

<option name=”wallclock”>00:20</option>

+
+
+

</options> +…

+
+
+

</test>

+

or at the level of a particular machine/compiler:

+
+
<test …>
+
<machines>
+
<machine …>
+
<options>

<option name=”wallclock”>00:20</option>

+
+
+

</options>

+
+
+

</machine>

+
+
+

</machines>

+
+
+

</test>

+

Currently supported options are:

+
    +
  • walltime: sets the wallclock limit in the queuing system

  • +
  • memleak_tolerance: specifies the relative memory growth expected for this test

  • +
  • comment: has no effect, but is written out when printing the test list

  • +
  • workflow: adds a workflow to the test

  • +
+
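A sketch of reading the top-level options from a v2 <test> entry, using only the structure shown above (the test name is made up):

```python
import xml.etree.ElementTree as ET

def get_test_options(xml_text):
    """Collect <option name=...> values from a testlist <test> entry.

    Only top-level options are read here; per-machine options live
    under <machines>/<machine> as shown above.
    """
    test = ET.fromstring(xml_text)
    opts = test.find("options")
    if opts is None:
        return {}
    return {o.get("name"): o.text for o in opts.findall("option")}

doc = """<test name="SMS.f09_g17.X">
  <options><option name="wallclock">00:20</option></options>
</test>"""
print(get_test_options(doc))
```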
+
+class CIME.XML.testlist.Testlist(infile, files=None)[source]
+

Bases: GenericXML

+
+
+get_tests(machine=None, category=None, compiler=None, compset=None, grid=None, supported_only=False)[source]
+
+ +
+ +
+
+

CIME.XML.tests module

+

Interface to the config_tests.xml file. This class inherits from GenericEntry

+
+
+class CIME.XML.tests.Tests(infile=None, files=None)[source]
+

Bases: GenericXML

+
+
+get_test_node(testname)[source]
+
+ +
+
+print_values(skip_infrastructure_tests=True)[source]
+

Print each test type and its description.

+

If skip_infrastructure_tests is True, then this does not write +information for tests with the attribute +INFRASTRUCTURE_TEST=”TRUE”.

+
+ +
+
+support_single_exe(case)[source]
+

Checks if case supports --single-exe.

+
+
Raises:

Exception: If system test cannot be found. +Exception: If case does not support --single-exe.

+
+
+
+ +
+ +
+
+

CIME.XML.testspec module

+

Interface to the testspec.xml file. This class inherits from generic_xml.py

+
+
+class CIME.XML.testspec.TestSpec(infile)[source]
+

Bases: GenericXML

+
+
+add_test(compiler, mpilib, testname)[source]
+
+ +
+
+set_header(testroot, machine, testid, baselinetag=None, baselineroot=None)[source]
+
+ +
+
+update_test_status(testname, phase, status)[source]
+
+ +
+ +
+
+

CIME.XML.workflow module

+

Interface to the config_workflow.xml file. This class inherits from GenericXML.py

+
+
+class CIME.XML.workflow.Workflow(infile=None, files=None)[source]
+

Bases: GenericXML

+
+
+get_workflow_jobs(machine, workflowid='default')[source]
+

Return a list of jobs with the first element the name of the script +and the second a dict of qualifiers for the job

+
+ +
+ +
+
+

Module contents

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.baselines.html b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.baselines.html new file mode 100644 index 00000000000..9535d66eadc --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.baselines.html @@ -0,0 +1,411 @@ + + + + + + + CIME.baselines package — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

CIME.baselines package

+
+

Submodules

+
+
+

CIME.baselines.performance module

+
+
+CIME.baselines.performance.get_cpl_mem_usage(cpllog)[source]
+

Read memory usage from coupler log.

+
+

Parameters

+
+
cpllogstr

Path to the coupler log.

+
+
+
+
+

Returns

+
+
list

Memory usage (data, highwater) as recorded by the coupler or empty list.

+
+
+
+
+ +
+
+CIME.baselines.performance.get_cpl_throughput(cpllog)[source]
+

Reads throughput from the coupler log.

+
+

Parameters

+
+
cpllogstr

Path to the coupler log.

+
+
+
+
+

Returns

+
+
int or None

Throughput as recorded by the coupler or None

+
+
+
+
+ +
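Assuming the coupler log contains throughput lines of the form `# simulated years / cmp-day = <value>` (an assumption about the log format; the real routine also handles compressed logs), the parse can be sketched as:

```python
def get_cpl_throughput(lines):
    """Scan coupler-log lines for a throughput entry.

    Returns the last value found as a float, or None if no
    throughput line is present.
    """
    tput = None
    for line in lines:
        if "simulated years / cmp-day" in line:
            tput = float(line.split("=")[1].strip().split()[0])
    return tput

log = ["tStamp_write: model date = ...",
       "# simulated years / cmp-day = 112.27 "]
print(get_cpl_throughput(log))  # 112.27
```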
+
+CIME.baselines.performance.get_latest_cpl_logs(case)[source]
+

find and return the latest cpl log file in the run directory

+
+ +
+
+CIME.baselines.performance.load_coupler_customization(case)[source]
+

Loads customizations from the coupler cime_config directory.

+
+

Parameters

+
+
caseCIME.case.case.Case

Current case object.

+
+
+
+
+

Returns

+
+
CIME.config.Config

Runtime configuration.

+
+
+
+
+ +
+
+CIME.baselines.performance.perf_compare_memory_baseline(case, baseline_dir=None)[source]
+

Compares model highwater memory usage.

+
+

Parameters

+
+
caseCIME.case.case.Case

Current case object.

+
+
baseline_dirstr

Overrides the baseline directory.

+
+
+
+
+

Returns

+
+
below_tolerancebool

Whether the comparison was below the tolerance.

+
+
commentstr

Provides explanation from comparison.

+
+
+
+
+ +
+
+CIME.baselines.performance.perf_compare_throughput_baseline(case, baseline_dir=None)[source]
+

Compares model throughput.

+
+

Parameters

+
+
caseCIME.case.case.Case

Current case object.

+
+
baseline_dirstr

Overrides the baseline directory.

+
+
+
+
+

Returns

+
+
below_tolerancebool

Whether the comparison was below the tolerance.

+
+
commentstr

Provides explanation from comparison.

+
+
+
+
+ +
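The below_tolerance/comment return pair can be illustrated with a simplified comparison; the 10% tolerance and the message text here are illustrative, not CIME's defaults:

```python
def compare_throughput(current, baseline, tolerance=0.1):
    """Compare current throughput against a baseline value.

    Hypothetical check: the relative slowdown must stay below
    `tolerance`. Returns (below_tolerance, comment).
    """
    diff = (baseline - current) / baseline
    below_tolerance = diff < tolerance
    comment = ("TPUTCOMP: Computation time changed by "
               "{:.2f}% relative to baseline".format(diff * 100))
    return below_tolerance, comment

ok, msg = compare_throughput(current=108.0, baseline=112.0)
print(ok)  # True
```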
+
+CIME.baselines.performance.perf_get_memory(case, config)[source]
+

Gets the model memory usage.

+

First attempts to use a coupler defined method to retrieve the +model's memory usage. If this is not defined, then the default +method of parsing the coupler log is used.

+
+

Parameters

+
+
caseCIME.case.case.Case

Current case object.

+
+
+
+
+

Returns

+
+
str or None

Model memory usage.

+
+
+
+
+ +
+
+CIME.baselines.performance.perf_get_memory_list(case, cpllog)[source]
+
+ +
+
+CIME.baselines.performance.perf_get_throughput(case, config)[source]
+

Gets the model throughput.

+

First attempts to use a coupler defined method to retrieve the +model's throughput. If this is not defined, then the default +method of parsing the coupler log is used.

+
+

Parameters

+
+
caseCIME.case.case.Case

Current case object.

+
+
+
+
+

Returns

+
+
str or None

Model throughput.

+
+
+
+
+ +
+
+CIME.baselines.performance.perf_write_baseline(case, basegen_dir, throughput=True, memory=True)[source]
+

Writes the baseline performance files.

+
+

Parameters

+
+
caseCIME.case.case.Case

Current case object.

+
+
basegen_dirstr

Path to baseline directory.

+
+
throughputbool

If true, write throughput baseline.

+
+
memorybool

If true, write memory baseline.

+
+
+
+
+ +
+
+CIME.baselines.performance.read_baseline_file(baseline_file)[source]
+

Reads value from baseline_file.

+

Strips comments and returns the raw content to be decoded.

+
+

Parameters

+
+
baseline_filestr

Path to the baseline file.

+
+
+
+
+

Returns

+
+
str

Value stored in baseline file without comments.

+
+
+
+
+ +
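A sketch of the comment-stripping behavior, operating on a string rather than a file path (the real routine reads from disk):

```python
def strip_baseline_comments(text):
    """Return baseline-file content with '#' comment lines removed."""
    lines = [line for line in text.splitlines()
             if not line.strip().startswith("#")]
    return "\n".join(lines)

content = "# generated by perf_write_baseline\n112.27\n"
print(strip_baseline_comments(content))  # 112.27
```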
+
+CIME.baselines.performance.write_baseline_file(baseline_file, value, mode='a')[source]
+

Writes value to baseline_file.

+
+

Parameters

+
+
baseline_filestr

Path to the baseline file.

+
+
valuestr

Value to write.

+
+
modestr

Mode to open file with.

+
+
+
+
+ +
+
+

Module contents

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.build_scripts.html b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.build_scripts.html new file mode 100644 index 00000000000..4025c8014aa --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.build_scripts.html @@ -0,0 +1,183 @@ + + + + + + + CIME.build_scripts package — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

CIME.build_scripts package

+
+

Module contents

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.case.html b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.case.html new file mode 100644 index 00000000000..9cc13bd54f7 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.case.html @@ -0,0 +1,834 @@ + + + + + + + CIME.case package — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

CIME.case package

+
+

Submodules

+
+
+

CIME.case.case module

+

Wrapper around all env XML for a case.

+

All interaction with and between the module files in XML/ takes place +through the Case module.

+
+
+class CIME.case.case.Case(case_root=None, read_only=True, record=False, non_local=False)[source]
+

Bases: object

+

https://github.com/ESMCI/cime/wiki/Developers-Introduction +The Case class is the heart of the CIME Case Control system. All +interactions with a Case take place through this class. All of the +variables used to create and manipulate a case are defined in xml +files, and for every xml file there is a python class to interact +with that file.

+

XML files which are part of the CIME distribution and are meant to +be readonly with respect to a case are typically named +config_something.xml and the corresponding python Class is +Something and can be found in file CIME.XML.something.py. I’ll +refer to these as the CIME config classes.

+

XML files which are part of a case and thus are read/write to a +case are typically named env_whatever.xml and the corresponding +python modules are CIME.XML.env_whatever.py and classes are +EnvWhatever. I’ll refer to these as the Case env classes.

+

The Case class includes an array of the Case env classes. In the +configure function and its supporting functions defined below, +the case object creates and manipulates the Case env classes +by reading and interpreting the CIME config classes.

+

This class extends across multiple files, class members external to this file +are listed in the following imports

+
+
+apply_user_mods(user_mods_dirs=None)[source]
+

User mods can be specified on the create_newcase command line (usually when called from create test) +or they can be in the compset definition, or both.

+

If user_mods_dirs is specified, it should be a list of paths giving the user mods +specified on the create_newcase command line.

+
+ +
+
+archive_last_restarts(archive_restdir, rundir, last_date=None, link_to_restart_files=False)
+

Convenience function for archiving just the last set of restart +files to a given directory. This also saves files attached to the +restart set, such as rpointer files and necessary history +files. However, it does not save other files that are typically +archived (e.g., history files, log files).

+

Files are copied to the directory given by archive_restdir.

+

If link_to_restart_files is True, then symlinks rather than copies +are done for the restart files. (This has no effect on the history +files that are associated with these restart files.)

+
+ +
+
+cancel_batch_jobs(jobids)[source]
+
+ +
+
+case_cmpgen_namelists(compare=False, generate=False, compare_name=None, generate_name=None, baseline_root=None, logfile_name='TestStatus.log')
+
+ +
+
+case_run(skip_pnl=False, set_continue_run=False, submit_resubmits=False)
+
+ +
+
+case_setup(clean=False, test_mode=False, reset=False, keep=None)
+
+ +
+
+case_st_archive(last_date_str=None, archive_incomplete_logs=True, copy_only=False, resubmit=True)
+

Create archive object and perform short term archiving

+
+ +
+
+case_test(testname=None, reset=False, skip_pnl=False)
+
+ +
+
+check_DA_settings()
+
+ +
+
+check_all_input_data(protocol=None, address=None, input_data_root=None, data_list_dir='Buildconf', download=True, chksum=False)
+

Read through all files of the form *.input_data_list in the data_list_dir directory. These files +contain a list of input and boundary files needed by each model component. For each file in the +list, confirm that it is available in input_data_root and, if not, optionally download it from a +server at address using protocol. Perform a chksum of the downloaded file.

+
+ +
+
+check_case(skip_pnl=False, chksum=False)
+
+ +
+
+check_if_comp_var(vid)[source]
+
+ +
+
+check_input_data(protocol='svn', address=None, input_data_root=None, data_list_dir='Buildconf', download=False, user=None, passwd=None, chksum=False, ic_filepath=None)
+

For a given case check for the relevant input data as specified in data_list_dir/*.input_data_list +in the directory input_data_root, if not found optionally download it using the servers specified +in config_inputdata.xml. If a chksum file is available compute the chksum and compare it to that +in the file. +Return True if no files missing

+
+ +
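The chksum verification can be sketched as a standard digest over file blocks; the choice of md5 here is an assumption about the chksum format, and the helper name is hypothetical:

```python
import hashlib
import os
import tempfile

def file_chksum(path):
    """md5 hex digest of a file, read in fixed-size blocks."""
    h = hashlib.md5()
    with open(path, "rb") as fd:
        for block in iter(lambda: fd.read(65536), b""):
            h.update(block)
    return h.hexdigest()

# demo on a throwaway temporary file
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"hello")
tmp.close()
digest = file_chksum(tmp.name)
os.unlink(tmp.name)
print(digest)  # 5d41402abc4b2a76b9719d911017c592
```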
+
+check_lockedfile(filebase)
+
+ +
+
+check_lockedfiles(skip=None)
+

Check that all lockedfiles match what’s in case

+

If caseroot is not specified, it is set to the current working directory

+
+ +
+
+check_pelayouts_require_rebuild(models)
+

Create if we require a rebuild, expects cwd is caseroot

+
+ +
+
+check_timestamps(short_name=None)[source]
+
+ +
+
+clean_up_lookups(allow_undefined=False)[source]
+
+ +
+
+configure(compset_name, grid_name, machine_name=None, project=None, pecount=None, compiler=None, mpilib=None, pesfile=None, gridfile=None, multi_driver=False, ninst=1, test=False, walltime=None, queue=None, output_root=None, run_unsupported=False, answer=None, input_dir=None, driver=None, workflowid='default', non_local=False, extra_machines_dir=None, case_group=None, ngpus_per_node=0, gpu_type=None, gpu_offload=None)[source]
+
+ +
+
+copy(newcasename, newcaseroot, newcimeroot=None, newsrcroot=None)[source]
+
+ +
+
+create(casename, srcroot, compset_name, grid_name, user_mods_dirs=None, machine_name=None, project=None, pecount=None, compiler=None, mpilib=None, pesfile=None, gridfile=None, multi_driver=False, ninst=1, test=False, walltime=None, queue=None, output_root=None, run_unsupported=False, answer=None, input_dir=None, driver=None, workflowid='default', non_local=False, extra_machines_dir=None, case_group=None, ngpus_per_node=0, gpu_type=None, gpu_offload=None)[source]
+
+ +
+
+create_caseroot(clone=False)[source]
+
+ +
+
+create_clone(newcaseroot, keepexe=False, mach_dir=None, project=None, cime_output_root=None, exeroot=None, rundir=None, user_mods_dirs=None)
+

Create a case clone

+

If exeroot or rundir are provided (not None), sets these directories +to the given paths; if not provided, uses default values for these +directories. It is an error to provide exeroot if keepexe is True.

+
+ +
+
+create_dirs()
+

Make necessary directories for case

+
+ +
+
+create_namelists(component=None)
+

Create component namelists

+
+ +
+
+fix_sys_argv_quotes(cmd)[source]
+

Fixes removed quotes from argument list.

+

Restores quotes to –val and KEY=VALUE from sys.argv.

+
+ +
+
+flush(flushall=False)[source]
+
+ +
+
+get_baseline_dir()[source]
+
+ +
+
+get_batch_jobs()[source]
+
+ +
+
+get_build_threaded()[source]
+

Returns True if current settings require a threaded build/run.

+
+ +
+
+get_case_root()[source]
+

Returns the root directory for this case.

+
+ +
+
+get_compset_components()[source]
+
+ +
+
+get_compset_var_settings(files)[source]
+
+ +
+
+get_env(short_name, allow_missing=False)[source]
+
+ +
+
+get_first_job()[source]
+
+ +
+
+get_job_id(output)[source]
+
+ +
+
+get_job_info()[source]
+

Get information on batch jobs associated with this case

+
+ +
+
+get_latest_cpl_log(coupler_log_path=None, cplname='cpl')[source]
+

find and return the latest cpl log file in the +coupler_log_path directory

+
+ +
+
+get_mpirun_cmd(job=None, allow_unresolved_envvars=True, overrides=None)[source]
+
+ +
+
+get_primary_component()[source]
+
+ +
+
+get_primary_job()[source]
+
+ +
+
+get_record_fields(variable, field)[source]
+

get_record_fields gets an individual requested field from an entry_id file. +This routine is used only by xmlquery.

+
+ +
+
+get_resolved_value(item, recurse=0, allow_unresolved_envvars=False)[source]
+
+ +
+
+get_type_info(item)[source]
+
+ +
+
+get_value(item, attribute=None, resolved=True, subgroup=None)[source]
+
+ +
+
+get_values(item, attribute=None, resolved=True, subgroup=None)[source]
+
+ +
+
+initialize_derived_attributes()[source]
+

These are derived variables which can be used in the config_* files +for variable substitution using the {{ var }} syntax

+
+ +
+
+is_save_timing_dir_project(project)[source]
+

Check whether the project is permitted to archive performance data in the location +specified for the current machine

+
+ +
+
+load_env(reset=False, job=None, verbose=False)[source]
+
+ +
+
+new_hash()[source]
+

Creates a hash

+
+ +
+
+preview_run(write, job)[source]
+
+ +
+
+read_xml()[source]
+
+ +
+
+record_cmd(cmd=None, init=False)[source]
+
+ +
+
+report_job_status()[source]
+
+ +
+
+restore_from_archive(rest_dir=None, dout_s_root=None, rundir=None, test=False)
+

Take archived restart files and load them into the current case. Use rest_dir if provided, otherwise use the most recent. +restore_from_archive is a member of Class Case

+
+ +
+
+set_comp_classes(comp_classes)[source]
+
+ +
+
+set_file(xmlfile)[source]
+

force the case object to consider only xmlfile

+
+ +
+
+set_initial_test_values()[source]
+
+ +
+
+set_lookup_value(item, value)[source]
+
+ +
+
+set_model_version(model)[source]
+
+ +
+
+set_valid_values(item, valid_values)[source]
+

Update or create a valid_values entry for item and populate it

+
+ +
+
+set_value(item, value, subgroup=None, ignore_type=False, allow_undefined=False, return_file=False)[source]
+

If a file has been defined, and the variable is in the file, +then that value will be set in the file object and the resolved value +is returned unless return_file is True, in which case (resolved_value, filename) +is returned where filename is the name of the modified file.

+
+ +
+
+stage_refcase(input_data_root=None, data_list_dir=None)
+

Get a REFCASE for a hybrid or branch run +This is the only case in which we are downloading an entire directory instead of +a single file at a time.

+
+ +
+
+submit(job=None, no_batch=False, prereq=None, allow_fail=False, resubmit=False, resubmit_immediate=False, skip_pnl=False, mail_user=None, mail_type=None, batch_args=None, workflow=True, chksum=False)
+
+ +
+
+submit_jobs(no_batch=False, job=None, skip_pnl=None, prereq=None, allow_fail=False, resubmit_immediate=False, mail_user=None, mail_type=None, batch_args=None, dry_run=False, workflow=True)[source]
+
+ +
+
+test_env_archive(testdir='env_archive_test')
+
+ +
+
+test_st_archive(testdir='st_archive_test')
+
+ +
+
+update_env(new_object, env_file, blow_away=False)[source]
+

Replace a case env object file

+
+ +
+
+valid_compset(compset_name, compset_alias, files)[source]
+

Add stub models missing in <compset_name>, return full compset name. +<files> is used to collect the set of all supported components.

+
+ +
+ +
+
+

CIME.case.case_clone module

+

create_clone is a member of the Case class from file case.py

+
+
+CIME.case.case_clone.create_clone(self, newcaseroot, keepexe=False, mach_dir=None, project=None, cime_output_root=None, exeroot=None, rundir=None, user_mods_dirs=None)[source]
+

Create a case clone

+

If exeroot or rundir are provided (not None), sets these directories +to the given paths; if not provided, uses default values for these +directories. It is an error to provide exeroot if keepexe is True.

+
+ +
+
+

CIME.case.case_cmpgen_namelists module

+

Library for case.cmpgen_namelists. +case_cmpgen_namelists is a member of class Case from file case.py

+
+
+CIME.case.case_cmpgen_namelists.case_cmpgen_namelists(self, compare=False, generate=False, compare_name=None, generate_name=None, baseline_root=None, logfile_name='TestStatus.log')[source]
+
+ +
+
+

CIME.case.case_run module

+

case_run is a member of Class Case

+
+
+CIME.case.case_run.case_run(self, skip_pnl=False, set_continue_run=False, submit_resubmits=False)[source]
+
+ +
+
+

CIME.case.case_setup module

+

Library for case.setup. +case_setup is a member of class Case from file case.py

+
+
+CIME.case.case_setup.case_setup(self, clean=False, test_mode=False, reset=False, keep=None)[source]
+
+ +
+
+

CIME.case.case_st_archive module

+

short term archiving +case_st_archive, restore_from_archive, archive_last_restarts +are members of class Case from file case.py

+
+
+CIME.case.case_st_archive.archive_last_restarts(self, archive_restdir, rundir, last_date=None, link_to_restart_files=False)[source]
+

Convenience function for archiving just the last set of restart +files to a given directory. This also saves files attached to the +restart set, such as rpointer files and necessary history +files. However, it does not save other files that are typically +archived (e.g., history files, log files).

+

Files are copied to the directory given by archive_restdir.

+

If link_to_restart_files is True, then symlinks rather than copies +are done for the restart files. (This has no effect on the history +files that are associated with these restart files.)

+
+ +
+
+CIME.case.case_st_archive.case_st_archive(self, last_date_str=None, archive_incomplete_logs=True, copy_only=False, resubmit=True)[source]
+

Create archive object and perform short term archiving

+
+ +
+
+CIME.case.case_st_archive.get_histfiles_for_restarts(rundir, archive, archive_entry, restfile, testonly=False)[source]
+

query restart files to determine history files that are needed for restarts

+

Not doc-testable due to filesystem dependence

+
+ +
+
+CIME.case.case_st_archive.restore_from_archive(self, rest_dir=None, dout_s_root=None, rundir=None, test=False)[source]
+

Take archived restart files and load them into the current case. Use rest_dir if provided, otherwise use the most recent. +restore_from_archive is a member of Class Case

+
+ +
+
+CIME.case.case_st_archive.test_env_archive(self, testdir='env_archive_test')[source]
+
+ +
+
+CIME.case.case_st_archive.test_st_archive(self, testdir='st_archive_test')[source]
+
+ +
+
+

CIME.case.case_submit module

+

case.submit - Submit a cesm workflow to the queueing system or run it +if there is no queueing system. A cesm workflow may include multiple +jobs. +submit, check_case and check_da_settings are members of class Case in file case.py

+
+
+CIME.case.case_submit.check_DA_settings(self)[source]
+
+ +
+
+CIME.case.case_submit.check_case(self, skip_pnl=False, chksum=False)[source]
+
+ +
+
+CIME.case.case_submit.submit(self, job=None, no_batch=False, prereq=None, allow_fail=False, resubmit=False, resubmit_immediate=False, skip_pnl=False, mail_user=None, mail_type=None, batch_args=None, workflow=True, chksum=False)[source]
+
+ +
+
+

CIME.case.case_test module

+

Run a testcase. +case_test is a member of class Case from case.py

+
+
+CIME.case.case_test.case_test(self, testname=None, reset=False, skip_pnl=False)[source]
+
+ +
+
+

CIME.case.check_input_data module

+

API for checking input for testcase

+
+
+CIME.case.check_input_data.check_all_input_data(self, protocol=None, address=None, input_data_root=None, data_list_dir='Buildconf', download=True, chksum=False)[source]
+

Read through all files of the form *.input_data_list in the data_list_dir directory. These files +contain a list of input and boundary files needed by each model component. For each file in the +list confirm that it is available in input_data_root and, if not, optionally download it from a +server at address using protocol. Perform a chksum of the downloaded file.

+
+ +
+
+CIME.case.check_input_data.check_input_data(case, protocol='svn', address=None, input_data_root=None, data_list_dir='Buildconf', download=False, user=None, passwd=None, chksum=False, ic_filepath=None)[source]
+

For a given case check for the relevant input data as specified in data_list_dir/*.input_data_list +in the directory input_data_root; if not found, optionally download it using the servers specified +in config_inputdata.xml. If a chksum file is available, compute the chksum and compare it to that +in the file. +Return True if no files are missing

+
+ +
+
+CIME.case.check_input_data.md5(fname)[source]
+

performs an md5 sum one chunk at a time to avoid memory issues with large files.

+
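The chunked hashing that md5 performs can be sketched with the standard library alone (an illustrative sketch of the documented behavior, not CIME's exact code):

```python
import hashlib

def md5_chunked(fname, chunk_size=1024 * 1024):
    """Compute an md5 digest one chunk at a time so that large
    files never have to be read fully into memory."""
    hash_md5 = hashlib.md5()
    with open(fname, "rb") as fd:
        # iter() with a sentinel yields chunks until read() returns b""
        for chunk in iter(lambda: fd.read(chunk_size), b""):
            hash_md5.update(chunk)
    return hash_md5.hexdigest()
```

The digest is identical to hashing the whole file at once; only the peak memory use differs.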
+ +
+
+CIME.case.check_input_data.stage_refcase(self, input_data_root=None, data_list_dir=None)[source]
+

Get a REFCASE for a hybrid or branch run +This is the only case in which we are downloading an entire directory instead of +a single file at a time.

+
+ +
+
+CIME.case.check_input_data.verify_chksum(input_data_root, rundir, filename, isdirectory)[source]
+

For file in filename perform a chksum and compare the result to that stored in +the local checksumfile, if isdirectory chksum all files in the directory of form .

+
+ +
+
+

CIME.case.check_lockedfiles module

+

API for checking locked files +check_lockedfile, check_lockedfiles, check_pelayouts_require_rebuild are members +of class Case from file case.py

+
+
+CIME.case.check_lockedfiles.check_lockedfile(self, filebase)[source]
+
+ +
+
+CIME.case.check_lockedfiles.check_lockedfiles(self, skip=None)[source]
+

Check that all lockedfiles match what’s in case

+

If caseroot is not specified, it is set to the current working directory

+
+ +
+
+CIME.case.check_lockedfiles.check_pelayouts_require_rebuild(self, models)[source]
+

Check whether we require a rebuild; expects cwd is caseroot

+
+ +
+
+

CIME.case.preview_namelists module

+

API for preview namelist +create_dirs and create_namelists are members of Class case from file case.py

+
+
+CIME.case.preview_namelists.create_dirs(self)[source]
+

Make necessary directories for case

+
+ +
+
+CIME.case.preview_namelists.create_namelists(self, component=None)[source]
+

Create component namelists

+
+ +
+
+

Module contents

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.data.config.html b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.data.config.html new file mode 100644 index 00000000000..16d848315bf --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.data.config.html @@ -0,0 +1,184 @@ + + + + + + + CIME.data.config package — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

CIME.data.config package

+
+

Module contents

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.data.html b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.data.html new file mode 100644 index 00000000000..2b3a131fb2a --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.data.html @@ -0,0 +1,198 @@ + + + + + + + CIME.data package — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

CIME.data package

+
+

Subpackages

+ +
+
+

Module contents

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.data.templates.html b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.data.templates.html new file mode 100644 index 00000000000..7338032628c --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.data.templates.html @@ -0,0 +1,184 @@ + + + + + + + CIME.data.templates package — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

CIME.data.templates package

+
+

Module contents

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.html b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.html new file mode 100644 index 00000000000..6575569aec8 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.html @@ -0,0 +1,5651 @@ + + + + + + + CIME package — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

CIME package

+
+

Subpackages

+
+ +
+
+
+

Submodules

+
+
+

CIME.aprun module

+

Aprun is far too complex to handle purely through XML. We need python +code to compute and assemble aprun commands.

+
+
+CIME.aprun.get_aprun_cmd_for_case(case, run_exe, overrides=None, extra_args=None)[source]
+

Given a case, construct and return the aprun command and optimized node count

+
+ +
+
+

CIME.bless_test_results module

+
+
+CIME.bless_test_results.bless_history(test_name, case, baseline_name, baseline_root, report_only, force)[source]
+
+ +
+
+CIME.bless_test_results.bless_namelists(test_name, report_only, force, pes_file, baseline_name, baseline_root, new_test_root=None, new_test_id=None)[source]
+
+ +
+
+CIME.bless_test_results.bless_test_results(baseline_name, baseline_root, test_root, compiler, test_id=None, namelists_only=False, hist_only=False, report_only=False, force=False, pes_file=None, bless_tests=None, no_skip_pass=False, new_test_root=None, new_test_id=None, exclude=None, bless_tput=False, bless_mem=False, bless_perf=False, **_)[source]
+
+ +
+
+CIME.bless_test_results.is_bless_needed(test_name, ts, broken_blesses, overall_result, no_skip_pass, phase)[source]
+
+ +
+
+

CIME.build module

+

functions for building CIME models

+
+
+class CIME.build.CmakeTmpBuildDir(macroloc=None, rootdir=None, tmpdir=None)[source]
+

Bases: object

+

Use to create a temporary cmake build dir for the purposes of querying +Macros.

+
+
+get_full_tmpdir()[source]
+
+ +
+
+get_makefile_vars(case=None, comp=None, cmake_args=None)[source]
+

Run cmake and process output to a list of variable settings

+

case can be None if caller is providing their own cmake args

+
+ +
+ +
+
+CIME.build.case_build(caseroot, case, sharedlib_only=False, model_only=False, buildlist=None, save_build_provenance=True, separate_builds=False, ninja=False, dry_run=False)[source]
+
+ +
+
+CIME.build.clean(case, cleanlist=None, clean_all=False, clean_depends=None)[source]
+
+ +
+
+CIME.build.generate_makefile_macro(case, caseroot)[source]
+

Generates a flat Makefile macro file based on the CMake cache system. +This macro is only used by certain sharedlibs since components use CMake. +Since indirection based on comp_name is allowed for sharedlibs, each sharedlib must generate +its own macro.

+
+ +
+
+CIME.build.get_standard_cmake_args(case, sharedpath)[source]
+
+ +
+
+CIME.build.get_standard_makefile_args(case, shared_lib=False)[source]
+
+ +
+
+CIME.build.post_build(case, logs, build_complete=False, save_build_provenance=True)[source]
+
+ +
+
+CIME.build.uses_kokkos(case)[source]
+
+ +
+
+CIME.build.xml_to_make_variable(case, varname, cmake=False)[source]
+
+ +
+
+

CIME.buildlib module

+

common utilities for buildlib

+
+
+CIME.buildlib.build_cime_component_lib(case, compname, libroot, bldroot)[source]
+
+ +
+
+CIME.buildlib.parse_input(argv)[source]
+
+ +
+
+CIME.buildlib.run_gmake(case, compclass, compname, libroot, bldroot, libname='', user_cppdefs='')[source]
+
+ +
+
+

CIME.buildnml module

+

common implementation for building namelist commands

+

These are used by components/<model_type>/<component>/cime_config/buildnml

+
+
+CIME.buildnml.build_xcpl_nml(case, caseroot, compname)[source]
+
+ +
+
+CIME.buildnml.copy_inputs_to_rundir(caseroot, compname, confdir, rundir, inst_string)[source]
+
+ +
+
+CIME.buildnml.create_namelist_infile(case, user_nl_file, namelist_infile, infile_text='')[source]
+
+ +
+
+CIME.buildnml.parse_input(argv)[source]
+
+ +
+
+

CIME.code_checker module

+

Libraries for checking python code with pylint

+
+
+CIME.code_checker.check_code(files, num_procs=10, interactive=False)[source]
+

Check all python files in the given directory

+

Returns True if all files had no problems

+
+ +
+
+CIME.code_checker.get_all_checkable_files()[source]
+
+ +
+
+

CIME.compare_namelists module

+
+
+CIME.compare_namelists.compare_namelist_files(gold_file, compare_file, case=None)[source]
+

Returns (is_match, comments)

+
+ +
+
+CIME.compare_namelists.is_namelist_file(file_path)[source]
+
+ +
+
+

CIME.compare_test_results module

+
+
+CIME.compare_test_results.append_status_cprnc_log(msg, logfile_name, test_dir)[source]
+
+ +
+
+CIME.compare_test_results.compare_history(case, baseline_name, baseline_root, log_id)[source]
+
+ +
+
+CIME.compare_test_results.compare_namelists(case, baseline_name, baseline_root, logfile_name)[source]
+
+ +
+
+CIME.compare_test_results.compare_test_results(baseline_name, baseline_root, test_root, compiler, test_id=None, compare_tests=None, namelists_only=False, hist_only=False)[source]
+

Compares with baselines for all matching tests

+

Outputs results for each test to stdout (one line per test); possible status +codes are: PASS, FAIL, SKIP. (A SKIP denotes a test that did not make it to +the run phase or a test for which the run phase did not pass: we skip +baseline comparisons in this case.)

+

In addition, creates files named compare.log.BASELINE_NAME.TIMESTAMP in each +test directory, which contain more detailed output. Also creates +*.cprnc.out.BASELINE_NAME.TIMESTAMP files in each run directory.

+

Returns True if all tests generated either PASS or SKIP results, False if +there was at least one FAIL result.

+
+ +
+
+

CIME.config module

+
+
+class CIME.config.Config[source]
+

Bases: ConfigBase

+
+ +
+
+class CIME.config.ConfigBase[source]
+

Bases: object

+
+
+classmethod instance()[source]
+

Access singleton.

+

Explicit way to access singleton, same as calling constructor.

+
+ +
+
+classmethod load(customize_path)[source]
+
+ +
+
+property loaded
+
+ +
+
+print_rst_table()[source]
+
+ +
+ +
+
+

CIME.cs_status module

+

Implementation of the cs.status script, which prints the status of all +of the tests in one or more test suites

+
+
+CIME.cs_status.cs_status(test_paths, summary=False, fails_only=False, count_fails_phase_list=None, check_throughput=False, check_memory=False, expected_fails_filepath=None, force_rebuild=False, out=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>)[source]
+

Print the test statuses of all tests in test_paths. The default +is to print to stdout, but this can be overridden with the ‘out’ +argument.

+

If summary is True, then only the overall status of each test is printed

+

If fails_only is True, then only test failures are printed (this +includes PENDs as well as FAILs).

+

If count_fails_phase_list is provided, it should be a list of phases +(from the phases given by test_status.ALL_PHASES). For each phase in +this list: do not give line-by-line output; instead, just report the +total number of tests that have not PASSed this phase (this includes +PENDs and FAILs). (This is typically used with the fails_only +option, but it can also be used without that option.)

+

If expected_fails_filepath is provided, it should be a string giving +the full path to a file listing expected failures for this test +suite. Expected failures are then labeled as such in the output.

+
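The count_fails_phase_list bookkeeping amounts to counting, per phase, the tests whose status is not PASS (PENDs and FAILs both count as not-passed). A minimal sketch with hypothetical test names and statuses:

```python
def count_fails(test_statuses, phases):
    """For each phase, count tests that have not PASSed it
    (PEND and FAIL both count as not-passed)."""
    counts = {phase: 0 for phase in phases}
    for statuses in test_statuses.values():
        for phase in phases:
            # A missing phase entry also counts as not-passed
            if statuses.get(phase) != "PASS":
                counts[phase] += 1
    return counts

# Hypothetical statuses for three tests
statuses = {
    "ERS.f19_g16_rx1.A": {"BUILD": "PASS", "RUN": "PASS"},
    "NCK.f19_g16_rx1.A": {"BUILD": "PASS", "RUN": "FAIL"},
    "SMS.f19_g16_rx1.A": {"BUILD": "PEND"},
}
print(count_fails(statuses, ["BUILD", "RUN"]))
```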
+ +
+
+

CIME.cs_status_creator module

+

Creates a test suite-specific cs.status file from a template

+
+
+CIME.cs_status_creator.create_cs_status(test_root, test_id, extra_args='', filename=None)[source]
+

Create a test suite-specific cs.status file from the template

+

Arguments: +test_root (string): path to test root; the file will be put here. If

+
+

this directory doesn’t exist, it is created.

+
+
+
test_id (string): test id for this test suite. This can contain

shell wildcards if you want this one cs.status file to work +across multiple test suites. However, be careful not to make +this too general: for example, ending this with ‘*’ will pick up +the *.ref1 directories for ERI and other tests, which is NOT +what you want.

+
+
extra_args (string): extra arguments to the cs.status command

(If there are multiple arguments, these should be in a space-delimited string.)

+
+
filename (string): name of the generated cs.status file. If not

given, this will be built from the test_id.

+
+
+
+ +
+
+

CIME.date module

+
+
+class CIME.date.date(year=1, month=1, day=1, hour=0, minute=0, second=0)[source]
+

Bases: object

+

Simple struct for holding dates and the time of day and performing comparisons

+

Difference in Hour, Minute, or Second +>>> date(4, 5, 6, 9) == date(4, 5, 6, 8) +False +>>> date(4, 5, 6, 9) != date(4, 5, 6, 8) +True +>>> date(4, 5, 6, 9) < date(4, 5, 6, 8) +False +>>> date(4, 5, 6, 9) <= date(4, 5, 6, 8) +False +>>> date(4, 5, 6, 9) >= date(4, 5, 6, 8) +True +>>> date(4, 5, 6, 9) > date(4, 5, 6, 8) +True

+
>>> date(4, 5, 6, 4) == date(4, 5, 6, 8)
+False
+>>> date(4, 5, 6, 4) != date(4, 5, 6, 8)
+True
+>>> date(4, 5, 6, 4) < date(4, 5, 6, 8)
+True
+>>> date(4, 5, 6, 4) <= date(4, 5, 6, 8)
+True
+>>> date(4, 5, 6, 4) >= date(4, 5, 6, 8)
+False
+>>> date(4, 5, 6, 4) > date(4, 5, 6, 8)
+False
+
+
+

Difference in Day +>>> date(4, 5, 8, 8) == date(4, 5, 6, 8) +False +>>> date(4, 5, 8, 8) != date(4, 5, 6, 8) +True +>>> date(4, 5, 8, 8) < date(4, 5, 6, 8) +False +>>> date(4, 5, 8, 8) <= date(4, 5, 6, 8) +False +>>> date(4, 5, 8, 8) >= date(4, 5, 6, 8) +True +>>> date(4, 5, 8, 8) > date(4, 5, 6, 8) +True

+
>>> date(4, 5, 5, 8) == date(4, 5, 6, 8)
+False
+>>> date(4, 5, 5, 8) != date(4, 5, 6, 8)
+True
+>>> date(4, 5, 5, 8) < date(4, 5, 6, 8)
+True
+>>> date(4, 5, 5, 8) <= date(4, 5, 6, 8)
+True
+>>> date(4, 5, 5, 8) >= date(4, 5, 6, 8)
+False
+>>> date(4, 5, 5, 8) > date(4, 5, 6, 8)
+False
+
+
+

Difference in Month +>>> date(4, 6, 6, 8) == date(4, 5, 6, 8) +False +>>> date(4, 6, 6, 8) != date(4, 5, 6, 8) +True +>>> date(4, 6, 6, 8) < date(4, 5, 6, 8) +False +>>> date(4, 6, 6, 8) <= date(4, 5, 6, 8) +False +>>> date(4, 6, 6, 8) >= date(4, 5, 6, 8) +True +>>> date(4, 6, 6, 8) > date(4, 5, 6, 8) +True

+
>>> date(4, 4, 6, 8) == date(4, 5, 6, 8)
+False
+>>> date(4, 4, 6, 8) != date(4, 5, 6, 8)
+True
+>>> date(4, 4, 6, 8) < date(4, 5, 6, 8)
+True
+>>> date(4, 4, 6, 8) <= date(4, 5, 6, 8)
+True
+>>> date(4, 4, 6, 8) >= date(4, 5, 6, 8)
+False
+>>> date(4, 4, 6, 8) > date(4, 5, 6, 8)
+False
+
+
+

Difference in Year +>>> date(5, 5, 6, 8) == date(4, 5, 6, 8) +False +>>> date(5, 5, 6, 8) != date(4, 5, 6, 8) +True +>>> date(5, 5, 6, 8) < date(4, 5, 6, 8) +False +>>> date(5, 5, 6, 8) <= date(4, 5, 6, 8) +False +>>> date(5, 5, 6, 8) >= date(4, 5, 6, 8) +True +>>> date(5, 5, 6, 8) > date(4, 5, 6, 8) +True

+
>>> date(3, 5, 6, 8) == date(4, 5, 6, 8)
+False
+>>> date(3, 5, 6, 8) != date(4, 5, 6, 8)
+True
+>>> date(3, 5, 6, 8) < date(4, 5, 6, 8)
+True
+>>> date(3, 5, 6, 8) <= date(4, 5, 6, 8)
+True
+>>> date(3, 5, 6, 8) >= date(4, 5, 6, 8)
+False
+>>> date(3, 5, 6, 8) > date(4, 5, 6, 8)
+False
+
+
+
+
+day()[source]
+
+ +
+
+static hms_to_second(hour, minute, second)[source]
+
+ +
+
+hour()[source]
+
+ +
+
+minute()[source]
+
+ +
+
+month()[source]
+
+ +
+
+second()[source]
+
+ +
+
+second_of_day()[source]
+
+ +
+
+static second_to_hms(second)[source]
+
+ +
+
+year()[source]
+
+ +
+ +
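The hms_to_second and second_to_hms helpers listed above are plain time-of-day conversions; a sketch of the arithmetic the names imply (assumed behavior, not copied from the source):

```python
def hms_to_second(hour, minute, second):
    """Convert a time of day to seconds since midnight."""
    return (hour * 60 + minute) * 60 + second

def second_to_hms(second):
    """Split seconds since midnight into (hour, minute, second)."""
    minute, sec = divmod(second, 60)
    hour, minute = divmod(minute, 60)
    return hour, minute, sec

print(hms_to_second(10, 20, 30))   # 37230
print(second_to_hms(37230))        # (10, 20, 30)
```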
+
+CIME.date.get_file_date(filename)[source]
+

Returns the date associated with the filename as a date object representing the correct date +Formats supported: +“%Y-%m-%d_%h.%M.%s” +“%Y-%m-%d_%05s” +“%Y-%m-%d-%05s” +“%Y-%m-%d” +“%Y-%m” +“%Y.%m”

+
>>> get_file_date("./ne4np4_oQU240.cam.r.0001-01-06-00435.nc")
+date(1, 1, 6, 0, 7, 15)
+>>> get_file_date("./ne4np4_oQU240.cam.r.0010-1-06_00435.nc")
+date(10, 1, 6, 0, 7, 15)
+>>> get_file_date("./ne4np4_oQU240.cam.r.0010-10.nc")
+date(10, 10, 1, 0, 0, 0)
+>>> get_file_date("0064-3-8_10.20.30.nc")
+date(64, 3, 8, 10, 20, 30)
+>>> get_file_date("0140-3-5")
+date(140, 3, 5, 0, 0, 0)
+>>> get_file_date("0140-3")
+date(140, 3, 1, 0, 0, 0)
+>>> get_file_date("0140.3")
+date(140, 3, 1, 0, 0, 0)
+
+
+
+ +
+
+

CIME.expected_fails module

+

Contains the definition of a class to hold information on expected failures for a single test

+
+
+class CIME.expected_fails.ExpectedFails[source]
+

Bases: object

+
+
+add_failure(phase, expected_status)[source]
+

Add an expected failure to the list

+
+ +
+
+expected_fails_comment(phase, status)[source]
+

Returns a string giving the expected fails comment for this phase and status

+
+ +
+ +
+
+

CIME.get_tests module

+
+
+CIME.get_tests.get_build_groups(tests)[source]
+

Given a list of tests, return a list of lists, with each list representing +a group of tests that can share executables.

+
>>> tests = ["SMS_P2.f19_g16_rx1.A.melvin_gnu", "SMS_P4.f19_g16_rx1.A.melvin_gnu", "SMS_P2.f19_g16_rx1.X.melvin_gnu", "SMS_P4.f19_g16_rx1.X.melvin_gnu", "TESTRUNSLOWPASS_P1.f19_g16_rx1.A.melvin_gnu", "TESTRUNSLOWPASS_P1.ne30_g16_rx1.A.melvin_gnu"]
+>>> get_build_groups(tests)
+[('SMS_P2.f19_g16_rx1.A.melvin_gnu', 'SMS_P4.f19_g16_rx1.A.melvin_gnu'), ('SMS_P2.f19_g16_rx1.X.melvin_gnu', 'SMS_P4.f19_g16_rx1.X.melvin_gnu'), ('TESTRUNSLOWPASS_P1.f19_g16_rx1.A.melvin_gnu',), ('TESTRUNSLOWPASS_P1.ne30_g16_rx1.A.melvin_gnu',)]
+
+
+
+ +
+
+CIME.get_tests.get_full_test_names(testargs, machine, compiler)[source]
+

Return full test names in the form: +TESTCASE.GRID.COMPSET.MACHINE_COMPILER.TESTMODS +Testmods are optional

+

Testargs can be categories or test names and support the NOT symbol ‘^’

+
>>> get_full_test_names(["cime_tiny"], "melvin", "gnu")
+['ERS.f19_g16_rx1.A.melvin_gnu', 'NCK.f19_g16_rx1.A.melvin_gnu']
+
+
+
>>> get_full_test_names(["cime_tiny", "PEA_P1_M.f45_g37_rx1.A"], "melvin", "gnu")
+['ERS.f19_g16_rx1.A.melvin_gnu', 'NCK.f19_g16_rx1.A.melvin_gnu', 'PEA_P1_M.f45_g37_rx1.A.melvin_gnu']
+
+
+
>>> get_full_test_names(['ERS.f19_g16_rx1.A', 'NCK.f19_g16_rx1.A', 'PEA_P1_M.f45_g37_rx1.A'], "melvin", "gnu")
+['ERS.f19_g16_rx1.A.melvin_gnu', 'NCK.f19_g16_rx1.A.melvin_gnu', 'PEA_P1_M.f45_g37_rx1.A.melvin_gnu']
+
+
+
>>> get_full_test_names(["cime_tiny", "^NCK.f19_g16_rx1.A"], "melvin", "gnu")
+['ERS.f19_g16_rx1.A.melvin_gnu']
+
+
+
>>> get_full_test_names(["cime_test_multi_inherit"], "melvin", "gnu")
+['TESTBUILDFAILEXC_P1.f19_g16_rx1.A.melvin_gnu', 'TESTBUILDFAIL_P1.f19_g16_rx1.A.melvin_gnu', 'TESTMEMLEAKFAIL_P1.f09_g16.X.melvin_gnu', 'TESTMEMLEAKPASS_P1.f09_g16.X.melvin_gnu', 'TESTRUNDIFF_P1.f19_g16_rx1.A.melvin_gnu', 'TESTRUNFAILEXC_P1.f19_g16_rx1.A.melvin_gnu', 'TESTRUNFAIL_P1.f19_g16_rx1.A.melvin_gnu', 'TESTRUNPASS_P1.f19_g16_rx1.A.melvin_gnu', 'TESTRUNPASS_P1.f45_g37_rx1.A.melvin_gnu', 'TESTRUNPASS_P1.ne30_g16_rx1.A.melvin_gnu', 'TESTRUNPASS_P2.ne30_g16_rx1.A.melvin_gnu', 'TESTRUNPASS_P4.f45_g37_rx1.A.melvin_gnu', 'TESTRUNSTARCFAIL_P1.f19_g16_rx1.A.melvin_gnu', 'TESTTESTDIFF_P1.f19_g16_rx1.A.melvin_gnu']
+
+
+
+ +
+ +
>>> get_recommended_test_time("ERS.f19_g16_rx1.A.melvin_gnu")
+'0:10:00'
+
+
+
>>> get_recommended_test_time("TESTRUNPASS_P69.f19_g16_rx1.A.melvin_gnu.testmod")
+'0:13:00'
+
+
+
>>> get_recommended_test_time("PET_Ln20.ne30_ne30.FC5.sandiatoss3_intel.cam-outfrq9s")
+>>>
+
+
+
+ +
+
+CIME.get_tests.get_test_data(suite)[source]
+

For a given suite, returns (inherit, time, share, perf, tests)

+
+ +
+
+CIME.get_tests.get_test_suite(suite, machine=None, compiler=None, skip_inherit=False, skip_tests=None)[source]
+

Return a list of FULL test names for a suite.

+
+ +
+
+CIME.get_tests.get_test_suites()[source]
+
+ +
+
+CIME.get_tests.infer_arch_from_tests(testargs)[source]
+

Return a tuple (machine, [compilers]) that can be inferred from the test args

+
>>> infer_arch_from_tests(["NCK.f19_g16_rx1.A.melvin_gnu"])
+('melvin', ['gnu'])
+>>> infer_arch_from_tests(["NCK.f19_g16_rx1.A"])
+(None, [])
+>>> infer_arch_from_tests(["NCK.f19_g16_rx1.A", "NCK.f19_g16_rx1.A.melvin_gnu"])
+('melvin', ['gnu'])
+>>> infer_arch_from_tests(["NCK.f19_g16_rx1.A.melvin_gnu", "NCK.f19_g16_rx1.A.melvin_gnu"])
+('melvin', ['gnu'])
+>>> infer_arch_from_tests(["NCK.f19_g16_rx1.A.melvin_gnu9", "NCK.f19_g16_rx1.A.melvin_gnu"])
+('melvin', ['gnu9', 'gnu'])
+>>> infer_arch_from_tests(["NCK.f19_g16_rx1.A.melvin_gnu", "NCK.f19_g16_rx1.A.mappy_gnu"])
+Traceback (most recent call last):
+    ...
+CIME.utils.CIMEError: ERROR: Must have consistent machine 'melvin' != 'mappy'
+
+
+
+ +
+
+CIME.get_tests.is_perf_test(test)[source]
+

Is the provided test in a suite with perf=True?

+
>>> is_perf_test("SMS_P2.T42_T42.S.melvin_gnu")
+True
+>>> is_perf_test("SMS_P2.f19_g16_rx1.X.melvin_gnu")
+False
+>>> is_perf_test("PFS_P2.f19_g16_rx1.X.melvin_gnu")
+True
+
+
+
+ +
+
+CIME.get_tests.key_test_time(test_full_name)[source]
+
+ +
+
+CIME.get_tests.suite_has_test(suite, test_full_name, skip_inherit=False)[source]
+
+ +
+
+

CIME.get_timing module

+

Library for implementing getTiming tool which gets timing +information from a run.

+
+
+CIME.get_timing.get_timing(case, lid)[source]
+
+ +
+
+

CIME.hist_utils module

+

Functions for actions pertaining to history files.

+
+
+CIME.hist_utils.compare_baseline(case, baseline_dir=None, outfile_suffix='')[source]
+

compare the current test output to a baseline result

+

case - The case containing the hist files to be compared against baselines +baseline_dir - Optionally, specify a specific baseline dir, otherwise it will be computed from case config +outfile_suffix - if non-blank, then the cprnc output file name ends with

+
+

this suffix (with a ‘.’ added before the given suffix). If None, no output file is saved.

+
+

returns (SUCCESS, comments) +SUCCESS means all hist files matched their corresponding baseline

+
+ +
+
+CIME.hist_utils.compare_test(case, suffix1, suffix2, ignore_fieldlist_diffs=False)[source]
+

Compares two sets of component history files in the testcase directory

+

case - The case containing the hist files to compare +suffix1 - The suffix that identifies the first batch of hist files +suffix1 - The suffix that identifies the second batch of hist files +ignore_fieldlist_diffs (bool): If True, then: If the two cases differ only in their

+
+

field lists (i.e., all shared fields are bit-for-bit, but one case has some +diagnostic fields that are missing from the other case), treat the two cases as +identical.

+
+

returns (SUCCESS, comments, num_compared)

+
+ +
+
+CIME.hist_utils.copy_histfiles(case, suffix, match_suffix=None)[source]
+

Copy the most recent batch of hist files in a case, adding the given suffix.

+

This can allow you to temporarily “save” these files so they won’t be blown +away if you re-run the case.

+

case - The case containing the files you want to save +suffix - The string suffix you want to add to saved files, this can be used to find them later.

+

returns (comments, num_copied)

+
+ +
+
+CIME.hist_utils.cprnc(model, file1, file2, case, rundir, multiinst_driver_compare=False, outfile_suffix='', ignore_fieldlist_diffs=False, cprnc_exe=None)[source]
+

Run cprnc to compare two individual nc files

+

file1 - the full or relative path of the first file +file2 - the full or relative path of the second file +case - the case containing the files +rundir - the rundir for the case +outfile_suffix - if non-blank, then the output file name ends with this

+
+

suffix (with a ‘.’ added before the given suffix). +Use None to avoid permissions issues in the case dir.

+
+
+
ignore_fieldlist_diffs (bool): If True, then: If the two cases differ only in their

field lists (i.e., all shared fields are bit-for-bit, but one case has some +diagnostic fields that are missing from the other case), treat the two cases as +identical.

+
+
returns (True if the files matched, log_name, comment)

where ‘comment’ is either an empty string or one of the module-level constants +beginning with CPRNC_ (e.g., CPRNC_FIELDLISTS_DIFFER)

+
+
+
+ +
+
+CIME.hist_utils.generate_baseline(case, baseline_dir=None, allow_baseline_overwrite=False)[source]
+
+ +
+
+CIME.hist_utils.generate_teststatus(testdir, baseline_dir)[source]
+

CESM stores its TestStatus file in baselines. Do not let exceptions +escape from this function.

+
+ +
+
+CIME.hist_utils.get_ts_synopsis(comments)[source]
+

Reduce case diff comments down to a single line synopsis so that we can put +something in the TestStatus file. It’s expected that the comments provided +to this function came from compare_baseline, not compare_tests.

+
>>> get_ts_synopsis('')
+''
+>>> get_ts_synopsis('big error')
+'big error'
+>>> get_ts_synopsis('big error\n')
+'big error'
+>>> get_ts_synopsis('stuff\n    File foo had a different field list from bar with suffix baz\nPass\n')
+'FIELDLIST field lists differ (otherwise bit-for-bit)'
+>>> get_ts_synopsis('stuff\n    File foo had no compare counterpart in bar with suffix baz\nPass\n')
+'ERROR BFAIL some baseline files were missing'
+>>> get_ts_synopsis('stuff\n    File foo had a different field list from bar with suffix baz\n    File foo had no compare counterpart in bar with suffix baz\nPass\n')
+'MULTIPLE ISSUES: field lists differ and some baseline files were missing'
+>>> get_ts_synopsis('stuff\n    File foo did NOT match bar with suffix baz\nPass\n')
+'DIFF'
+>>> get_ts_synopsis('stuff\n    File foo did NOT match bar with suffix baz\n    File foo had a different field list from bar with suffix baz\nPass\n')
+'DIFF'
+>>> get_ts_synopsis('stuff\n    File foo did NOT match bar with suffix baz\n    File foo had no compare counterpart in bar with suffix baz\nPass\n')
+'DIFF'
+>>> get_ts_synopsis('File foo had no compare counterpart in bar with suffix baz\n File foo had no original counterpart in bar with suffix baz\n')
+'DIFF'
+
+
+
+ +
+
+CIME.hist_utils.rename_all_hist_files(case, suffix)[source]
+

Renaming all hist files in a case, adding the given suffix.

+

case - The case containing the files you want to save +suffix - The string suffix you want to add to saved files, this can be used to find them later.

+
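The renaming described above can be pictured with a minimal sketch. This is an illustration only, not the CIME implementation; the helper name is hypothetical:

```python
def rename_hist_files_with_suffix(filenames, suffix):
    # Illustrative sketch only (not the CIME implementation): append the
    # given suffix to each history file name so the files can be found later.
    return ["{}.{}".format(name, suffix) for name in filenames]

print(rename_hist_files_with_suffix(["case.cam.h0.0001-01.nc"], "base"))
# -> ['case.cam.h0.0001-01.nc.base']
```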
+ +
+
+

CIME.jenkins_generic_job module

+
+
+CIME.jenkins_generic_job.archive_old_test_data(machine, mach_comp, test_id_root, test_root, old_test_archive, avoid_test_id)[source]
+
+ +
+
+CIME.jenkins_generic_job.cleanup_queue(test_root, test_id)[source]
+

Delete all jobs left in the queue

+
+ +
+
+CIME.jenkins_generic_job.delete_old_test_data(mach_comp, test_id_root, scratch_root, test_root, run_area, build_area, archive_area, avoid_test_id)[source]
+
+ +
+
+CIME.jenkins_generic_job.handle_old_test_data(machine, compiler, test_id_root, scratch_root, test_root, avoid_test_id)[source]
+
+ +
+
+CIME.jenkins_generic_job.jenkins_generic_job(generate_baselines, submit_to_cdash, no_batch, baseline_name, arg_cdash_build_name, cdash_project, arg_test_suite, cdash_build_group, baseline_compare, scratch_root, parallel_jobs, walltime, machine, compiler, real_baseline_name, baseline_root, update_success, check_throughput, check_memory, ignore_memleak, ignore_namelists, save_timing, pes_file, jenkins_id, queue)[source]
+

Return True if all tests passed

+
+ +
+
+CIME.jenkins_generic_job.scan_for_test_ids(old_test_archive, mach_comp, test_id_root)[source]
+
+ +
+
+

CIME.locked_files module

+
+
+CIME.locked_files.is_locked(filename, caseroot=None)[source]
+
+ +
+
+CIME.locked_files.lock_file(filename, caseroot=None, newname=None)[source]
+
+ +
+
+CIME.locked_files.unlock_file(filename, caseroot=None)[source]
+
+ +
+
+

CIME.namelist module

+

Module containing tools for dealing with Fortran namelists.

+

The public interface consists of the following functions:
- character_literal_to_string
- compress_literal_list
- expand_literal_list
- fortran_namelist_base_value
- is_valid_fortran_name
- is_valid_fortran_namelist_literal
- literal_to_python_value
- merge_literal_lists
- parse
- string_to_character_literal

+

In addition, the Namelist class represents a namelist held in memory.

+

For the moment, only a subset of namelist syntax is supported; specifically, we +assume that only variables of intrinsic type are used, and indexing/co-indexing +of arrays to set a portion of a variable is not supported. (However, null values +and repeated values may be used to set or fill a variable as indexing would.)

+

We also always assume that a period (“.”) is the decimal separator, not a comma +(“,”). We also assume that the file encoding is UTF-8 or some compatible format +(e.g. ASCII).

+

Otherwise, most Fortran syntax rules implemented here are compatible with +Fortran 2008 (which is largely the same as previous standards, and will be +similar to Fortran 2015). The only exceptions should be cases where (a) we +deliberately prohibit “troublesome” behavior that would be allowed by the +standard, or (b) we rely on conventions shared by all major compilers.

+

One important convention is that newline characters can be used to denote the +end of a record. This makes them equivalent to spaces at most locations in a +Fortran namelist, except that newlines also end comments, and they are ignored +entirely within strings.

+

While the treatment of comments in this module is standard, it may be somewhat +surprising. Namelist comments are only allowed in two situations:

+
1. As the only thing on a line (aside from optional indentation with spaces).
2. Immediately after a “value separator” (the space, newline, comma, or slash after a value).

+

This implies that, in the following example, all lines except the last are syntax errors:

+

`
&group_name! This is not a valid comment because it's after the group name.
foo ! Neither is this, because it's between a name and an equals sign.
= 2 ! Nor this, because it comes between the value and the following comma.
, bar = ! Nor this, because it's between an equals sign and a value.
2! Nor this, because it should be separated from the value by a comma or space.
bazz = 3 ! Nor this, because it comes between the value and the following slash.
/! This is fine, but technically it is outside the namelist, not a comment.
`

+

However, the above would actually be valid if all the “comments” were removed. +The Fortran standard is not clear about whether whitespace is allowed after +inline comments and before subsequent non-whitespace text (!), but this module +allows such whitespace, to preserve the sanity of both implementors and users.

+

The Fortran standard only applies to the interior of namelist groups, and not to +text between one namelist group and the next. This module assumes that namelist +groups are separated by (optional) whitespace and comments, and nothing else.

+
+
+class CIME.namelist.Namelist(groups=None)[source]
+

Bases: object

+

Class representing a Fortran namelist.

+

Public methods:
__init__
delete_variable
get_group_names
get_value
get_variable_names
get_variable_value
merge_nl
set_variable_value
write

+
+
+clean_groups()[source]
+
+ +
+
+delete_variable(group_name, variable_name)[source]
+

Delete a variable from a specified group.

+

If the specified group or variable does not exist, this is a no-op.

+
>>> x = parse(text='&foo bar=1 /')
+>>> x.delete_variable('FOO', 'BAR')
+>>> x.delete_variable('foo', 'bazz')
+>>> x.delete_variable('brack', 'bazz')
+>>> x.get_variable_names('foo')
+[]
+>>> x.get_variable_names('brack')
+[]
+
+
+
+ +
+
+get_group_names()[source]
+

Return a list of all groups in the namelist.

+
>>> Namelist().get_group_names()
+[]
+>>> sorted(parse(text='&foo / &bar /').get_group_names())
+['bar', 'foo']
+
+
+
+ +
+
+get_group_variables(group_name)[source]
+
+ +
+
+get_value(variable_name)[source]
+

Return the value of a uniquely-named variable.

+

This function is similar to get_variable_value, except that it does +not require a group_name, and it requires that the variable_name be +unique across all groups.

+
>>> parse(text='&foo bar=1 / &bazz bar=1 /').get_value('bar')  
+Traceback (most recent call last):
+...
+CIMEError: ERROR: Namelist.get_value: Variable {} is present in multiple groups: ...
+>>> parse(text='&foo bar=1 / &bazz /').get_value('Bar')
+['1']
+>>> parse(text='&foo bar(2)=1 / &bazz /').get_value('Bar(2)')
+['1']
+>>> parse(text='&foo / &bazz /').get_value('bar')
+['']
+
+
+
+ +
+
+get_variable_names(group_name)[source]
+

Return a list of all variables in the given namelist group.

+

If the specified group is not in the namelist, returns an empty list.

+
>>> Namelist().get_variable_names('foo')
+[]
+>>> x = parse(text='&foo bar=,bazz=true,bazz(2)=fred,bang=6*""/')
+>>> sorted(x.get_variable_names('fOo'))
+['bang', 'bar', 'bazz', 'bazz(2)']
+>>> x = parse(text='&foo bar=,bazz=true,bang=6*""/')
+>>> sorted(x.get_variable_names('fOo'))
+['bang', 'bar', 'bazz']
+>>> x = parse(text='&foo bar(::)=,bazz=false,bazz(2)=true,bazz(:2:)=6*""/')
+>>> sorted(x.get_variable_names('fOo'))
+['bar(::)', 'bazz', 'bazz(2)', 'bazz(:2:)']
+
+
+
+ +
+
+get_variable_value(group_name, variable_name)[source]
+

Return the value of the specified variable.

+

This function always returns a non-empty list containing strings. If the +specified group_name or variable_name is not present, [‘’] is +returned.

+
>>> Namelist().get_variable_value('foo', 'bar')
+['']
+>>> parse(text='&foo bar=1,2 /').get_variable_value('foo', 'bazz')
+['']
+>>> parse(text='&foo bar=1,2 /').get_variable_value('foO', 'Bar')
+['1', '2']
+
+
+
+ +
+
+merge_nl(other, overwrite=False)[source]
+

Merge this namelist object with another.

+

Values in the invoking (self) Namelist will take precedence over +values in the other Namelist, unless overwrite=True is passed in, +in which case other values take precedence.

+
>>> x = parse(text='&foo bar=1 bazz=,2 brat=3/')
+>>> y = parse(text='&foo bar=2 bazz=3*1 baker=4 / &foo2 barter=5 /')
+>>> y.get_value('bazz')
+['1', '1', '1']
+>>> x.merge_nl(y)
+>>> sorted(x.get_group_names())
+['foo', 'foo2']
+>>> sorted(x.get_variable_names('foo'))
+['baker', 'bar', 'bazz', 'brat']
+>>> sorted(x.get_variable_names('foo2'))
+['barter']
+>>> x.get_value('bar')
+['1']
+>>> x.get_value('bazz')
+['1', '2', '1']
+>>> x.get_value('brat')
+['3']
+>>> x.get_value('baker')
+['4']
+>>> x.get_value('barter')
+['5']
+>>> x = parse(text='&foo bar=1 bazz=,2 brat=3/')
+>>> y = parse(text='&foo bar=2 bazz=3*1 baker=4 / &foo2 barter=5 /')
+>>> x.merge_nl(y, overwrite=True)
+>>> sorted(x.get_group_names())
+['foo', 'foo2']
+>>> sorted(x.get_variable_names('foo'))
+['baker', 'bar', 'bazz', 'brat']
+>>> sorted(x.get_variable_names('foo2'))
+['barter']
+>>> x.get_value('bar')
+['2']
+>>> x.get_value('bazz')
+['1', '1', '1']
+>>> x.get_value('brat')
+['3']
+>>> x.get_value('baker')
+['4']
+>>> x.get_value('barter')
+['5']
+
+
+
+ +
+
+set_variable_value(group_name, variable_name, value, var_size=1)[source]
+

Set the value of the specified variable.

+
>>> x = parse(text='&foo bar=1 /')
+>>> x.get_variable_value('foo', 'bar')
+['1']
+>>> x.set_variable_value('foo', 'bar(2)', ['3'], var_size=4)
+>>> x.get_variable_value('foo', 'bar')
+['1', '3']
+>>> x.set_variable_value('foo', 'bar(1)', ['2'])
+>>> x.get_variable_value('foo', 'bar')
+['2', '3']
+>>> x.set_variable_value('foo', 'bar', ['1'])
+>>> x.get_variable_value('foo', 'bar')
+['1', '3']
+>>> x.set_variable_value('foo', 'bazz', ['3'])
+>>> x.set_variable_value('Brack', 'baR', ['4'])
+>>> x.get_variable_value('foo', 'bazz')
+['3']
+>>> x.get_variable_value('brack', 'bar')
+['4']
+>>> x.set_variable_value('foo', 'red(2:6:2)', ['2', '4', '6'], var_size=12)
+>>> x.get_variable_value('foo', 'red')
+['', '2', '', '4', '', '6']
+
+
+
+ +
+
+write(out_file, groups=None, append=False, format_='nml', sorted_groups=True)[source]
+

Write the output data (normally a Fortran namelist) to out_file.

+

As with parse, the out_file argument can be either a file name, or a +file object with a write method that accepts unicode. If specified, +the groups argument specifies a subset of all groups to write out.

+

If out_file is a file name, and append=True is passed in, the +namelist will be appended to the named file instead of overwriting it. +The append option has no effect if a file object is passed in.

+

The format_ option can be either ‘nml’ (namelist) or ‘rc’, and +specifies the file format. Formats other than ‘nml’ may not support all +possible output values.

+
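The ‘nml’ format described above can be sketched with a small, self-contained helper. This is an illustration of the output shape only, not CIME's actual writer; the function name is hypothetical:

```python
def write_nml_text(groups):
    # Hypothetical helper sketching the 'nml' output format described above:
    # each group is rendered as "&name ... /" with one "var = v1, v2" line
    # per variable. An illustration only, not CIME's actual writer.
    lines = []
    for group in sorted(groups):
        lines.append("&{}".format(group))
        for name in sorted(groups[group]):
            lines.append("  {} = {}".format(name, ", ".join(groups[group][name])))
        lines.append("/")
    return "\n".join(lines) + "\n"

print(write_nml_text({"foo": {"bar": ["1", "2"]}}))
# -> &foo
#      bar = 1, 2
#    /
```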
+ +
+
+write_nuopc(out_file, groups=None, sorted_groups=True)[source]
+

Write a nuopc config file to out_file.

+

As with parse, the out_file argument can be either a file name, or a +file object with a write method that accepts unicode. If specified, +the groups argument specifies a subset of all groups to write out.

+
+ +
+ +
+
+CIME.namelist.character_literal_to_string(literal)[source]
+

Convert a Fortran character literal to a Python string.

+

This function assumes (without checking) that literal is a valid literal.

+
>>> character_literal_to_string("'blah'")
+'blah'
+>>> character_literal_to_string('"blah"')
+'blah'
+>>> character_literal_to_string("'don''t'")
+"don't"
+>>> character_literal_to_string('"' + '""Hello!""' + '"')
+'"Hello!"'
+
+
+
+ +
+
+CIME.namelist.compress_literal_list(literals)[source]
+

Uses repetition syntax to shorten a literal list.

+
>>> compress_literal_list([])
+[]
+>>> compress_literal_list(['true'])
+['true']
+>>> compress_literal_list(['1', '2', 'f*', '3', '3', '3', '5'])
+['1', '2', 'f*', '3', '3', '3', '5']
+>>> compress_literal_list(['f*', 'f*'])
+['f*', 'f*']
+
+
+
+ +
+
+CIME.namelist.expand_literal_list(literals)[source]
+

Expands a list of literal values to get rid of repetition syntax.

+
>>> expand_literal_list([])
+[]
+>>> expand_literal_list(['true'])
+['true']
+>>> expand_literal_list(['1', '2', 'f*', '3*3', '5'])
+['1', '2', 'f*', '3', '3', '3', '5']
+>>> expand_literal_list(['2*f*'])
+['f*', 'f*']
+
+
+
+ +
+
+CIME.namelist.fortran_namelist_base_value(string)[source]
+

Strip off whitespace and repetition syntax from a namelist value.

+
>>> fortran_namelist_base_value("")
+''
+>>> fortran_namelist_base_value("f")
+'f'
+>>> fortran_namelist_base_value("6*")
+''
+>>> fortran_namelist_base_value("6*f")
+'f'
+>>> fortran_namelist_base_value(" \n6* \n")
+''
+>>> fortran_namelist_base_value("\n 6*f\n ")
+'f'
+
+
+
+ +
+
+CIME.namelist.get_fortran_name_only(full_var)[source]
+

remove array section if any and return only the variable name

>>> get_fortran_name_only('foo')
'foo'
>>> get_fortran_name_only('foo(3)')
'foo'
>>> get_fortran_name_only('foo(::)')
'foo'
>>> get_fortran_name_only('foo(1::)')
'foo'
>>> get_fortran_name_only('foo(:+2:)')
'foo'
>>> get_fortran_name_only('foo(::-3)')
'foo'
>>> get_fortran_name_only('foo(::)')
'foo'

+
+ +
+
+CIME.namelist.get_fortran_variable_indices(varname, varlen=1, allow_any_len=False)[source]
+

get indices from a fortran namelist variable as a triplet of minindex, maxindex and step

+
>>> get_fortran_variable_indices('foo(3)')
+(3, 3, 1)
+>>> get_fortran_variable_indices('foo(1:2:3)')
+(1, 2, 3)
+>>> get_fortran_variable_indices('foo(::)', varlen=4)
+(1, 4, 1)
+>>> get_fortran_variable_indices('foo(::2)', varlen=4)
+(1, 4, 2)
+>>> get_fortran_variable_indices('foo(::)', allow_any_len=True)
+(1, -1, 1)
+
+
+
+ +
+
+CIME.namelist.is_valid_fortran_name(string)[source]
+

Check that a variable name is allowed in Fortran.

+

The rules are:
1. The name must start with a letter.
2. All characters in a name must be alphanumeric (or underscores).
3. The maximum name length is 63 characters.
4. Only a single array dimension is handled.

+
>>> is_valid_fortran_name("")
+False
+>>> is_valid_fortran_name("a")
+True
+>>> is_valid_fortran_name("A")
+True
+>>> is_valid_fortran_name("A(4)")
+True
+>>> is_valid_fortran_name("A(::)")
+True
+>>> is_valid_fortran_name("A(1:2:3)")
+True
+>>> is_valid_fortran_name("A(1::)")
+True
+>>> is_valid_fortran_name("A(:-2:)")
+True
+>>> is_valid_fortran_name("A(1::+3)")
+True
+>>> is_valid_fortran_name("A(1,3)")
+False
+>>> is_valid_fortran_name("2")
+False
+>>> is_valid_fortran_name("_")
+False
+>>> is_valid_fortran_name("abc#123")
+False
+>>> is_valid_fortran_name("aLiBi_123")
+True
+>>> is_valid_fortran_name("A" * 64)
+False
+>>> is_valid_fortran_name("A" * 63)
+True
+
+
+
+ +
+
+CIME.namelist.is_valid_fortran_namelist_literal(type_, string)[source]
+

Determine whether a literal is valid in a Fortran namelist.

+

Note that kind parameters are not allowed in namelists, which simplifies +this check a bit. Internal whitespace is allowed for complex and character +literals only. BOZ literals and compiler extensions (e.g. backslash escapes) +are not allowed.

+

Null values, however, are allowed for all types. This means that passing in +a string containing nothing but spaces and newlines will always cause +True to be returned. Repetition (e.g. 5*’a’) is also allowed, including +repetition of null values.

+

Detailed rules and examples follow.

+

Integers: Must be a sequence of one or more digits, with an optional sign.

+
>>> is_valid_fortran_namelist_literal("integer", "")
+True
+>>> is_valid_fortran_namelist_literal("integer", " ")
+True
+>>> is_valid_fortran_namelist_literal("integer", "\n")
+True
+>>> is_valid_fortran_namelist_literal("integer", "5*")
+True
+>>> is_valid_fortran_namelist_literal("integer", "1")
+True
+>>> is_valid_fortran_namelist_literal("integer", "5*1")
+True
+>>> is_valid_fortran_namelist_literal("integer", " 5*1")
+True
+>>> is_valid_fortran_namelist_literal("integer", "5* 1")
+False
+>>> is_valid_fortran_namelist_literal("integer", "5 *1")
+False
+>>> is_valid_fortran_namelist_literal("integer", "a")
+False
+>>> is_valid_fortran_namelist_literal("integer", " 1")
+True
+>>> is_valid_fortran_namelist_literal("integer", "1 ")
+True
+>>> is_valid_fortran_namelist_literal("integer", "1 2")
+False
+>>> is_valid_fortran_namelist_literal("integer", "0123456789")
+True
+>>> is_valid_fortran_namelist_literal("integer", "+22")
+True
+>>> is_valid_fortran_namelist_literal("integer", "-26")
+True
+>>> is_valid_fortran_namelist_literal("integer", "2A")
+False
+>>> is_valid_fortran_namelist_literal("integer", "1_8")
+False
+>>> is_valid_fortran_namelist_literal("integer", "2.1")
+False
+>>> is_valid_fortran_namelist_literal("integer", "2e6")
+False
+
+
+

Reals:
- For fixed-point format, there is an optional sign, followed by an integer part, or a decimal point followed by a fractional part, or both.
- Scientific notation is allowed, with an optional, case-insensitive “e” or “d” followed by an optionally-signed integer exponent. (Either the “e”/“d” or a sign must be present to separate the number from the exponent.)
- The (case-insensitive) strings “inf”, “infinity”, and “nan” are allowed. NaN values can also contain additional information in parentheses, e.g. “NaN(x1234ABCD)”.

+
>>> is_valid_fortran_namelist_literal("real", "")
+True
+>>> is_valid_fortran_namelist_literal("real", "a")
+False
+>>> is_valid_fortran_namelist_literal("real", "1")
+True
+>>> is_valid_fortran_namelist_literal("real", " 1")
+True
+>>> is_valid_fortran_namelist_literal("real", "1 ")
+True
+>>> is_valid_fortran_namelist_literal("real", "1 2")
+False
+>>> is_valid_fortran_namelist_literal("real", "+1")
+True
+>>> is_valid_fortran_namelist_literal("real", "-1")
+True
+>>> is_valid_fortran_namelist_literal("real", "1.")
+True
+>>> is_valid_fortran_namelist_literal("real", "1.5")
+True
+>>> is_valid_fortran_namelist_literal("real", ".5")
+True
+>>> is_valid_fortran_namelist_literal("real", "+.5")
+True
+>>> is_valid_fortran_namelist_literal("real", ".")
+False
+>>> is_valid_fortran_namelist_literal("real", "+")
+False
+>>> is_valid_fortran_namelist_literal("real", "1e6")
+True
+>>> is_valid_fortran_namelist_literal("real", "1e-6")
+True
+>>> is_valid_fortran_namelist_literal("real", "1e+6")
+True
+>>> is_valid_fortran_namelist_literal("real", ".5e6")
+True
+>>> is_valid_fortran_namelist_literal("real", "1e")
+False
+>>> is_valid_fortran_namelist_literal("real", "1D6")
+True
+>>> is_valid_fortran_namelist_literal("real", "1q6")
+False
+>>> is_valid_fortran_namelist_literal("real", "1+6")
+True
+>>> is_valid_fortran_namelist_literal("real", "1.6.5")
+False
+>>> is_valid_fortran_namelist_literal("real", "1._8")
+False
+>>> is_valid_fortran_namelist_literal("real", "1,5")
+False
+>>> is_valid_fortran_namelist_literal("real", "inf")
+True
+>>> is_valid_fortran_namelist_literal("real", "INFINITY")
+True
+>>> is_valid_fortran_namelist_literal("real", "NaN")
+True
+>>> is_valid_fortran_namelist_literal("real", "nan(x56)")
+True
+>>> is_valid_fortran_namelist_literal("real", "nan())")
+False
+
+
+

Complex numbers:
- A pair of real numbers enclosed by parentheses, and separated by a comma.
- Any number of spaces or newlines may be placed before or after each real.

+
>>> is_valid_fortran_namelist_literal("complex", "")
+True
+>>> is_valid_fortran_namelist_literal("complex", "()")
+False
+>>> is_valid_fortran_namelist_literal("complex", "(,)")
+False
+>>> is_valid_fortran_namelist_literal("complex", "( ,\n)")
+False
+>>> is_valid_fortran_namelist_literal("complex", "(a,2.)")
+False
+>>> is_valid_fortran_namelist_literal("complex", "(1.,b)")
+False
+>>> is_valid_fortran_namelist_literal("complex", "(1,2)")
+True
+>>> is_valid_fortran_namelist_literal("complex", "(-1.e+06,+2.d-5)")
+True
+>>> is_valid_fortran_namelist_literal("complex", "(inf,NaN)")
+True
+>>> is_valid_fortran_namelist_literal("complex", "(  1. ,  2. )")
+True
+>>> is_valid_fortran_namelist_literal("complex", "( \n \n 1. \n,\n 2.\n)")
+True
+>>> is_valid_fortran_namelist_literal("complex", " (1.,2.)")
+True
+>>> is_valid_fortran_namelist_literal("complex", "(1.,2.) ")
+True
+
+
+

Character sequences (strings):
- Must begin and end with the same delimiter character, either a single quote (apostrophe), or a double quote (quotation mark).
- Whichever character is used as a delimiter must not appear in the string itself, unless it appears in doubled pairs (e.g. ‘’’’ or “’” are the two ways of representing a string containing a single apostrophe).
- Note that newlines cannot be represented in a namelist character literal since they are interpreted as an “end of record”, but they are allowed as long as they don’t come between one of the aforementioned double pairs of characters.

+
>>> is_valid_fortran_namelist_literal("character", "")
+True
+>>> is_valid_fortran_namelist_literal("character", "''")
+True
+>>> is_valid_fortran_namelist_literal("character", " ''")
+True
+>>> is_valid_fortran_namelist_literal("character", "'\n'")
+True
+>>> is_valid_fortran_namelist_literal("character", "''\n''")
+False
+>>> is_valid_fortran_namelist_literal("character", "'''")
+False
+>>> is_valid_fortran_namelist_literal("character", "''''")
+True
+>>> is_valid_fortran_namelist_literal("character", "'''Cookie'''")
+True
+>>> is_valid_fortran_namelist_literal("character", "'''Cookie''")
+False
+>>> is_valid_fortran_namelist_literal("character", "'\"'")
+True
+>>> is_valid_fortran_namelist_literal("character", "'\"\"'")
+True
+>>> is_valid_fortran_namelist_literal("character", '""')
+True
+>>> is_valid_fortran_namelist_literal("character", '"" ')
+True
+>>> is_valid_fortran_namelist_literal("character", '"\n"')
+True
+>>> is_valid_fortran_namelist_literal("character", '""\n""')
+False
+>>> is_valid_fortran_namelist_literal("character", '""' + '"')
+False
+>>> is_valid_fortran_namelist_literal("character", '""' + '""')
+True
+>>> is_valid_fortran_namelist_literal("character", '"' + '""Cookie""' + '"')
+True
+>>> is_valid_fortran_namelist_literal("character", '""Cookie""' + '"')
+False
+>>> is_valid_fortran_namelist_literal("character", '"\'"')
+True
+>>> is_valid_fortran_namelist_literal("character", '"\'\'"')
+True
+
+
+

Logicals:
- Must contain a (case-insensitive) “t” or “f”.
- This must be either the first nonblank character, or the second following a period.
- The rest of the string is ignored, but cannot contain a comma, newline, equals sign, slash, or space (except that trailing spaces are allowed and ignored).

+
>>> is_valid_fortran_namelist_literal("logical", "")
+True
+>>> is_valid_fortran_namelist_literal("logical", "t")
+True
+>>> is_valid_fortran_namelist_literal("logical", "F")
+True
+>>> is_valid_fortran_namelist_literal("logical", ".T")
+True
+>>> is_valid_fortran_namelist_literal("logical", ".f")
+True
+>>> is_valid_fortran_namelist_literal("logical", " f")
+True
+>>> is_valid_fortran_namelist_literal("logical", " .t")
+True
+>>> is_valid_fortran_namelist_literal("logical", "at")
+False
+>>> is_valid_fortran_namelist_literal("logical", ".TRUE.")
+True
+>>> is_valid_fortran_namelist_literal("logical", ".false.")
+True
+>>> is_valid_fortran_namelist_literal("logical", ".TEXAS$")
+True
+>>> is_valid_fortran_namelist_literal("logical", ".f=")
+False
+>>> is_valid_fortran_namelist_literal("logical", ".f/1")
+False
+>>> is_valid_fortran_namelist_literal("logical", ".t\nted")
+False
+>>> is_valid_fortran_namelist_literal("logical", ".Fant astic")
+False
+>>> is_valid_fortran_namelist_literal("logical", ".t2 ")
+True
+
+
+
+ +
+
+CIME.namelist.literal_to_python_value(literal, type_=None)[source]
+

Convert a Fortran literal string to a Python value.

+

This function assumes that the input contains a single value, i.e. +repetition syntax is not used. The type can be specified by passing a string +as the type_ argument, or if this option is not provided, this function +will attempt to autodetect the variable type.

+

Note that it is not possible to be certain whether a literal like “123” is +intended to represent an integer or a floating-point value, however, nor can +we be certain of the precision that will be used to hold this value in +actual Fortran code. We also cannot use the optional information in a NaN +float, so this will cause the function to throw an error if that information +is present (e.g. a string like “NAN(1234)” will cause an error).

+

The Python type of the return value is as follows for different type_ arguments:
“character” - str
“complex” - complex
“integer” - int
“logical” - bool
“real” - float

+

If a null value is input (i.e. the empty string), None will be returned.

+
>>> literal_to_python_value("'She''s a winner!'")
+"She's a winner!"
+>>> literal_to_python_value("1")
+1
+>>> literal_to_python_value("1.")
+1.0
+>>> literal_to_python_value(" (\n 1. , 2. )\n ")
+(1+2j)
+>>> literal_to_python_value(".true.")
+True
+>>> literal_to_python_value("Fortune")
+False
+>>> literal_to_python_value("bacon") 
+Traceback (most recent call last):
+...
+CIMEError: ERROR: 'bacon' is not a valid literal for any Fortran type.
+>>> literal_to_python_value("1", type_="real")
+1.0
+>>> literal_to_python_value("bacon", type_="logical") 
+Traceback (most recent call last):
+...
+CIMEError: ERROR: 'bacon' is not a valid literal of type 'logical'.
+>>> literal_to_python_value("1", type_="booga") 
+Traceback (most recent call last):
+...
+CIMEError: ERROR: Invalid Fortran type for a namelist: 'booga'
+>>> literal_to_python_value("2*1") 
+Traceback (most recent call last):
+...
+CIMEError: ERROR: Cannot use repetition syntax in literal_to_python_value
+>>> literal_to_python_value("")
+>>> literal_to_python_value("-1.D+10")
+-10000000000.0
+>>> shouldRaise(ValueError, literal_to_python_value, "nan(1234)")
+
+
+
+ +
+
+CIME.namelist.merge_literal_lists(default, overwrite)[source]
+

Merge two lists of literal value strings.

+

The overwrite values have higher precedence, so will overwrite the +default values. However, if overwrite contains null values, or is +shorter than default (and thus implicitly ends in null values), the +elements of default will be used where overwrite is null.

+
>>> merge_literal_lists([], [])
+[]
+>>> merge_literal_lists(['true'], ['false'])
+['false']
+>>> merge_literal_lists([], ['false'])
+['false']
+>>> merge_literal_lists(['true'], [''])
+['true']
+>>> merge_literal_lists([], [''])
+['']
+>>> merge_literal_lists(['true'], [])
+['true']
+>>> merge_literal_lists(['true'], [])
+['true']
+>>> merge_literal_lists(['3*false', '3*true'], ['true', '4*', 'false'])
+['true', 'false', 'false', 'true', 'true', 'false']
+
+
+
+ +
+
+CIME.namelist.parse(in_file=None, text=None, groupless=False, convert_tab_to_space=True)[source]
+

Parse a Fortran namelist.

+
+

The in_file argument must be either a str or unicode object containing +a file name, or a text I/O object with a read method that returns the text +of the namelist.

+

Alternatively, the text argument can be provided, in which case it must be +the text of the namelist itself.

+

The groupless argument changes namelist parsing in two ways:

+
1. parse allows an alternate file format where no group names or slashes are present. In effect, the file is parsed as if an invisible, arbitrary group name was prepended, and an invisible slash was appended. However, if any group names actually are present, the file is parsed normally.
2. The return value of this function is not a Namelist object. Instead a single, flattened dictionary of name-value pairs is returned.
+

The convert_tab_to_space option can be used to force all tabs in the file to be converted to spaces, and is on by default. Note that this will usually allow files that use tabs as whitespace to be parsed. However, the implementation of this option is crude; it converts all tabs in the file, including those in character literals. (Note that there are many characters that cannot be passed in via namelist in any standard way, including '\n', so it is already a bad idea to assume that the namelist will preserve whitespace in strings, aside from simple spaces.)

+

The return value, if groupless=False, is a Namelist object.

+

All names and values returned are ultimately unicode strings. E.g. a value +of “6*2” is returned as that string; it is not converted to 6 copies of the +Python integer 2. Null values are returned as the empty string (“”).

+
+
+
+ +
+
+CIME.namelist.shouldRaise(eclass, method, *args, **kw)[source]
+

A helper function to make doctests py3 compatible +http://python3porting.com/problems.html#running-doctests

+
+ +
+
+CIME.namelist.string_to_character_literal(string)[source]
+

Convert a Python string to a Fortran character literal.

+

This function always uses double quotes (”) as the delimiter.

+
>>> string_to_character_literal('blah')
+'"blah"'
+>>> string_to_character_literal("'blah'")
+'"\'blah\'"'
+>>> string_to_character_literal('She said "Hi!".')
+'"She said ""Hi!""."'
+
+
+
+ +
+
+

CIME.nmlgen module

+

Class for generating component namelists.

+
+
+class CIME.nmlgen.NamelistGenerator(case, definition_files, files=None)[source]
+

Bases: object

+

Utility class for generating namelists for a given component.

+
+
+add_default(name, value=None, ignore_abs_path=None)[source]
+

Add a value for the specified variable to the namelist.

+

If the specified variable is already defined in the object, the existing +value is preserved. Otherwise, the value argument, if provided, will +be used to set the value. If no such value is found, the defaults file +will be consulted. If null values are present in any of the above, the +result will be a merged array of values.

+

If no value for the variable is found via any of the above, this method +will raise an exception.

+
+ +
+
+add_defaults_for_group(group)[source]
+

Call add_default for namelist variables in the given group

+

This still skips variables that have attributes of skip_default_entry or +per_stream_entry.

+

This must be called after init_defaults. It is often paired with use of +skip_default_for_groups in the init_defaults call.

+
+ +
+
+add_nmlcontents(filename, group, append=True, format_='nmlcontents', sorted_groups=True)[source]
+

Write only the contents of the nml group.

+
+ +
+
+clean_streams()[source]
+
+ +
+
+confirm_group_is_empty(group_name, errmsg)[source]
+

Confirms that no values have been added to the given group

+

If any values HAVE been added to this group, aborts with the given error message.

+

This is often paired with use of skip_default_for_groups in the init_defaults call +and add_defaults_for_group, as in:

+
+
+
if nmlgen.get_value("enable_frac_overrides") == ".true.":
    nmlgen.add_defaults_for_group("glc_override_nml")
else:
    nmlgen.confirm_empty("glc_override_nml", "some message")

+
+
+
+

Args: +group_name: string - name of namelist group +errmsg: string - error message to print if group is not empty

+
+ +
+
+create_shr_strdata_nml()[source]
+

Set defaults for shr_strdata_nml variables other than the variable domainfile

+
+ +
+
+create_stream_file_and_update_shr_strdata_nml(config, caseroot, stream, stream_path, data_list_path)[source]
+

Write the pseudo-XML file corresponding to a given stream.

+

Arguments:
config - Used to look up namelist defaults. This is used in addition to the config used to construct the namelist generator. The main reason to supply additional configuration options here is to specify stream-specific settings.
stream - Name of the stream.
stream_path - Path to write the stream file to.
data_list_path - Path of file to append input data information to.

+
+ +
+
+get_default(name, config=None, allow_none=False)[source]
+

Get the value of a variable from the namelist definition file.

+

The config argument is passed through to the underlying +NamelistDefaults.get_value call as the attribute argument.

+

The return value of this function is a list of values that were found in +the defaults file. If there is no matching default, this function +returns None if allow_none=True is passed, otherwise an error is +raised.

+

Note that we perform some translation of the values, since there are a few differences between Fortran namelist literals and values in the defaults file:
1. In the defaults file, whitespace is ignored except within strings, so the output of this function strips out most whitespace. (This implies that commas are the only way to separate array elements in the defaults file.)
2. In the defaults file, quotes around character literals (strings) are optional, as long as the literal does not contain whitespace, commas, or (single or double) quotes. If a setting for a character variable does not seem to have quotes (and is not a null value), this function will add them.
3. Default values may refer to variables in a case’s env_*.xml files. This function replaces references of the form $VAR or ${VAR} with the value of the variable VAR in an env file, if that variable exists. This behavior is suppressed within single-quoted strings (similar to parameter expansion in shell scripts).
+
+ +
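The $VAR / ${VAR} replacement described above can be sketched as follows. This is an illustrative re-implementation, not the CIME code: the real lookup consults the case's env_*.xml files, and the suppression inside single-quoted strings is omitted here; the function name expand_env_refs is hypothetical.

```python
import re

def expand_env_refs(value, env):
    # Replace $VAR or ${VAR} with env[VAR] when VAR is defined;
    # unknown references are left untouched. Illustrative sketch only:
    # CIME reads the case's env_*.xml files rather than a dict, and
    # also skips expansion inside single-quoted strings.
    def repl(match):
        name = match.group(1) or match.group(2)
        return env.get(name, match.group(0))
    return re.sub(r"\$\{(\w+)\}|\$(\w+)", repl, value)
```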
+
+get_group_variables(group_name)[source]
+
+ +
+
+get_streams()[source]
+

Get a list of all streams used for the current data model mode.

+
+ +
+
+get_value(name)[source]
+

Get the current value of a given namelist variable.

+

Note that the return value of this function is always a string or a list of strings. E.g. the scalar logical value .false. will be returned as ".false.", while an array of two .false. values will be returned as [".false.", ".false."]. Whether or not a value is scalar is determined by checking the array size in the namelist definition file.

+

Null values are converted to None, and repeated values are expanded, +e.g. [‘2*3’] is converted to [‘3’, ‘3’, ‘3’].

+

For character variables, the value is converted to a Python string (e.g. +quotation marks are removed).

+

All other literals are returned as the raw string values that will be written to the namelist.

+
+ +
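The repeated-value expansion mentioned above follows Fortran's n*value shorthand. A minimal sketch of the idea (illustrative only, not the CIME implementation; the function name expand_repeats is hypothetical):

```python
import re

def expand_repeats(values):
    # Expand Fortran-style repeat shorthand: 'n*v' becomes n copies of 'v'.
    # Illustrative sketch of the expansion described above.
    out = []
    for val in values:
        match = re.match(r"^(\d+)\*(.*)$", val)
        if match:
            out.extend([match.group(2)] * int(match.group(1)))
        else:
            out.append(val)
    return out
```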
+
+init_defaults(infiles, config, skip_groups=None, skip_entry_loop=False, skip_default_for_groups=None, set_group_name=None)[source]
+

Return array of names of all definition nodes

+

infiles should be a list of file paths, each one giving namelist settings that take precedence over the default values. Often there will be only one file in this list. If there are multiple files, earlier files take precedence over later files.

+

If skip_default_for_groups is provided, it should be a list of namelist group names; the add_default call will not be done for any variables in these groups. This is often paired with later conditional calls to add_defaults_for_group.

+
+ +
+
+new_instance()[source]
+

Clean the object just enough to introduce a new instance

+
+ +
+
+static quote_string(string)[source]
+

Convert a string to a quoted Fortran literal.

+

Does nothing if the string appears to be quoted already.

+
+ +
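The described behavior can be sketched as below, assuming Fortran's convention of doubling embedded single quotes. This is an illustration, not the CIME implementation, and quote_if_needed is a hypothetical name:

```python
def quote_if_needed(s):
    # Leave the string alone if it already looks like a quoted literal.
    if len(s) >= 2 and s[0] == s[-1] and s[0] in ("'", '"'):
        return s
    # Otherwise wrap in single quotes, doubling any embedded single
    # quotes per Fortran escaping rules.
    return "'" + s.replace("'", "''") + "'"
```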
+
+rename_group(group, newgroup)[source]
+

Pass through to namelist definition

+
+ +
+
+set_abs_file_path(file_path)[source]
+

If file_path is relative, make it absolute using DIN_LOC_ROOT.

+

If an absolute path is input, it is returned unchanged.

+
+ +
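A sketch of the behavior described above, with din_loc_root standing in for the case's DIN_LOC_ROOT value (illustrative only; make_abs_file_path is a hypothetical name):

```python
import os.path

def make_abs_file_path(file_path, din_loc_root):
    # Absolute paths pass through unchanged; relative paths are
    # resolved against DIN_LOC_ROOT.
    if os.path.isabs(file_path):
        return file_path
    return os.path.join(din_loc_root, file_path)
```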
+
+set_value(name, value)[source]
+

Set the current value of a given namelist variable.

+

Usually, you should use add_default instead of this function.

+

The name argument is the name of the variable to set, and the value is a list of strings to use as settings. If the variable is scalar, the list is optional; i.e. a scalar logical can be set using either value='.false.' or value=['.false.']. If the variable is of type character, and the input is missing quotes, quotes will be added automatically. If None is provided in place of a string, this will be translated to a null value.

+

Note that this function will overwrite the current value, which may hold a user-specified setting. Even if value is (or contains) a null value, the old setting for the variable will be thrown out completely.

+
+ +
+
+update_shr_strdata_nml(config, stream, stream_path)[source]
+

Updates values for the shr_strdata_nml namelist group.

+

This should be done once per stream; it shouldn't usually be called directly, since create_stream_file calls this method itself.

+
+ +
+
+write_modelio_file(filename)[source]
+

Write mct component modelio files

+
+ +
+
+write_nuopc_config_file(filename, data_list_path=None, sorted_groups=False)[source]
+

Write the nuopc config file

+
+ +
+
+write_nuopc_modelio_file(filename)[source]
+

Write nuopc component modelio files

+
+ +
+
+write_output_file(namelist_file, data_list_path=None, groups=None, sorted_groups=True)[source]
+

Write out the namelists and input data files.

+

The namelist_file and modelio_file are the locations to which the component and modelio namelists will be written, respectively. The data_list_path argument is the location of the *.input_data_list file, which will have the input data files added to it.

+
+ +
+
+write_seq_maps(filename)[source]
+

Write mct out seq_maps.rc

+
+ +
+ +
+
+

CIME.provenance module

+

Library for saving build/run provenance.

+
+ +
+ +
+
+CIME.provenance.get_test_success(baseline_root, src_root, test, testing=False)[source]
+

Returns (was prev run success, commit when test last passed, commit when test last transitioned from pass to fail)

+

Unknown history is expressed as None

+
+ +
+
+CIME.provenance.save_test_success(baseline_root, src_root, test, succeeded, force_commit_test=None)[source]
+

Update success data accordingly based on succeeded flag

+
+ +
+
+CIME.provenance.save_test_time(baseline_root, test, time_seconds, commit)[source]
+
+ +
+
+

CIME.simple_compare module

+
+
+CIME.simple_compare.compare_files(gold_file, compare_file, case=None)[source]
+

Returns true if files are the same, comments are returned too: +(success, comments)

+
+ +
+
+CIME.simple_compare.compare_runconfigfiles(gold_file, compare_file, case=None)[source]
+

Returns true if files are the same, comments are returned too: +(success, comments)

+
+ +
+
+CIME.simple_compare.findDiff(d1, d2, path='', case=None)[source]
+
+ +
+
+

CIME.test_scheduler module

+

A library for scheduling/running through the phases of a set of system tests. Supports phase-level parallelism (can make progress on multiple system tests at once).

+

TestScheduler will handle the TestStatus for the one-time setup phases. All other phases need to handle their own status because they can be run outside the context of TestScheduler.

+
+
+class CIME.test_scheduler.TestScheduler(test_names, test_data=None, no_run=False, no_build=False, no_setup=False, no_batch=None, test_root=None, test_id=None, machine_name=None, compiler=None, baseline_root=None, baseline_cmp_name=None, baseline_gen_name=None, clean=False, namelists_only=False, project=None, parallel_jobs=None, walltime=None, proc_pool=None, use_existing=False, save_timing=False, queue=None, allow_baseline_overwrite=False, output_root=None, force_procs=None, force_threads=None, mpilib=None, input_dir=None, pesfile=None, run_count=0, mail_user=None, mail_type=None, allow_pnl=False, non_local=False, single_exe=False, workflow=None, chksum=False, force_rebuild=False)[source]
+

Bases: object

+
+
+get_testnames()[source]
+
+ +
+
+run_tests(wait=False, check_throughput=False, check_memory=False, ignore_namelists=False, ignore_memleak=False)[source]
+

Main API for this class.

+

Return True if all tests passed.

+
+ +
+ +
+
+

CIME.test_status module

+

Contains the crucial TestStatus class, which manages the phase-state of a test case and ensures that this state is represented by the TestStatus file in the case.

+

TestStatus objects are only modifiable via the set_status method, and this is only allowed if the object is being accessed within the context of a context manager. Example:

+
+
+
with TestStatus(test_dir=caseroot) as ts:
    ts.set_status(RUN_PHASE, TEST_PASS_STATUS)

+
+
+
+

This file also contains all of the hardcoded phase information, which includes the phase names, phase orders, potential phase states, and which phases are required (core phases).

+

Additional important design decisions:
1) In order to ensure that incomplete tests are always left in a PEND state, updating a core phase to a PASS state will automatically set the next core state to PEND.
2) If the user repeats a core state, that invalidates all subsequent state. For example, if a user rebuilds their case, then any of the post-run states like the RUN state are no longer valid.
+
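The PEND-chaining rule in design decision (1) can be sketched as follows. The phase list is assumed from the doctest strings in this module and may not match the authoritative ALL_PHASES ordering; this is an illustration, not the CIME code:

```python
# Assumed core-phase ordering, taken from the doctests in this module.
CORE_PHASES = ["CREATE_NEWCASE", "XML", "SETUP", "SHAREDLIB_BUILD", "MODEL_BUILD", "RUN"]

def mark_core_phase_pass(statuses, phase):
    # Marking a core phase PASS automatically pends the next core phase,
    # so an interrupted test is never left looking complete.
    statuses[phase] = "PASS"
    idx = CORE_PHASES.index(phase)
    if idx + 1 < len(CORE_PHASES):
        statuses[CORE_PHASES[idx + 1]] = "PEND"
    return statuses
```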
+
+class CIME.test_status.TestStatus(test_dir=None, test_name=None, no_io=False)[source]
+

Bases: object

+
+
+current_is(phase, status)[source]
+
+ +
+
+flush()[source]
+
+ +
+
+get_comment(phase)[source]
+
+ +
+
+get_latest_phase()[source]
+
+ +
+
+get_name()[source]
+
+ +
+
+get_overall_test_status(wait_for_run=False, check_throughput=False, check_memory=False, ignore_namelists=False, ignore_memleak=False, no_run=False)[source]
+

Given the current phases and statuses, produce a single result for this test. Preference is given to PEND, since we don't want to stop waiting for a test that hasn't finished. Namelist diffs are given the lowest precedence.

+
>>> _test_helper2('PASS ERS.foo.A RUN')
+('PASS', 'RUN')
+>>> _test_helper2('PASS ERS.foo.A SHAREDLIB_BUILD\nPEND ERS.foo.A RUN')
+('PEND', 'RUN')
+>>> _test_helper2('FAIL ERS.foo.A MODEL_BUILD\nPEND ERS.foo.A RUN')
+('FAIL', 'MODEL_BUILD')
+>>> _test_helper2('PASS ERS.foo.A MODEL_BUILD\nPASS ERS.foo.A RUN')
+('PASS', 'RUN')
+>>> _test_helper2('PASS ERS.foo.A RUN\nFAIL ERS.foo.A TPUTCOMP')
+('PASS', 'RUN')
+>>> _test_helper2('PASS ERS.foo.A RUN\nFAIL ERS.foo.A TPUTCOMP', check_throughput=True)
+('DIFF', 'TPUTCOMP')
+>>> _test_helper2('PASS ERS.foo.A RUN\nFAIL ERS.foo.A MEMCOMP', check_memory=True)
+('DIFF', 'MEMCOMP')
+>>> _test_helper2('PASS ERS.foo.A MODEL_BUILD\nPASS ERS.foo.A RUN\nFAIL ERS.foo.A NLCOMP')
+('NLFAIL', 'RUN')
+>>> _test_helper2('PASS ERS.foo.A MODEL_BUILD\nPEND ERS.foo.A RUN\nFAIL ERS.foo.A NLCOMP')
+('PEND', 'RUN')
+>>> _test_helper2('PASS ERS.foo.A RUN\nFAIL ERS.foo.A MEMCOMP')
+('PASS', 'RUN')
+>>> _test_helper2('PASS ERS.foo.A RUN\nFAIL ERS.foo.A NLCOMP', ignore_namelists=True)
+('PASS', 'RUN')
+>>> _test_helper2('PASS ERS.foo.A COMPARE_1\nFAIL ERS.foo.A NLCOMP\nFAIL ERS.foo.A COMPARE_2\nPASS ERS.foo.A RUN')
+('FAIL', 'COMPARE_2')
+>>> _test_helper2('FAIL ERS.foo.A BASELINE\nFAIL ERS.foo.A NLCOMP\nPASS ERS.foo.A COMPARE_2\nPASS ERS.foo.A RUN')
+('DIFF', 'BASELINE')
+>>> _test_helper2('FAIL ERS.foo.A BASELINE\nFAIL ERS.foo.A NLCOMP\nFAIL ERS.foo.A COMPARE_2\nPASS ERS.foo.A RUN')
+('FAIL', 'COMPARE_2')
+>>> _test_helper2('PEND ERS.foo.A COMPARE_2\nFAIL ERS.foo.A RUN')
+('FAIL', 'RUN')
+>>> _test_helper2('PEND ERS.foo.A COMPARE_2\nPASS ERS.foo.A RUN')
+('PEND', 'COMPARE_2')
+>>> _test_helper2('PASS ERS.foo.A MODEL_BUILD')
+('PASS', 'MODEL_BUILD')
+>>> _test_helper2('PEND ERS.foo.A MODEL_BUILD\nPEND ERS.foo.A RUN')
+('PEND', 'MODEL_BUILD')
+>>> _test_helper2('PASS ERS.foo.A MODEL_BUILD', wait_for_run=True)
+('PEND', 'RUN')
+>>> _test_helper2('FAIL ERS.foo.A MODEL_BUILD', wait_for_run=True)
+('FAIL', 'MODEL_BUILD')
+>>> _test_helper2('PASS ERS.foo.A MODEL_BUILD\nPEND ERS.foo.A RUN', wait_for_run=True)
+('PEND', 'RUN')
+>>> _test_helper2('PASS ERS.foo.A MODEL_BUILD\nFAIL ERS.foo.A RUN', wait_for_run=True)
+('FAIL', 'RUN')
+>>> _test_helper2('PASS ERS.foo.A MODEL_BUILD\nPASS ERS.foo.A RUN', wait_for_run=True)
+('PASS', 'RUN')
+>>> _test_helper2('PASS ERS.foo.A MODEL_BUILD\nFAIL ERS.foo.A RUN\nPEND ERS.foo.A COMPARE')
+('FAIL', 'RUN')
+>>> _test_helper2('PASS ERS.foo.A MODEL_BUILD\nPEND ERS.foo.A RUN', no_run=True)
+('PASS', 'MODEL_BUILD')
+>>> s = '''PASS ERS.foo.A CREATE_NEWCASE
+... PASS ERS.foo.A XML
+... PASS ERS.foo.A SETUP
+... PASS ERS.foo.A SHAREDLIB_BUILD time=454
+... PASS ERS.foo.A NLCOMP
+... PASS ERS.foo.A MODEL_BUILD time=363
+... PASS ERS.foo.A SUBMIT
+... PASS ERS.foo.A RUN time=73
+... PEND ERS.foo.A COMPARE_base_single_thread
+... FAIL ERS.foo.A BASELINE master: DIFF
+... PASS ERS.foo.A TPUTCOMP
+... PASS ERS.foo.A MEMLEAK insuffiencient data for memleak test
+... PASS ERS.foo.A SHORT_TERM_ARCHIVER
+... '''
+>>> _test_helper2(s, no_perm=True)
+('PEND', 'COMPARE_base_single_thread')
+>>> s = '''PASS ERS.foo.A CREATE_NEWCASE
+... PASS ERS.foo.A XML
+... PASS ERS.foo.A SETUP
+... PEND ERS.foo.A SHAREDLIB_BUILD
+... FAIL ERS.foo.A NLCOMP
+... '''
+>>> _test_helper2(s, no_run=True)
+('NLFAIL', 'SETUP')
+>>> _test_helper2(s, no_run=False)
+('PEND', 'SHAREDLIB_BUILD')
+
+
+
+ +
+
+get_status(phase)[source]
+
+ +
+
+increment_non_pass_counts(non_pass_counts)[source]
+

Increment counts of the number of times given phases did not pass

+

non_pass_counts is a dictionary whose keys are phases of interest and whose values are running counts of the number of non-passes. This method increments those counts based on results in the given TestStatus object.

+
+ +
+
+phase_statuses_dump(prefix='', skip_passes=False, skip_phase_list=None, xfails=None)[source]
+
+
Args:
prefix: string printed at the start of each line
skip_passes: if True, do not output lines that have a PASS status
skip_phase_list: list of phases (from the phases given by ALL_PHASES) for which we skip output
xfails: object of type ExpectedFails, giving expected failures for this test

+
+
+
+ +
+
+set_status(phase, status, comments='')[source]
+

Update the status of this test by changing the status of the given phase to the given status.

+
>>> with TestStatus(test_dir="/", test_name="ERS.foo.A", no_io=True) as ts:
+...     ts.set_status(CREATE_NEWCASE_PHASE, "PASS")
+...     ts.set_status(XML_PHASE, "PASS")
+...     ts.set_status(SETUP_PHASE, "FAIL")
+...     ts.set_status(SETUP_PHASE, "PASS")
+...     ts.set_status("{}_base_rest".format(COMPARE_PHASE), "FAIL")
+...     ts.set_status(SHAREDLIB_BUILD_PHASE, "PASS", comments='Time=42')
+>>> ts._phase_statuses
+OrderedDict([('CREATE_NEWCASE', ('PASS', '')), ('XML', ('PASS', '')), ('SETUP', ('PASS', '')), ('SHAREDLIB_BUILD', ('PASS', 'Time=42')), ('COMPARE_base_rest', ('FAIL', '')), ('MODEL_BUILD', ('PEND', ''))])
+
+
+
>>> with TestStatus(test_dir="/", test_name="ERS.foo.A", no_io=True) as ts:
+...     ts.set_status(CREATE_NEWCASE_PHASE, "PASS")
+...     ts.set_status(XML_PHASE, "PASS")
+...     ts.set_status(SETUP_PHASE, "FAIL")
+...     ts.set_status(SETUP_PHASE, "PASS")
+...     ts.set_status(BASELINE_PHASE, "PASS")
+...     ts.set_status("{}_base_rest".format(COMPARE_PHASE), "FAIL")
+...     ts.set_status(SHAREDLIB_BUILD_PHASE, "PASS", comments='Time=42')
+...     ts.set_status(SETUP_PHASE, "PASS")
+>>> ts._phase_statuses
+OrderedDict([('CREATE_NEWCASE', ('PASS', '')), ('XML', ('PASS', '')), ('SETUP', ('PASS', '')), ('SHAREDLIB_BUILD', ('PEND', ''))])
+
+
+
>>> with TestStatus(test_dir="/", test_name="ERS.foo.A", no_io=True) as ts:
+...     ts.set_status(CREATE_NEWCASE_PHASE, "FAIL")
+>>> ts._phase_statuses
+OrderedDict([('CREATE_NEWCASE', ('FAIL', ''))])
+
+
+
+ +
+ +
+
+

CIME.test_utils module

+

Utility functions used in test_scheduler.py, and by other utilities that need to get test lists.

+
+
+CIME.test_utils.get_test_status_files(test_root, compiler, test_id=None)[source]
+
+ +
+
+CIME.test_utils.get_tests_from_xml(xml_machine=None, xml_category=None, xml_compiler=None, xml_testlist=None, machine=None, compiler=None, driver=None)[source]
+

Parse testlists for a list of tests

+
+ +
+
+CIME.test_utils.test_to_string(test, category_field_width=0, test_field_width=0, show_options=False)[source]
+

Given a test dictionary, return a string representation suitable for printing

+
+
Args:
test (dict): dictionary for a single test - e.g., one element from the list returned by get_tests_from_xml
category_field_width (int): minimum amount of space to use for printing the test category
test_field_width (int): minimum amount of space to use for printing the test name
show_options (bool): if True, print test options, too (note that the 'comment' option is always printed, if present)

+
+
+
+

Basic functionality:
>>> mytest = {'name': 'SMS.f19_g16.A.cheyenne_intel', 'category': 'prealpha', 'options': {}}
>>> test_to_string(mytest, 10)
'prealpha : SMS.f19_g16.A.cheyenne_intel'

Printing comments:
>>> mytest = {'name': 'SMS.f19_g16.A.cheyenne_intel', 'category': 'prealpha', 'options': {'comment': 'my remarks'}}
>>> test_to_string(mytest, 10)
'prealpha : SMS.f19_g16.A.cheyenne_intel # my remarks'

Newlines in comments are converted to spaces:
>>> mytest = {'name': 'SMS.f19_g16.A.cheyenne_intel', 'category': 'prealpha', 'options': {'comment': 'my\nremarks'}}
>>> test_to_string(mytest, 10)
'prealpha : SMS.f19_g16.A.cheyenne_intel # my remarks'

Printing other options, too:
>>> mytest = {'name': 'SMS.f19_g16.A.cheyenne_intel', 'category': 'prealpha', 'options': {'comment': 'my remarks', 'wallclock': '0:20', 'memleak_tolerance': 0.2}}
>>> test_to_string(mytest, 10, show_options=True)
'prealpha : SMS.f19_g16.A.cheyenne_intel # my remarks # memleak_tolerance: 0.2 # wallclock: 0:20'

+
+ +
+
+

CIME.user_mod_support module

+

user_mod_support.py

+
+
+CIME.user_mod_support.apply_user_mods(caseroot, user_mods_path, keepexe=None)[source]
+

Recursively apply user_mods to caseroot - this includes updating user_nl_xxx, updating SourceMods and creating case shell_commands and xmlchange_cmds files.

+

First remove case shell_commands files if any already exist

+

If this function is called multiple times, settings from later calls will take precedence over earlier calls, if there are conflicts.

+

keepexe is an optional argument that is needed for cases where apply_user_mods is called from create_clone.

+
+ +
+
+CIME.user_mod_support.build_include_dirs_list(user_mods_path, include_dirs=None)[source]
+

If user_mods_path has a file "include_user_mods", read that file and add directories to include_dirs, recursively checking each of those directories for further directories. The file may also include comments delineated with # in the first column.

+
+ +
+
+

CIME.utils module

+

Common functions used by cime python scripts.
Warning: you cannot use CIME Classes in this module as it causes circular dependencies.

+
+
+exception CIME.utils.CIMEError[source]
+

Bases: SystemExit, Exception

+
+ +
+
+class CIME.utils.EnvironmentContext(**kwargs)[source]
+

Bases: object

+

Context manager for environment variables.
Usage:

os.environ['MYVAR'] = 'oldvalue'
with EnvironmentContext(MYVAR='myvalue', MYVAR2='myvalue2'):
    print(os.getenv('MYVAR'))   # Should print myvalue.
    print(os.getenv('MYVAR2'))  # Should print myvalue2.
print(os.getenv('MYVAR'))   # Should print oldvalue.
print(os.getenv('MYVAR2'))  # Should print None.

CREDIT: https://github.com/sakurai-youhei/envcontext

+
+ +
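A minimal re-implementation of the pattern, in the spirit of the class described above (a sketch, not the CIME code; EnvContext is a hypothetical name):

```python
import os

class EnvContext:
    # Set environment variables on entry; restore (or remove) them on exit.
    def __init__(self, **kwargs):
        self._new = kwargs
        self._old = {}

    def __enter__(self):
        for key, val in self._new.items():
            self._old[key] = os.environ.get(key)
            os.environ[key] = val
        return self

    def __exit__(self, *exc):
        for key, old in self._old.items():
            if old is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = old
```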
+
+class CIME.utils.IndentFormatter(indent, fmt=None, datefmt=None)[source]
+

Bases: Formatter

+
+
+format(record)[source]
+

Format the specified record as text.

+

The record's attribute dictionary is used as the operand to a string formatting operation which yields the returned string. Before formatting the dictionary, a couple of preparatory steps are carried out. The message attribute of the record is computed using LogRecord.getMessage(). If the formatting string uses the time (as determined by a call to usesTime()), formatTime() is called to format the event time. If there is exception information, it is formatted using formatException() and appended to the message.

+
+ +
+ +
+
+class CIME.utils.SharedArea(new_perms=2)[source]
+

Bases: object

+

Enable 0002 umask within this manager

+
+ +
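The umask dance can be sketched as below (illustrative, not the CIME implementation; UmaskContext is a hypothetical name, and the real class's new_perms default is the integer 2, i.e. 0o002):

```python
import os

class UmaskContext:
    # Apply a umask inside the with-block and restore the previous one on exit.
    def __init__(self, new_perms=0o002):
        self._new = new_perms
        self._old = None

    def __enter__(self):
        self._old = os.umask(self._new)

    def __exit__(self, *exc):
        os.umask(self._old)
```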
+
+class CIME.utils.Timeout(seconds, action=None)[source]
+

Bases: object

+

A context manager that implements a timeout. By default, it will raise an exception, but a custom function call can be provided. Providing None as seconds makes this class a no-op.

+
+ +
+
+CIME.utils.add_flag_to_cmd(flag, val)[source]
+

Given a flag and value for a shell command, return a string

+
>>> add_flag_to_cmd("-f", "hi")
+'-f hi'
+>>> add_flag_to_cmd("--foo", 42)
+'--foo 42'
+>>> add_flag_to_cmd("--foo=", 42)
+'--foo=42'
+>>> add_flag_to_cmd("--foo:", 42)
+'--foo:42'
+>>> add_flag_to_cmd("--foo:", " hi ")
+'--foo:hi'
+
+
+
+ +
+
+CIME.utils.add_mail_type_args(parser)[source]
+
+ +
+
+CIME.utils.analyze_build_log(comp, log, compiler)[source]
+

Capture and report warning count; capture and report errors and undefined references.

+
+ +
+
+CIME.utils.append_case_status(phase, status, msg=None, caseroot='.')[source]
+

Update CaseStatus file

+
+ +
+
+CIME.utils.append_status(msg, sfile, caseroot='.')[source]
+

Append msg to sfile in caseroot

+
+ +
+
+CIME.utils.append_testlog(msg, caseroot='.')[source]
+

Add to TestStatus.log file

+
+ +
+
+CIME.utils.batch_jobid(case=None)[source]
+
+ +
+
+CIME.utils.check_name(fullname, additional_chars=None, fullpath=False)[source]
+

Check for disallowed characters in name. This routine only checks the final name and does not check if the path exists or is writable.

+
>>> check_name("test.id", additional_chars=".")
+False
+>>> check_name("case.name", fullpath=False)
+True
+>>> check_name("/some/file/path/case.name", fullpath=True)
+True
+>>> check_name("mycase+mods")
+False
+>>> check_name("mycase?mods")
+False
+>>> check_name("mycase*mods")
+False
+>>> check_name("/some/full/path/name/")
+False
+
+
+
+ +
+
+CIME.utils.clear_folder(_dir)[source]
+
+ +
+
+CIME.utils.compute_total_time(job_cost_map, proc_pool)[source]
+

Given a map: jobname -> (procs, est-time), return a total time estimate for a given processor pool size.

+
>>> job_cost_map = {"A" : (4, 3000), "B" : (2, 1000), "C" : (8, 2000), "D" : (1, 800)}
+>>> compute_total_time(job_cost_map, 8)
+5160
+>>> compute_total_time(job_cost_map, 12)
+3180
+>>> compute_total_time(job_cost_map, 16)
+3060
+
+
+
+ +
+
+CIME.utils.configure_logging(verbose, debug, silent)[source]
+
+ +
+
+CIME.utils.convert_to_babylonian_time(seconds)[source]
+

Convert a time value in seconds to HH:MM:SS.

+
>>> convert_to_babylonian_time(3661)
+'01:01:01'
+>>> convert_to_babylonian_time(360000)
+'100:00:00'
+
+
+
+ +
+
+CIME.utils.convert_to_seconds(time_str)[source]
+

Convert time value in [[HH:]MM:]SS to seconds

+

We assume that XX:YY is likely to be HH:MM, not MM:SS

+
>>> convert_to_seconds("42")
+42
+>>> convert_to_seconds("01:01:01")
+3661
+>>> convert_to_seconds("01:01")
+3660
+
+
+
+ +
+
+CIME.utils.convert_to_string(value, type_str=None, vid='')[source]
+

Convert value back to string.
vid is only for generating better error messages.
>>> convert_to_string(6, type_str="integer") == '6'
True
>>> convert_to_string('6', type_str="integer") == '6'
True
>>> convert_to_string('6.0', type_str="real") == '6.0'
True
>>> convert_to_string(6.01, type_str="real") == '6.01'
True

+
+ +
+
+CIME.utils.convert_to_type(value, type_str, vid='')[source]
+

Convert value from string to another type. vid is only for generating better error messages.

+
+ +
+
+CIME.utils.convert_to_unknown_type(value)[source]
+

Convert value to its real type by probing conversions.

+
+ +
+
+CIME.utils.copy_globs(globs_to_copy, output_directory, lid=None)[source]
+

Takes a list of globs and copies all files to output_directory.

+

Hidden files become unhidden, i.e., the leading dot is removed.

+

Output filename is derived from the basename of the input path and can be appended with the lid.

+
+ +
+
+CIME.utils.copy_local_macros_to_dir(destination, extra_machdir=None)[source]
+

Copy any local macros files to the path given by ‘destination’.

+

Local macros files are potentially found in:
(1) extra_machdir/cmake_macros/.cmake
(2) $HOME/.cime/.cmake

+
+ +
+
+CIME.utils.copyifnewer(src, dest)[source]
+

If dest does not exist or is older than src, copy src to dest.

+
+ +
+
+CIME.utils.deprecate_action(message)[source]
+
+ +
+
+CIME.utils.does_file_have_string(filepath, text)[source]
+

Does the text string appear in the filepath file

+
+ +
+
+CIME.utils.expect(condition, error_msg, exc_type=<class 'CIME.utils.CIMEError'>, error_prefix='ERROR:')[source]
+

Similar to assert, except it doesn't generate an ugly stacktrace. Useful for checking user error, not programming error.

+
>>> expect(True, "error1")
+>>> expect(False, "error2") 
+Traceback (most recent call last):
+    ...
+CIMEError: ERROR: error2
+
+
+
+ +
+
+CIME.utils.file_contains_python_function(filepath, funcname)[source]
+

Checks whether the given file contains a top-level definition of the function ‘funcname’

+

Returns a boolean value (True if the file contains this function definition, False otherwise)

+
+ +
+
+CIME.utils.filter_unicode(unistr)[source]
+

Sometimes unicode chars can cause problems

+
+ +
+
+CIME.utils.find_files(rootdir, pattern)[source]
+

recursively find all files matching a pattern

+
+ +
+
+CIME.utils.find_proc_id(proc_name=None, children_only=False, of_parent=None)[source]
+

Children implies recursive.

+
+ +
+
+CIME.utils.find_system_test(testname, case)[source]
+

Find and import the test matching testname. Look through the paths set in the config_files.xml variable SYSTEM_TESTS_DIR for components used in this case to find a test matching testname. Add the path to that directory to sys.path if it's not there, and return the test object. Fail if the test is not found in any of the paths.

+
+ +
+
+CIME.utils.fixup_sys_path(*additional_paths)[source]
+
+ +
+
+CIME.utils.format_time(time_format, input_format, input_time)[source]
+

Converts the string input_time from input_format to time_format.
Valid format specifiers are "%H", "%M", and "%S".
% signs must be followed by an H, M, or S and then a separator.
Separators can be any string without digits or a % sign.
Each specifier can occur more than once in the input_format, but only the first occurrence will be used.
An example of a valid format: "%H:%M:%S"
Unlike strptime, this does support %H >= 24

+
>>> format_time("%H:%M:%S", "%H", "43")
+'43:00:00'
+>>> format_time("%H  %M", "%M,%S", "59,59")
+'0  59'
+>>> format_time("%H, %S", "%H:%M:%S", "2:43:9")
+'2, 09'
+
+
+
+ +
+
+CIME.utils.get_all_cime_models()[source]
+
+ +
+
+CIME.utils.get_batch_script_for_job(job)[source]
+
+ +
+
+CIME.utils.get_charge_account(machobj=None, project=None)[source]
+

Hierarchy for choosing CHARGE_ACCOUNT:
1. Environment variable CHARGE_ACCOUNT
2. File $HOME/.cime/config
3. config_machines.xml (if machobj provided)
4. Default to the same value as PROJECT

+
>>> import CIME
+>>> import CIME.XML.machines
+>>> machobj = CIME.XML.machines.Machines(machine="theta")
+>>> project = get_project(machobj)
+>>> charge_account = get_charge_account(machobj, project)
+>>> project == charge_account
+True
+>>> os.environ["CHARGE_ACCOUNT"] = "ChargeAccount"
+>>> get_charge_account(machobj, project)
+'ChargeAccount'
+>>> del os.environ["CHARGE_ACCOUNT"]
+
+
+
+ +
+
+CIME.utils.get_cime_config()[source]
+
+ +
+
+CIME.utils.get_cime_default_driver()[source]
+
+ +
+
+CIME.utils.get_cime_root(case=None)[source]
+

Return the absolute path to the root of CIME that contains this script

+
+ +
+
+CIME.utils.get_config_path()[source]
+
+ +
+
+CIME.utils.get_current_branch(repo=None)[source]
+

Return the name of the current branch for a repository

+
>>> if "GIT_BRANCH" in os.environ:
+...     get_current_branch() is not None
+... else:
+...     os.environ["GIT_BRANCH"] = "foo"
+...     get_current_branch() == "foo"
+True
+
+
+
+ +
+
+CIME.utils.get_current_commit(short=False, repo=None, tag=False)[source]
+

Return the sha1 of the current HEAD commit

+
>>> get_current_commit() is not None
+True
+
+
+
+ +
+
+CIME.utils.get_current_submodule_status(recursive=False, repo=None)[source]
+

Return the SHA-1s of the currently checked out commit for each submodule, along with the submodule path and the output of git describe for the SHA-1.

+
>>> get_current_submodule_status() is not None
+True
+
+
+
+ +
+
+CIME.utils.get_full_test_name(partial_test, caseopts=None, grid=None, compset=None, machine=None, compiler=None, testmods_list=None, testmods_string=None)[source]
+

Given a partial CIME test name, return it in the form TESTCASE.GRID.COMPSET.MACHINE_COMPILER[.TESTMODS]. Use the additional args to fill out the name if needed.

+

Testmods can be provided through one of two arguments, but not both:
- testmods_list: a list of one or more testmods (as would be returned by parse_test_name, for example)
- testmods_string: a single string containing one or more testmods; if there is more than one, then they should be separated by a string of two hyphens ('--')

For both testmods_list and testmods_string, any slashes as path separators ('/') are replaced by hyphens ('-').

+
>>> get_full_test_name("ERS", grid="ne16_fe16", compset="JGF", machine="melvin", compiler="gnu")
+'ERS.ne16_fe16.JGF.melvin_gnu'
+>>> get_full_test_name("ERS", caseopts=["D", "P16"], grid="ne16_fe16", compset="JGF", machine="melvin", compiler="gnu")
+'ERS_D_P16.ne16_fe16.JGF.melvin_gnu'
+>>> get_full_test_name("ERS.ne16_fe16", compset="JGF", machine="melvin", compiler="gnu")
+'ERS.ne16_fe16.JGF.melvin_gnu'
+>>> get_full_test_name("ERS.ne16_fe16.JGF", machine="melvin", compiler="gnu")
+'ERS.ne16_fe16.JGF.melvin_gnu'
+>>> get_full_test_name("ERS.ne16_fe16.JGF.melvin_gnu.mods", machine="melvin", compiler="gnu")
+'ERS.ne16_fe16.JGF.melvin_gnu.mods'
+
+
+

testmods_list can be a single element:
>>> get_full_test_name("ERS.ne16_fe16.JGF", machine="melvin", compiler="gnu", testmods_list=["mods/test"])
'ERS.ne16_fe16.JGF.melvin_gnu.mods-test'

+

testmods_list can also have multiple elements, separated either by slashes or hyphens:
>>> get_full_test_name("ERS.ne16_fe16.JGF", machine="melvin", compiler="gnu", testmods_list=["mods/test", "mods2/test2/subdir2", "mods3/test3/subdir3"])
'ERS.ne16_fe16.JGF.melvin_gnu.mods-test--mods2-test2-subdir2--mods3-test3-subdir3'
>>> get_full_test_name("ERS.ne16_fe16.JGF", machine="melvin", compiler="gnu", testmods_list=["mods-test", "mods2-test2-subdir2", "mods3-test3-subdir3"])
'ERS.ne16_fe16.JGF.melvin_gnu.mods-test--mods2-test2-subdir2--mods3-test3-subdir3'

+

The above testmods_list tests should also work with equivalent testmods_string arguments:
>>> get_full_test_name("ERS.ne16_fe16.JGF", machine="melvin", compiler="gnu", testmods_string="mods/test")
'ERS.ne16_fe16.JGF.melvin_gnu.mods-test'
>>> get_full_test_name("ERS.ne16_fe16.JGF", machine="melvin", compiler="gnu", testmods_string="mods/test--mods2/test2/subdir2--mods3/test3/subdir3")
'ERS.ne16_fe16.JGF.melvin_gnu.mods-test--mods2-test2-subdir2--mods3-test3-subdir3'
>>> get_full_test_name("ERS.ne16_fe16.JGF", machine="melvin", compiler="gnu", testmods_string="mods-test--mods2-test2-subdir2--mods3-test3-subdir3")
'ERS.ne16_fe16.JGF.melvin_gnu.mods-test--mods2-test2-subdir2--mods3-test3-subdir3'

+

The following tests the consistency check between the test name and various optional arguments:
>>> get_full_test_name("ERS.ne16_fe16.JGF.melvin_gnu.mods-test--mods2-test2-subdir2--mods3-test3-subdir3", machine="melvin", compiler="gnu", testmods_list=["mods/test", "mods2/test2/subdir2", "mods3/test3/subdir3"])
'ERS.ne16_fe16.JGF.melvin_gnu.mods-test--mods2-test2-subdir2--mods3-test3-subdir3'

+
+ +
+
+CIME.utils.get_htmlroot(machobj=None)[source]
+

Get location for test HTML output

+

Hierarchy for choosing CIME_HTML_ROOT:
0. Environment variable CIME_HTML_ROOT
1. File $HOME/.cime/config
2. config_machines.xml (if machobj provided)

+
+ +
+
+CIME.utils.get_lids(case)[source]
+
+ +
+
+CIME.utils.get_logging_options()[source]
+

Use to pass the same logging options as were used for the current executable to subprocesses.

+
+ +
+
+CIME.utils.get_model()[source]
+

Get the currently configured model value. The CIME_MODEL env variable may or may not be set.

+
>>> os.environ["CIME_MODEL"] = "garbage"
+>>> get_model() 
+Traceback (most recent call last):
+    ...
+CIMEError: ERROR: model garbage not recognized
+>>> del os.environ["CIME_MODEL"]
+>>> set_model('rocky') 
+Traceback (most recent call last):
+    ...
+CIMEError: ERROR: model rocky not recognized
+>>> set_model('e3sm')
+>>> get_model()
+'e3sm'
+>>> reset_cime_config()
+
+
+
+ +
+
+CIME.utils.get_model_config_location_within_cime(model=None)[source]
+
+ +
+
+CIME.utils.get_model_config_root(model=None)[source]
+

Get absolute path to model config area.

+
>>> os.environ["CIME_MODEL"] = "e3sm" # Set up the test so it doesn't depend on external resources
+>>> os.path.isdir(get_model_config_root())
+True
+
+
+
+ +
+
+CIME.utils.get_project(machobj=None)[source]
+

Hierarchy for choosing PROJECT: +0. Command line flag to create_newcase or create_test +1. Environment variable PROJECT +2. Environment variable ACCOUNT (this is for backward compatibility) +3. File $HOME/.cime/config (this is new) +4. File $HOME/.cesm_proj (this is for backward compatibility) +5. config_machines.xml (if machobj provided)

+
+ +
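The lookup order above can be sketched roughly as follows; resolve_project is a hypothetical simplification that only covers the command-line and environment-variable steps:

```python
import os

# Illustrative sketch of the documented PROJECT lookup order; the real
# get_project also consults $HOME/.cime/config, $HOME/.cesm_proj, and
# config_machines.xml via the machine object.
def resolve_project(cli_project=None, machobj=None):
    if cli_project:                      # 0. command-line flag wins
        return cli_project
    for var in ("PROJECT", "ACCOUNT"):   # 1./2. environment variables
        if var in os.environ:
            return os.environ[var]
    # 3.-5. config files and config_machines.xml would be checked here
    return machobj.get_value("PROJECT") if machobj else None

os.environ["PROJECT"] = "climate123"
print(resolve_project())  # climate123
```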
+
+CIME.utils.get_python_libs_location_within_cime()[source]
+

From within CIME, return subdirectory of python libraries

+
+ +
+
+CIME.utils.get_schema_path()[source]
+
+ +
+
+CIME.utils.get_scripts_root()[source]
+

Get absolute path to scripts

+
>>> os.path.isdir(get_scripts_root())
+True
+
+
+
+ +
+
+CIME.utils.get_src_root()[source]
+

Return the absolute path to the root of SRCROOT.

+
+ +
+
+CIME.utils.get_template_path()[source]
+
+ +
+
+CIME.utils.get_time_in_seconds(timeval, unit)[source]
+

Convert a time from ‘unit’ to seconds

+
+ +
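As a rough sketch of such a unit-to-seconds conversion (the unit spellings accepted by the real routine may differ):

```python
# Hedged sketch of get_time_in_seconds-style conversion; assumes a fixed
# set of unit names rather than whatever the real CIME routine accepts.
def to_seconds(timeval, unit):
    factors = {"seconds": 1, "minutes": 60, "hours": 3600, "days": 86400}
    return timeval * factors[unit]

print(to_seconds(2, "hours"))  # 7200
```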
+
+CIME.utils.get_timestamp(timestamp_format='%Y%m%d_%H%M%S', utc_time=False)[source]
+

Get a string representing the current time in format YYYYMMDD_HHMMSS (UTC if utc_time is True)

+

The format can be changed if needed.

+
+ +
+
+CIME.utils.get_tools_path()[source]
+
+ +
+
+CIME.utils.get_umask()[source]
+
+ +
+
+CIME.utils.get_urlroot(machobj=None)[source]
+

Get URL to htmlroot

+

Hierarchy for choosing CIME_URL_ROOT: +0. Environment variable CIME_URL_ROOT +1. File $HOME/.cime/config +2. config_machines.xml (if machobj provided)

+
+ +
+
+CIME.utils.gunzip_existing_file(filepath)[source]
+
+ +
+
+CIME.utils.gzip_existing_file(filepath)[source]
+

Gzips an existing file, removes the unzipped version, returns path to zip file. +Note that the timestamp of the original file will be maintained in +the zipped file.

+
>>> import tempfile
+>>> fd, filename = tempfile.mkstemp(text=True)
+>>> _ = os.write(fd, b"Hello World")
+>>> os.close(fd)
+>>> gzfile = gzip_existing_file(filename)
+>>> gunzip_existing_file(gzfile) == b'Hello World'
+True
+>>> os.remove(gzfile)
+
+
+
+ +
+
+CIME.utils.id_generator(size=6, chars='abcdefghijklmnopqrstuvwxyz0123456789')[source]
+
+ +
+
+CIME.utils.import_and_run_sub_or_cmd(cmd, cmdargs, subname, subargs, config_dir, compname, logfile=None, case=None, from_dir=None, timeout=None)[source]
+
+ +
+
+CIME.utils.import_from_file(name, file_path)[source]
+
+ +
+
+CIME.utils.indent_string(the_string, indent_level)[source]
+

Indents the given string by a given number of spaces

+
+
Args:

the_string: str +indent_level: int

+
+
+

Returns a new string that is the same as the_string, except that +each line is indented by ‘indent_level’ spaces.

+

In python3, this can be done with textwrap.indent.

+
+ +
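As the docstring notes, the standard library covers this. A minimal equivalent, assuming indent_level counts spaces:

```python
import textwrap

# Equivalent behavior via the standard library, as the docstring suggests.
# Note: by default textwrap.indent skips lines that are purely whitespace.
def indent_string(the_string, indent_level):
    return textwrap.indent(the_string, " " * indent_level)

print(indent_string("first\nsecond", 2))
```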
+
+CIME.utils.is_last_process_complete(filepath, expect_text, fail_text)[source]
+

Search the filepath in reverse order looking for expect_text +before finding fail_text. This utility is used by archive_metadata.

+
+ +
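A minimal sketch of the reverse-scan idea, operating on a list of lines rather than a file path:

```python
# Hypothetical simplification of is_last_process_complete: walk the lines
# from the end and report whether expect_text is seen before fail_text.
def last_process_complete(lines, expect_text, fail_text):
    for line in reversed(lines):
        if expect_text in line:
            return True
        if fail_text in line:
            return False
    return False

log = ["error earlier in the run", "resubmitted", "CASE.SUBMIT HAS FINISHED"]
print(last_process_complete(log, "HAS FINISHED", "error"))  # True
```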
+
+CIME.utils.is_python_executable(filepath)[source]
+
+ +
+
+CIME.utils.ls_sorted_by_mtime(path)[source]
+

Return the contents of path sorted by modification time, oldest first

+
+ +
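A minimal sketch of the described behavior, with deterministic mtimes set via os.utime:

```python
import os
import tempfile

# Minimal sketch: directory entries sorted by modification time, oldest
# first, as the docstring describes.
def ls_sorted_by_mtime(path):
    entries = (os.path.join(path, name) for name in os.listdir(path))
    return sorted(entries, key=os.path.getmtime)

with tempfile.TemporaryDirectory() as d:
    for name, mtime in (("newer.log", 200), ("older.log", 100)):
        filepath = os.path.join(d, name)
        open(filepath, "w").close()
        os.utime(filepath, (mtime, mtime))  # set a deterministic mtime
    print([os.path.basename(p) for p in ls_sorted_by_mtime(d)])
# ['older.log', 'newer.log']
```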
+
+CIME.utils.match_any(item, re_counts)[source]
+

Return true if item matches any regex in re_counts’ keys. Increments +count if a match was found.

+
+ +
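A hedged sketch of this contract, where re_counts maps compiled regexes to match counts:

```python
import re

# Sketch of match_any: the first regex in re_counts that matches item has
# its count incremented, and True is returned; otherwise False.
def match_any(item, re_counts):
    for regex in re_counts:
        if regex.match(item):
            re_counts[regex] += 1
            return True
    return False

ers = re.compile(r"ERS.*")
counts = {ers: 0, re.compile(r"SMS.*"): 0}
match_any("ERS.ne16_fe16.JGF", counts)
print(counts[ers])  # 1
```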
+
+CIME.utils.model_log(model, arg_logger, msg, debug_others=True)[source]
+
+ +
+
+CIME.utils.new_lid(case=None)[source]
+
+ +
+
+CIME.utils.normalize_case_id(case_id)[source]
+

Given a case_id, return it in form TESTCASE.GRID.COMPSET.PLATFORM

+
>>> normalize_case_id('ERT.ne16_g37.B1850C5.sandiatoss3_intel')
+'ERT.ne16_g37.B1850C5.sandiatoss3_intel'
+>>> normalize_case_id('ERT.ne16_g37.B1850C5.sandiatoss3_intel.test-mod')
+'ERT.ne16_g37.B1850C5.sandiatoss3_intel.test-mod'
+>>> normalize_case_id('ERT.ne16_g37.B1850C5.sandiatoss3_intel.G.20151121')
+'ERT.ne16_g37.B1850C5.sandiatoss3_intel'
+>>> normalize_case_id('ERT.ne16_g37.B1850C5.sandiatoss3_intel.test-mod.G.20151121')
+'ERT.ne16_g37.B1850C5.sandiatoss3_intel.test-mod'
+
+
+
+ +
+
+CIME.utils.parse_args_and_handle_standard_logging_options(args, parser=None)[source]
+

Guide to logging in CIME.

+

logger.debug -> Verbose/detailed output, use for debugging, off by default. Goes to a .log file +logger.info -> Goes to stdout (and log if --debug). Use for normal program output +logger.warning -> Goes to stderr (and log if --debug). Use for minor problems +logger.error -> Goes to stderr (and log if --debug)

+
+ +
+
+CIME.utils.parse_test_name(test_name)[source]
+

Given a CIME test name TESTCASE[_CASEOPTS].GRID.COMPSET[.MACHINE_COMPILER[.TESTMODS]], +return each component of the testname with machine and compiler split. +Do not error if a partial testname is provided (TESTCASE or TESTCASE.GRID) instead +parse and return the partial results.

+

TESTMODS use hyphens in a special way: +- A single hyphen stands for a path separator (for example, ‘test-mods’ resolves to

+
+

the path ‘test/mods’)

+
+
    +
  • A double hyphen separates multiple test mods (for example, ‘test-mods--other-dir-path’ +indicates two test mods: ‘test/mods’ and ‘other/dir/path’)

  • +
+

If there are one or more TESTMODS, then the testmods component of the result will be a +list, where each element of the list is one testmod, and hyphens have been replaced by +slashes. (If there are no TESTMODS in this test, then the TESTMODS component of the +result is None, as for other optional components.)

+
>>> parse_test_name('ERS')
+['ERS', None, None, None, None, None, None]
+>>> parse_test_name('ERS.fe12_123')
+['ERS', None, 'fe12_123', None, None, None, None]
+>>> parse_test_name('ERS.fe12_123.JGF')
+['ERS', None, 'fe12_123', 'JGF', None, None, None]
+>>> parse_test_name('ERS_D.fe12_123.JGF')
+['ERS', ['D'], 'fe12_123', 'JGF', None, None, None]
+>>> parse_test_name('ERS_D_P1.fe12_123.JGF')
+['ERS', ['D', 'P1'], 'fe12_123', 'JGF', None, None, None]
+>>> parse_test_name('ERS_D_G2.fe12_123.JGF')
+['ERS', ['D', 'G2'], 'fe12_123', 'JGF', None, None, None]
+>>> parse_test_name('SMS_D_Ln9_Mmpi-serial.f19_g16_rx1.A')
+['SMS', ['D', 'Ln9', 'Mmpi-serial'], 'f19_g16_rx1', 'A', None, None, None]
+>>> parse_test_name('ERS.fe12_123.JGF.machine_compiler')
+['ERS', None, 'fe12_123', 'JGF', 'machine', 'compiler', None]
+>>> parse_test_name('ERS.fe12_123.JGF.machine_compiler.test-mods')
+['ERS', None, 'fe12_123', 'JGF', 'machine', 'compiler', ['test/mods']]
+>>> parse_test_name('ERS.fe12_123.JGF.*_compiler.test-mods')
+['ERS', None, 'fe12_123', 'JGF', None, 'compiler', ['test/mods']]
+>>> parse_test_name('ERS.fe12_123.JGF.machine_*.test-mods')
+['ERS', None, 'fe12_123', 'JGF', 'machine', None, ['test/mods']]
+>>> parse_test_name('ERS.fe12_123.JGF.*_*.test-mods')
+['ERS', None, 'fe12_123', 'JGF', None, None, ['test/mods']]
+>>> parse_test_name('ERS.fe12_123.JGF.machine_compiler.test-mods--other-dir-path--and-one-more')
+['ERS', None, 'fe12_123', 'JGF', 'machine', 'compiler', ['test/mods', 'other/dir/path', 'and/one/more']]
+>>> parse_test_name('SMS.f19_g16.2000_DATM%QI.A_XLND_SICE_SOCN_XROF_XGLC_SWAV.mach-ine_compiler.test-mods') 
+Traceback (most recent call last):
+    ...
+CIMEError: ERROR: Expected 4th item of 'SMS.f19_g16.2000_DATM%QI.A_XLND_SICE_SOCN_XROF_XGLC_SWAV.mach-ine_compiler.test-mods' ('A_XLND_SICE_SOCN_XROF_XGLC_SWAV') to be in form machine_compiler
+>>> parse_test_name('SMS.f19_g16.2000_DATM%QI/A_XLND_SICE_SOCN_XROF_XGLC_SWAV.mach-ine_compiler.test-mods') 
+Traceback (most recent call last):
+    ...
+CIMEError: ERROR: Invalid compset name 2000_DATM%QI/A_XLND_SICE_SOCN_XROF_XGLC_SWAV
+
+
+
+ +
+
+CIME.utils.redirect_logger(new_target, logger_name)[source]
+
+ +
+
+CIME.utils.redirect_stderr(new_target)[source]
+
+ +
+
+CIME.utils.redirect_stdout(new_target)[source]
+
+ +
+
+CIME.utils.redirect_stdout_stderr(new_target)[source]
+
+ +
+
+CIME.utils.reset_cime_config()[source]
+

Useful to keep unit tests from interfering with each other

+
+ +
+
+CIME.utils.resolve_mail_type_args(args)[source]
+
+ +
+
+CIME.utils.run_and_log_case_status(func, phase, caseroot='.', custom_starting_msg_functor=None, custom_success_msg_functor=None, is_batch=False)[source]
+
+ +
+
+CIME.utils.run_bld_cmd_ensure_logging(cmd, arg_logger, from_dir=None, timeout=None)[source]
+
+ +
+
+CIME.utils.run_cmd(cmd, input_str=None, from_dir=None, verbose=None, arg_stdout=<object object>, arg_stderr=<object object>, env=None, combine_output=False, timeout=None, executable=None, shell=True)[source]
+

Wrapper around subprocess to make it much more convenient to run shell commands

+
>>> run_cmd('ls file_i_hope_doesnt_exist')[0] != 0
+True
+
+
+
+ +
+
+CIME.utils.run_cmd_no_fail(cmd, input_str=None, from_dir=None, verbose=None, arg_stdout=<object object>, arg_stderr=<object object>, env=None, combine_output=False, timeout=None, executable=None)[source]
+

Wrapper around subprocess to make it much more convenient to run shell commands. +Expects command to work. Just returns output string.

+
>>> run_cmd_no_fail('echo foo') == 'foo'
+True
+>>> run_cmd_no_fail('echo THE ERROR >&2; false') 
+Traceback (most recent call last):
+    ...
+CIMEError: ERROR: Command: 'echo THE ERROR >&2; false' failed with error ...
+
+
+
>>> run_cmd_no_fail('grep foo', input_str=b'foo') == 'foo'
+True
+>>> run_cmd_no_fail('echo THE ERROR >&2', combine_output=True) == 'THE ERROR'
+True
+
+
+
+ +
+
+CIME.utils.run_sub_or_cmd(cmd, cmdargs, subname, subargs, logfile=None, case=None, from_dir=None, timeout=None)[source]
+

This code will try to import and run each cmd as a subroutine; +if that fails, it will run it as a program in a separate shell.

+

Raises exception on failure.

+
+ +
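The import-then-fallback pattern can be sketched as below; the module/attribute resolution here is a hypothetical simplification of the real argument handling:

```python
import importlib
import subprocess

# Hypothetical sketch of run_sub_or_cmd's strategy: prefer calling a Python
# entry point in-process, and only shell out when importing or resolving the
# subroutine fails.
def run_sub_or_cmd(cmd, cmdargs, subname, subargs):
    try:
        module = importlib.import_module(subname)
        return getattr(module, subname)(*subargs)
    except (ImportError, AttributeError):
        # fall back to running the command as a separate process
        return subprocess.run([cmd] + cmdargs, check=True)
```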
+
+CIME.utils.safe_copy(src_path, tgt_path, preserve_meta=True)[source]
+

A flexible and safe copy routine. Will try to copy file and metadata, but this +can fail if the current user doesn’t own the tgt file. A fallback data-only copy is +attempted in this case. Works even if overwriting a read-only file.

+

tgt_path can be a directory, src_path must be a file

+

Most of the complexity here is handling the case where the tgt_path file already +exists. This problem does not exist for the tree operations, so we don’t need to wrap those.

+

preserve_meta toggles whether file meta-data, like permissions, should be preserved. If you are +copying baseline files, you should be within a SharedArea context manager and preserve_meta +should be false so that the umask set up by SharedArea can take effect regardless of the +permissions of the src files.

+
+ +
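A simplified sketch of that strategy (the real routine handles more corner cases):

```python
import os
import shutil

# Illustrative sketch of the safe_copy strategy: prefer a metadata-preserving
# copy, fall back to a data-only copy on permission problems, and remove a
# read-only target first so overwriting works.
def safe_copy(src_path, tgt_path, preserve_meta=True):
    if os.path.isdir(tgt_path):
        tgt_path = os.path.join(tgt_path, os.path.basename(src_path))
    if os.path.exists(tgt_path) and not os.access(tgt_path, os.W_OK):
        os.remove(tgt_path)  # clear a read-only target before overwriting
    try:
        (shutil.copy2 if preserve_meta else shutil.copyfile)(src_path, tgt_path)
    except PermissionError:
        shutil.copyfile(src_path, tgt_path)  # data-only fallback
```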
+
+CIME.utils.safe_recursive_copy(src_dir, tgt_dir, file_map)[source]
+

Copies a set of files from one dir to another. Works even if overwriting a +read-only file. Files can be relative paths and the relative path will be +matched on the tgt side.

+
+ +
+
+CIME.utils.set_logger_indent(indent)[source]
+
+ +
+
+CIME.utils.set_model(model)[source]
+

Set the model to be used in this session

+
+ +
+
+CIME.utils.setup_standard_logging_options(parser)[source]
+
+ +
+
+CIME.utils.start_buffering_output()[source]
+

All stdout, stderr will be buffered after this is called. This is python’s +default behavior.

+
+ +
+
+CIME.utils.stop_buffering_output()[source]
+

All stdout, stderr will not be buffered after this is called.

+
+ +
+
+CIME.utils.string_in_list(_string, _list)[source]
+

Case-insensitive search for a string in a list; +returns the matching list value. +>>> string_in_list("Brack", ["bar", "bracK", "foo"]) +'bracK' +>>> string_in_list("foo", ["FFO", "FOO", "foo2", "foo3"]) +'FOO' +>>> string_in_list("foo", ["FFO", "foo2", "foo3"])

+
+ +
+
+CIME.utils.stringify_bool(val)[source]
+
+ +
+ +

Makes a symlink from link_name to target. Unlike the standard +os.symlink, this will work even if link_name already exists (in +which case link_name will be overwritten).

+
+ +
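A minimal sketch of that force-symlink behavior:

```python
import os

# Sketch of a force-symlink: unlike plain os.symlink, this succeeds even
# when link_name already exists, by replacing it.
def symlink_force(target, link_name):
    try:
        os.symlink(target, link_name)
    except FileExistsError:
        os.remove(link_name)
        os.symlink(target, link_name)
```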
+
+CIME.utils.touch(fname)[source]
+
+ +
+
+CIME.utils.transform_vars(text, case=None, subgroup=None, overrides=None, default=None)[source]
+

Do the variable substitution for any variables that need transforms +recursively.

+
>>> transform_vars("{{ cesm_stdout }}", default="cesm.stdout")
+'cesm.stdout'
+>>> member_store = lambda : None
+>>> member_store.foo = "hi"
+>>> transform_vars("I say {{ foo }}", overrides={"foo":"hi"})
+'I say hi'
+
+
+
+ +
+
+CIME.utils.verbatim_success_msg(return_val)[source]
+
+ +
+
+CIME.utils.wait_for_unlocked(filepath)[source]
+
+ +
+
+

CIME.wait_for_tests module

+
+
+CIME.wait_for_tests.create_cdash_build_xml(results, cdash_build_name, cdash_build_group, utc_time, current_time, hostname, data_rel_path, git_commit)[source]
+
+ +
+
+CIME.wait_for_tests.create_cdash_config_xml(results, cdash_build_name, cdash_build_group, utc_time, current_time, hostname, data_rel_path, git_commit)[source]
+
+ +
+
+CIME.wait_for_tests.create_cdash_test_xml(results, cdash_build_name, cdash_build_group, utc_time, current_time, hostname, data_rel_path, git_commit)[source]
+
+ +
+
+CIME.wait_for_tests.create_cdash_upload_xml(results, cdash_build_name, cdash_build_group, utc_time, hostname, force_log_upload)[source]
+
+ +
+
+CIME.wait_for_tests.create_cdash_xml(results, cdash_build_name, cdash_project, cdash_build_group, force_log_upload=False)[source]
+
+ +
+
+CIME.wait_for_tests.create_cdash_xml_boiler(phase, cdash_build_name, cdash_build_group, utc_time, current_time, hostname, git_commit)[source]
+
+ +
+
+CIME.wait_for_tests.create_cdash_xml_fakes(results, cdash_build_name, cdash_build_group, utc_time, current_time, hostname)[source]
+
+ +
+
+CIME.wait_for_tests.get_nml_diff(test_path)[source]
+
+ +
+
+CIME.wait_for_tests.get_test_output(test_path)[source]
+
+ +
+
+CIME.wait_for_tests.get_test_phase(test_path, phase)[source]
+
+ +
+
+CIME.wait_for_tests.get_test_time(test_path)[source]
+
+ +
+
+CIME.wait_for_tests.set_up_signal_handlers()[source]
+
+ +
+
+CIME.wait_for_tests.signal_handler(*_)[source]
+
+ +
+
+CIME.wait_for_tests.wait_for_test(test_path, results, wait, check_throughput, check_memory, ignore_namelists, ignore_memleak, no_run)[source]
+
+ +
+
+CIME.wait_for_tests.wait_for_tests(test_paths, no_wait=False, check_throughput=False, check_memory=False, ignore_namelists=False, ignore_memleak=False, cdash_build_name=None, cdash_project='E3SM', cdash_build_group='ACME_Latest', timeout=None, force_log_upload=False, no_run=False, update_success=False, expect_test_complete=True)[source]
+
+ +
+
+CIME.wait_for_tests.wait_for_tests_impl(test_paths, no_wait=False, check_throughput=False, check_memory=False, ignore_namelists=False, ignore_memleak=False, no_run=False)[source]
+
+ +
+
+

Module contents

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.scripts.html b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.scripts.html new file mode 100644 index 00000000000..8f21c08c3f2 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.scripts.html @@ -0,0 +1,435 @@ + + + + + + + CIME.scripts package — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

CIME.scripts package

+
+

Submodules

+
+
+

CIME.scripts.create_clone module

+
+
+CIME.scripts.create_clone.parse_command_line(args)[source]
+
+ +
+
+

CIME.scripts.create_newcase module

+

Script to create a new CIME Case Control System (CCS) experimental case.

+
+
+CIME.scripts.create_newcase.parse_command_line(args, cimeroot, description)[source]
+
+ +
+
+

CIME.scripts.create_test module

+

Script to create, build and run CIME tests. This script can:

+
    +
  1. Run a single test, or more than one test +./create_test TESTNAME +./create_test TESTNAME1 TESTNAME2 …

  2. +
  3. Run a test suite from a text file with one test per line +./create_test -f TESTFILE

  4. +
  5. Run an E3SM test suite:

  6. +
+
+

Below, a suite name, SUITE, is defined in $CIMEROOT/scripts/lib/get_tests.py +- Run a single suite

+
+

./create_test SUITE

+
+
    +
  • Run two suites

  • +
+
+

./create_test SUITE1 SUITE2

+
+
    +
  • Run all tests in a suite except for one

  • +
+
+

./create_test SUITE ^TESTNAME

+
+
    +
  • Run all tests in a suite except for tests that are in another suite

  • +
+
+

./create_test SUITE1 ^SUITE2

+
+
    +
  • Run all tests in a suite with baseline comparisons against master baselines

  • +
+
+

./create_test SUITE1 -c -b master

+
+
+
    +
  1. Run a CESM test suite(s): +./create_test --xml-category XML_CATEGORY [--xml-machine XML_MACHINE] [--xml-compiler XML_COMPILER] [--xml-testlist XML_TESTLIST]

  2. +
+

If this tool is missing any feature that you need, please file an issue at +https://github.com/ESMCI/cime

+
+
+CIME.scripts.create_test.create_test(test_names, test_data, compiler, machine_name, no_run, no_build, no_setup, no_batch, test_root, baseline_root, clean, baseline_cmp_name, baseline_gen_name, namelists_only, project, test_id, parallel_jobs, walltime, single_submit, proc_pool, use_existing, save_timing, queue, allow_baseline_overwrite, output_root, wait, force_procs, force_threads, mpilib, input_dir, pesfile, run_count, mail_user, mail_type, check_throughput, check_memory, ignore_namelists, ignore_memleak, allow_pnl, non_local, single_exe, workflow, chksum, force_rebuild)[source]
+
+ +
+
+CIME.scripts.create_test.get_default_setting(config, varname, default_if_not_found, check_main=False)[source]
+
+ +
+
+CIME.scripts.create_test.parse_command_line(args, description)[source]
+
+ +
+
+CIME.scripts.create_test.single_submit_impl(machine_name, test_id, proc_pool, _, args, job_cost_map, wall_time, test_root)[source]
+
+ +
+
+

CIME.scripts.query_config module

+

Displays information about available compsets, component settings, grids and/or +machines. Typically run with one of the arguments --compsets, --settings, +--grids or --machines; if you specify more than one of these arguments, +information will be listed for each.

+
+
+class CIME.scripts.query_config.ArgumentParser(prog=None, usage=None, description=None, epilog=None, parents=[], formatter_class=<class 'argparse.HelpFormatter'>, prefix_chars='-', fromfile_prefix_chars=None, argument_default=None, conflict_handler='error', add_help=True, allow_abbrev=True, exit_on_error=True)[source]
+

Bases: ArgumentParser

+

we override the error message from ArgumentParser to have a more helpful +message in the case of missing arguments

+
+
+error(message: string)[source]
+

Prints a usage message incorporating the message to stderr and +exits.

+

If you override this in a subclass, it should not return – it +should either exit or raise an exception.

+
+ +
+ +
+
+class CIME.scripts.query_config.Machines(infile=None, files=None, machine=None, extra_machines_dir=None)[source]
+

Bases: Machines

+

we override print_values from Machines to flag the current machine in the machine description

+
+
+print_values(machine_name='all')[source]
+
+ +
+ +
+
+CIME.scripts.query_config.get_components(files)[source]
+

Determine the valid component classes (e.g. atm) for the driver/cpl. +These are then stored in comps_array.

+
+ +
+
+CIME.scripts.query_config.get_compsets(files)[source]
+

Determine valid component values by checking the value attributes for COMPSETS_SPEC_FILE

+
+ +
+
+CIME.scripts.query_config.parse_command_line(args, description)[source]
+

parse command line arguments

+
+ +
+
+CIME.scripts.query_config.print_compset(name, files, all_components=False, xml=False)[source]
+

print compsets associated with the component name; if all_components is true, only +print the details if the associated component is available

+
+ +
+
+CIME.scripts.query_config.query_all_components(files, xml=False)[source]
+

query all components

+
+ +
+
+CIME.scripts.query_config.query_component(name, files, all_components=False, xml=False)[source]
+

query a component by name

+
+ +
+
+CIME.scripts.query_config.query_compsets(files, name, xml=False)[source]
+

query the compset definition given a compset name

+
+ +
+
+CIME.scripts.query_config.query_grids(files, long_output, xml=False)[source]
+

query all grids.

+
+ +
+
+CIME.scripts.query_config.query_machines(files, machine_name='all', xml=False)[source]
+

query machines. Default: all

+
+ +
+
+

CIME.scripts.query_testlists module

+

Script to query xml test lists, displaying all tests in human-readable form.

+
+
Usage:
+
./query_testlists [--show-options] [--define-testtypes]

Display a list of tests

+
+
./query_testlists --count

Count tests by category/machine/compiler

+
+
./query_testlists --list {category,categories,machine,machines,compiler,compilers}

List the available options for --xml-category, --xml-machine, or --xml-compiler

+
+
+

All of the above support the various --xml-* arguments for subsetting which tests are included.

+
+
+
+
+CIME.scripts.query_testlists.count_test_data(test_data)[source]
+
+
Args:
+
test_data (dict): dictionary of test data, containing at least these keys:
    +
  • name: full test name

  • +
  • category: test category

  • +
  • machine

  • +
  • compiler

  • +
+
+
+
+
+
+ +
+
+CIME.scripts.query_testlists.list_test_data(test_data, list_type)[source]
+

List categories, machines or compilers

+
+
Args:
+
test_data (dict): dictionary of test data, containing at least these keys:
    +
  • category

  • +
  • machine

  • +
  • compiler

  • +
+
+
+

list_type (str): one of ‘category’, ‘machine’ or ‘compiler’

+
+
+
+ +
+
+CIME.scripts.query_testlists.parse_command_line(args, description)[source]
+
+ +
+
+CIME.scripts.query_testlists.print_test_data(test_data, show_options, define_testtypes)[source]
+
+
Args:
+
test_data (dict): dictionary of test data, containing at least these keys:
    +
  • name: full test name

  • +
  • category: test category

  • +
+
+
+
+
+
+ +
+
+

Module contents

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.tests.html b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.tests.html new file mode 100644 index 00000000000..d53a827f106 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/CIME_api/CIME.tests.html @@ -0,0 +1,3566 @@ + + + + + + + CIME.tests package — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

CIME.tests package

+
+

Submodules

+
+
+

CIME.tests.base module

+
+
+class CIME.tests.base.BaseTestCase(methodName='runTest')[source]
+

Bases: TestCase

+
+
+FAST_ONLY = None
+
+ +
+
+GLOBAL_TIMEOUT = None
+
+ +
+
+MACHINE = None
+
+ +
+
+NO_BATCH = None
+
+ +
+
+NO_CMAKE = None
+
+ +
+
+NO_FORTRAN_RUN = None
+
+ +
+
+NO_TEARDOWN = None
+
+ +
+
+SCRIPT_DIR = '/home/runner/work/cime/cime/scripts'
+
+ +
+
+TEST_COMPILER = None
+
+ +
+
+TEST_MPILIB = None
+
+ +
+
+TEST_ROOT = None
+
+ +
+
+TOOLS_DIR = '/home/runner/work/cime/cime/CIME/Tools'
+
+ +
+
+assert_dashboard_has_build(build_name, expected_count=1)[source]
+
+ +
+
+assert_test_status(test_name, test_status_obj, test_phase, expected_stat)[source]
+
+ +
+
+get_casedir(case_fragment, all_cases)[source]
+
+ +
+
+kill_python_subprocesses(sig=Signals.SIGKILL, expected_num_killed=None)[source]
+
+ +
+
+kill_subprocesses(name=None, sig=Signals.SIGKILL, expected_num_killed=None)[source]
+
+ +
+
+run_cmd_assert_result(cmd, from_dir=None, expected_stat=0, env=None, verbose=False, shell=True)[source]
+
+ +
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+setup_proxy()[source]
+
+ +
+
+tearDown()[source]
+

Hook method for deconstructing the test fixture after testing it.

+
+ +
+
+verify_perms(root_dir)[source]
+
+ +
+ +
+
+CIME.tests.base.typed_os_environ(key, default_value, expected_type=None)[source]
+
+ +
+
+

CIME.tests.case_fake module

+

This module contains a fake implementation of the Case class that can be used +for testing the tests.

+
+
+class CIME.tests.case_fake.CaseFake(case_root, create_case_root=True)[source]
+

Bases: object

+
+
+case_setup(clean=False, test_mode=False, reset=False)[source]
+
+ +
+
+copy(newcasename, newcaseroot)[source]
+

Create and return a copy of self, but with CASE and CASEBASEID set to newcasename, +CASEROOT set to newcaseroot, and RUNDIR set appropriately.

+
+
Args:

newcasename (str): new value for CASE +newcaseroot (str): new value for CASEROOT

+
+
+
+ +
+
+create_clone(newcase, keepexe=False, mach_dir=None, project=None, cime_output_root=None, exeroot=None, rundir=None)[source]
+

Create a clone of the current case. Also creates the CASEROOT directory +for the clone case (given by newcase).

+
+
Args:
+
newcase (str): full path to the new case. This directory should not

already exist; it will be created

+
+
+

keepexe (bool, optional): Ignored +mach_dir (str, optional): Ignored +project (str, optional): Ignored +cime_output_root (str, optional): New CIME_OUTPUT_ROOT for the clone +exeroot (str, optional): New EXEROOT for the clone +rundir (str, optional): New RUNDIR for the clone

+
+
+

Returns the clone case object

+
+ +
+
+flush()[source]
+
+ +
+
+get_value(item)[source]
+

Get the value of the given item

+

Returns None if item isn’t set for this case

+
+
Args:

item (str): variable of interest

+
+
+
+ +
+
+load_env(reset=False)[source]
+
+ +
+
+make_rundir()[source]
+

Make directory given by RUNDIR

+
+ +
+
+set_exeroot()[source]
+

Assumes CASEROOT is already set; sets an appropriate EXEROOT +(nested inside CASEROOT)

+
+ +
+
+set_initial_test_values()[source]
+
+ +
+
+set_rundir()[source]
+

Assumes CASEROOT is already set; sets an appropriate RUNDIR (nested +inside CASEROOT)

+
+ +
+
+set_value(item, value)[source]
+

Set the value of the given item to the given value

+
+
Args:

item (str): variable of interest +value (any type): new value for item

+
+
+
+ +
+ +
+
+

CIME.tests.custom_assertions_test_status module

+

This module contains a class that extends unittest.TestCase, adding custom assertions that +can be used when testing TestStatus.

+
+
+class CIME.tests.custom_assertions_test_status.CustomAssertionsTestStatus(methodName='runTest')[source]
+

Bases: TestCase

+
+
+assert_core_phases(output, test_name, fails)[source]
+

Asserts that ‘output’ contains a line for each of the core test +phases for the given test_name. All results should be PASS +except those given by the fails list, which should be FAILS.

+
+ +
+
+assert_num_expected_unexpected_fails(output, num_expected, num_unexpected)[source]
+

Asserts that the number of occurrences of expected and unexpected fails in +‘output’ matches the given numbers

+
+ +
+
+assert_phase_absent(output, phase, test_name)[source]
+

Asserts that ‘output’ does not contain a status line for the +given phase and test_name

+
+ +
+
+assert_status_of_phase(output, status, phase, test_name, xfail=None)[source]
+

Asserts that ‘output’ contains a line showing the given +status for the given phase for the given test_name.

+

‘xfail’ should have one of the following values: +- None (the default): assertion passes regardless of whether there is an

+
+

EXPECTED/UNEXPECTED string

+
+
    +
  • ‘no’: The line should end with the phase, with no additional text after that

  • +
  • ‘expected’: After the phase, the line should contain ‘(EXPECTED FAILURE)’

  • +
  • ‘unexpected’: After the phase, the line should contain ‘(UNEXPECTED’

  • +
+
+ +
+ +
+
+

CIME.tests.scripts_regression_tests module

+

Script containing CIME python regression test suite. This suite should be run +to confirm overall CIME correctness.

+
+
+CIME.tests.scripts_regression_tests.cleanup(test_root)[source]
+
+ +
+
+CIME.tests.scripts_regression_tests.configure_tests(timeout, no_fortran_run, fast, no_batch, no_cmake, no_teardown, machine, compiler, mpilib, test_root, **kwargs)[source]
+
+ +
+
+CIME.tests.scripts_regression_tests.setup_arguments(parser)[source]
+
+ +
+
+CIME.tests.scripts_regression_tests.write_provenance_info(machine, test_compiler, test_mpilib, test_root)[source]
+
+ +
+
+

CIME.tests.test_sys_bless_tests_results module

+
+
+class CIME.tests.test_sys_bless_tests_results.TestBlessTestResults(methodName='runTest')[source]
+

Bases: BaseTestCase

+
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+tearDown()[source]
+

Hook method for deconstructing the test fixture after testing it.

+
+ +
+
+test_bless_test_results()[source]
+
+ +
+
+test_rebless_namelist()[source]
+
+ +
+ +
+
+

CIME.tests.test_sys_build_system module

+
+
+class CIME.tests.test_sys_build_system.TestBuildSystem(methodName='runTest')[source]
+

Bases: BaseTestCase

+
+
+test_clean_rebuild()[source]
+
+ +
+ +
+
+

CIME.tests.test_sys_cime_case module

+
+
+class CIME.tests.test_sys_cime_case.TestCimeCase(methodName='runTest')[source]
+

Bases: BaseTestCase

+
+
+test_case_clean()[source]
+
+ +
+
+test_case_submit_interface()[source]
+
+ +
+
+test_cime_case()[source]
+
+ +
+
+test_cime_case_allow_failed_prereq()[source]
+
+ +
+
+test_cime_case_build_threaded_1()[source]
+
+ +
+
+test_cime_case_build_threaded_2()[source]
+
+ +
+
+test_cime_case_force_pecount()[source]
+
+ +
+
+test_cime_case_mpi_serial()[source]
+
+ +
+
+test_cime_case_prereq()[source]
+
+ +
+
+test_cime_case_resubmit_immediate()[source]
+
+ +
+
+test_cime_case_st_archive_resubmit()[source]
+
+ +
+
+test_cime_case_test_custom_project()[source]
+
+ +
+
+test_cime_case_test_walltime_mgmt_1()[source]
+
+ +
+
+test_cime_case_test_walltime_mgmt_2()[source]
+
+ +
+
+test_cime_case_test_walltime_mgmt_3()[source]
+
+ +
+
+test_cime_case_test_walltime_mgmt_4()[source]
+
+ +
+
+test_cime_case_test_walltime_mgmt_5()[source]
+
+ +
+
+test_cime_case_test_walltime_mgmt_6()[source]
+
+ +
+
+test_cime_case_test_walltime_mgmt_7()[source]
+
+ +
+
+test_cime_case_test_walltime_mgmt_8()[source]
+
+ +
+
+test_cime_case_xmlchange_append()[source]
+
+ +
+
+test_configure()[source]
+
+ +
+
+test_create_test_longname()[source]
+
+ +
+
+test_env_loading()[source]
+
+ +
+
+test_self_build_cprnc()[source]
+
+ +
+
+test_xml_caching()[source]
+
+ +
+ +
+
+

CIME.tests.test_sys_cime_performance module

+
+
+class CIME.tests.test_sys_cime_performance.TestCimePerformance(methodName='runTest')[source]
+

Bases: BaseTestCase

+
+
+test_cime_case_ctrl_performance()[source]
+
+ +
+ +
+
+

CIME.tests.test_sys_create_newcase module

+
+
+class CIME.tests.test_sys_create_newcase.TestCreateNewcase(methodName='runTest')[source]
+

Bases: BaseTestCase

+
+
+classmethod setUpClass()[source]
+

Hook method for setting up class fixture before running tests in the class.

+
+ +
+
+tearDown()[source]
+

Hook method for deconstructing the test fixture after testing it.

+
+ +
+
+classmethod tearDownClass()[source]
+

Hook method for deconstructing the class fixture after running all tests in the class.

+
+ +
+
+test_a_createnewcase()[source]
+
+ +
+
+test_aa_no_flush_on_instantiate()[source]
+
+ +
+
+test_b_user_mods()[source]
+
+ +
+
+test_c_create_clone_keepexe()[source]
+
+ +
+
+test_d_create_clone_new_user()[source]
+
+ +
+
+test_dd_create_clone_not_writable()[source]
+
+ +
+
+test_e_xmlquery()[source]
+
+ +
+
+test_f_createnewcase_with_user_compset()[source]
+
+ +
+
+test_g_createnewcase_with_user_compset_and_env_mach_pes()[source]
+
+ +
+
+test_h_primary_component()[source]
+
+ +
+
+test_j_createnewcase_user_compset_vs_alias()[source]
+

Create a compset using the alias and another compset using the full compset name +and make sure they are the same by comparing the namelist files in CaseDocs. +Ignore the modelio files and clean the directory names out first.

+
+ +
+
+test_k_append_config()[source]
+
+ +
+
+test_ka_createnewcase_extra_machines_dir()[source]
+
+ +
+
+test_m_createnewcase_alternate_drivers()[source]
+
+ +
+
+test_n_createnewcase_bad_compset()[source]
+
+ +
+ +
+
+

CIME.tests.test_sys_full_system module

+
+
+class CIME.tests.test_sys_full_system.TestFullSystem(methodName='runTest')[source]
+

Bases: BaseTestCase

+
+
+test_full_system()[source]
+
+ +
+ +
+
+

CIME.tests.test_sys_grid_generation module

+
+
+class CIME.tests.test_sys_grid_generation.TestGridGeneration(methodName='runTest')[source]
+

Bases: BaseTestCase

+
+
+classmethod setUpClass()[source]
+

Hook method for setting up class fixture before running tests in the class.

+
+ +
+
+classmethod tearDownClass()[source]
+

Hook method for deconstructing the class fixture after running all tests in the class.

+
+ +
+
+test_gen_domain()[source]
+
+ +
+ +
+
+

CIME.tests.test_sys_jenkins_generic_job module

+
+
+class CIME.tests.test_sys_jenkins_generic_job.TestJenkinsGenericJob(methodName='runTest')[source]
+

Bases: BaseTestCase

+
+
+assert_num_leftovers(suite)[source]
+
+ +
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+simple_test(expect_works, extra_args, build_name=None)[source]
+
+ +
+
+tearDown()[source]
+

Hook method for deconstructing the test fixture after testing it.

+
+ +
+
+test_jenkins_generic_job()[source]
+
+ +
+
+test_jenkins_generic_job_kill()[source]
+
+ +
+
+test_jenkins_generic_job_realistic_dash()[source]
+
+ +
+
+test_jenkins_generic_job_save_timing()[source]
+
+ +
+
+threaded_test(expect_works, extra_args, build_name=None)[source]
+
+ +
+ +
+
+

CIME.tests.test_sys_manage_and_query module

+
+
+class CIME.tests.test_sys_manage_and_query.TestManageAndQuery(methodName='runTest')[source]
+

Bases: BaseTestCase

+

Tests various scripts to manage and query xml files

+
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+test_query_testlists_count_runs()[source]
+

Make sure that query_testlists runs successfully with the --count argument

+
+ +
+
+test_query_testlists_define_testtypes_runs()[source]
+

Make sure that query_testlists runs successfully with the --define-testtypes argument

+
+ +
+
+test_query_testlists_list_runs()[source]
+

Make sure that query_testlists runs successfully with the --list argument

+
+ +
+
+test_query_testlists_runs()[source]
+

Make sure that query_testlists runs successfully

+

This simply makes sure that query_testlists doesn’t generate any errors when it runs. This helps ensure that changes in other utilities don’t break query_testlists.

+
+ +
+ +
+
+

CIME.tests.test_sys_query_config module

+
+
+class CIME.tests.test_sys_query_config.TestQueryConfig(methodName='runTest')[source]
+

Bases: BaseTestCase

+
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+test_query_components()[source]
+
+ +
+
+test_query_compsets()[source]
+
+ +
+
+test_query_grids()[source]
+
+ +
+
+test_query_machines()[source]
+
+ +
+ +
+
+

CIME.tests.test_sys_run_restart module

+
+
+class CIME.tests.test_sys_run_restart.TestRunRestart(methodName='runTest')[source]
+

Bases: BaseTestCase

+
+
+test_run_restart()[source]
+
+ +
+
+test_run_restart_too_many_fails()[source]
+
+ +
+ +
+
+

CIME.tests.test_sys_save_timings module

+
+
+class CIME.tests.test_sys_save_timings.TestSaveTimings(methodName='runTest')[source]
+

Bases: BaseTestCase

+
+
+simple_test(manual_timing=False)[source]
+
+ +
+
+test_save_timings()[source]
+
+ +
+
+test_save_timings_manual()[source]
+
+ +
+
+test_success_recording()[source]
+
+ +
+ +
+
+

CIME.tests.test_sys_single_submit module

+
+
+class CIME.tests.test_sys_single_submit.TestSingleSubmit(methodName='runTest')[source]
+

Bases: BaseTestCase

+
+
+test_single_submit()[source]
+
+ +
+ +
+
+

CIME.tests.test_sys_test_scheduler module

+
+
+class CIME.tests.test_sys_test_scheduler.TestTestScheduler(methodName='runTest')[source]
+

Bases: BaseTestCase

+
+
+test_a_phases()[source]
+
+ +
+
+test_b_full()[source]
+
+ +
+
+test_c_use_existing()[source]
+
+ +
+
+test_chksum(strftime)[source]
+
+ +
+
+test_d_retry()[source]
+
+ +
+
+test_e_test_inferred_compiler()[source]
+
+ +
+
+test_force_rebuild()[source]
+
+ +
+ +
+
+

CIME.tests.test_sys_unittest module

+
+
+class CIME.tests.test_sys_unittest.TestUnitTest(methodName='runTest')[source]
+

Bases: BaseTestCase

+
+
+classmethod setUpClass()[source]
+

Hook method for setting up class fixture before running tests in the class.

+
+ +
+
+classmethod tearDownClass()[source]
+

Hook method for deconstructing the class fixture after running all tests in the class.

+
+ +
+
+test_a_unit_test()[source]
+
+ +
+
+test_b_cime_f90_unit_tests()[source]
+
+ +
+ +
+
+

CIME.tests.test_sys_user_concurrent_mods module

+
+
+class CIME.tests.test_sys_user_concurrent_mods.TestUserConcurrentMods(methodName='runTest')[source]
+

Bases: BaseTestCase

+
+
+test_user_concurrent_mods()[source]
+
+ +
+ +
+
+

CIME.tests.test_sys_wait_for_tests module

+
+
+class CIME.tests.test_sys_wait_for_tests.TestWaitForTests(methodName='runTest')[source]
+

Bases: BaseTestCase

+
+
+live_test_impl(testdir, expected_results, last_phase, last_status)[source]
+
+ +
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+simple_test(testdir, expected_results, extra_args='', build_name=None)[source]
+
+ +
+
+tearDown()[source]
+

Hook method for deconstructing the test fixture after testing it.

+
+ +
+
+test_wait_for_test_all_pass()[source]
+
+ +
+
+test_wait_for_test_cdash_kill()[source]
+
+ +
+
+test_wait_for_test_cdash_pass()[source]
+
+ +
+
+test_wait_for_test_no_wait()[source]
+
+ +
+
+test_wait_for_test_test_status_integration_pass()[source]
+
+ +
+
+test_wait_for_test_test_status_integration_submit_fail()[source]
+
+ +
+
+test_wait_for_test_timeout()[source]
+
+ +
+
+test_wait_for_test_wait_for_missing_run_phase()[source]
+
+ +
+
+test_wait_for_test_wait_for_pend()[source]
+
+ +
+
+test_wait_for_test_wait_kill()[source]
+
+ +
+
+test_wait_for_test_with_fail()[source]
+
+ +
+
+threaded_test(testdir, expected_results, extra_args='', build_name=None)[source]
+
+ +
+ +
+
+

CIME.tests.test_unit_aprun module

+
+
+class CIME.tests.test_unit_aprun.TestUnitAprun(methodName='runTest')[source]
+

Bases: TestCase

+
+
+test_aprun()[source]
+
+ +
+
+test_aprun_extra_args()[source]
+
+ +
+ +
+
+

CIME.tests.test_unit_baselines_performance module

+
+
+class CIME.tests.test_unit_baselines_performance.TestUnitBaselinesPerformance(methodName='runTest')[source]
+

Bases: TestCase

+
+
+test__perf_get_memory(get_latest_cpl_logs, get_cpl_mem_usage)[source]
+
+ +
+
+test__perf_get_memory_override(get_latest_cpl_logs, get_cpl_mem_usage)[source]
+
+ +
+
+test__perf_get_throughput(get_latest_cpl_logs, get_cpl_throughput)[source]
+
+ +
+
+test_get_cpl_mem_usage(isfile)[source]
+
+ +
+
+test_get_cpl_mem_usage_gz()[source]
+
+ +
+
+test_get_cpl_throughput()[source]
+
+ +
+
+test_get_cpl_throughput_no_file()[source]
+
+ +
+
+test_get_latest_cpl_logs()[source]
+
+ +
+
+test_get_latest_cpl_logs_found_multiple()[source]
+
+ +
+
+test_get_latest_cpl_logs_found_single()[source]
+
+ +
+
+test_perf_compare_memory_baseline(get_latest_cpl_logs, read_baseline_file, get_cpl_mem_usage)[source]
+
+ +
+
+test_perf_compare_memory_baseline_above_threshold(get_latest_cpl_logs, read_baseline_file, get_cpl_mem_usage)[source]
+
+ +
+
+test_perf_compare_memory_baseline_no_baseline(get_latest_cpl_logs, read_baseline_file, get_cpl_mem_usage)[source]
+
+ +
+
+test_perf_compare_memory_baseline_no_baseline_file(get_latest_cpl_logs, read_baseline_file, get_cpl_mem_usage)[source]
+
+ +
+
+test_perf_compare_memory_baseline_no_tolerance(get_latest_cpl_logs, read_baseline_file, get_cpl_mem_usage)[source]
+
+ +
+
+test_perf_compare_memory_baseline_not_enough_samples(get_latest_cpl_logs, read_baseline_file, get_cpl_mem_usage)[source]
+
+ +
+
+test_perf_compare_throughput_baseline(get_latest_cpl_logs, read_baseline_file, _perf_get_throughput)[source]
+
+ +
+
+test_perf_compare_throughput_baseline_above_threshold(get_latest_cpl_logs, read_baseline_file, _perf_get_throughput)[source]
+
+ +
+
+test_perf_compare_throughput_baseline_no_baseline(get_latest_cpl_logs, read_baseline_file, _perf_get_throughput)[source]
+
+ +
+
+test_perf_compare_throughput_baseline_no_baseline_file(get_latest_cpl_logs, read_baseline_file, _perf_get_throughput)[source]
+
+ +
+
+test_perf_compare_throughput_baseline_no_tolerance(get_latest_cpl_logs, read_baseline_file, _perf_get_throughput)[source]
+
+ +
+
+test_perf_get_memory()[source]
+
+ +
+
+test_perf_get_memory_default(_perf_get_memory)[source]
+
+ +
+
+test_perf_get_throughput()[source]
+
+ +
+
+test_perf_get_throughput_default(_perf_get_throughput)[source]
+
+ +
+
+test_perf_write_baseline(perf_get_throughput, perf_get_memory, write_baseline_file)[source]
+
+ +
+
+test_read_baseline_file_content()[source]
+
+ +
+
+test_read_baseline_file_multi_line()[source]
+
+ +
+
+test_write_baseline_file()[source]
+
+ +
+
+test_write_baseline_runtimeerror(perf_get_throughput, perf_get_memory, write_baseline_file)[source]
+
+ +
+
+test_write_baseline_skip(perf_get_throughput, perf_get_memory, write_baseline_file)[source]
+
+ +
+ +
+
+CIME.tests.test_unit_baselines_performance.create_mock_case(tempdir, get_latest_cpl_logs=None)[source]
+
+ +
+
+

CIME.tests.test_unit_bless_test_results module

+
+
+class CIME.tests.test_unit_bless_test_results.TestUnitBlessTestResults(methodName='runTest')[source]
+

Bases: TestCase

+
+
+test_baseline_name_none(get_test_status_files, TestStatus, Case, bless_namelists)[source]
+
+ +
+
+test_baseline_root_none(get_test_status_files, TestStatus, Case)[source]
+
+ +
+
+test_bless_all(get_test_status_files, TestStatus, Case)[source]
+
+ +
+
+test_bless_hist_only(get_test_status_files, TestStatus, Case, bless_history)[source]
+
+ +
+
+test_bless_history(compare_baseline)[source]
+
+ +
+
+test_bless_history_fail(compare_baseline, generate_baseline)[source]
+
+ +
+
+test_bless_history_force(compare_baseline, generate_baseline)[source]
+
+ +
+
+test_bless_memory(perf_compare_memory_baseline)[source]
+
+ +
+
+test_bless_memory_file_not_found_error(perf_compare_memory_baseline, perf_write_baseline)[source]
+
+ +
+
+test_bless_memory_force(perf_compare_memory_baseline, perf_write_baseline)[source]
+
+ +
+
+test_bless_memory_force_error(perf_compare_memory_baseline, perf_write_baseline)[source]
+
+ +
+
+test_bless_memory_general_error(perf_compare_memory_baseline, perf_write_baseline)[source]
+
+ +
+
+test_bless_memory_only(get_test_status_files, TestStatus, Case, _bless_memory, _bless_throughput)[source]
+
+ +
+
+test_bless_memory_report_only(perf_compare_memory_baseline)[source]
+
+ +
+
+test_bless_namelists_fail(run_cmd, get_scripts_root)[source]
+
+ +
+
+test_bless_namelists_force(run_cmd, get_scripts_root)[source]
+
+ +
+
+test_bless_namelists_new_test_id(run_cmd, get_scripts_root)[source]
+
+ +
+
+test_bless_namelists_new_test_root(run_cmd, get_scripts_root)[source]
+
+ +
+
+test_bless_namelists_only(get_test_status_files, TestStatus, Case, bless_namelists)[source]
+
+ +
+
+test_bless_namelists_pes_file(run_cmd, get_scripts_root)[source]
+
+ +
+
+test_bless_namelists_report_only()[source]
+
+ +
+
+test_bless_perf(get_test_status_files, TestStatus, Case, _bless_memory, _bless_throughput)[source]
+
+ +
+
+test_bless_tests_no_match(get_test_status_files, TestStatus, Case)[source]
+
+ +
+
+test_bless_tests_results_fail(get_test_status_files, TestStatus, Case, bless_namelists, bless_history, _bless_throughput, _bless_memory)[source]
+
+ +
+
+test_bless_tests_results_homme(get_test_status_files, TestStatus, Case, bless_namelists, bless_history, _bless_throughput, _bless_memory)[source]
+
+ +
+
+test_bless_throughput(perf_compare_throughput_baseline)[source]
+
+ +
+
+test_bless_throughput_file_not_found_error(perf_compare_throughput_baseline, perf_write_baseline)[source]
+
+ +
+
+test_bless_throughput_force(perf_compare_throughput_baseline, perf_write_baseline)[source]
+
+ +
+
+test_bless_throughput_force_error(perf_compare_throughput_baseline, perf_write_baseline)[source]
+
+ +
+
+test_bless_throughput_general_error(perf_compare_throughput_baseline)[source]
+
+ +
+
+test_bless_throughput_only(get_test_status_files, TestStatus, Case, _bless_memory, _bless_throughput)[source]
+
+ +
+
+test_bless_throughput_report_only(perf_compare_throughput_baseline)[source]
+
+ +
+
+test_exclude(get_test_status_files, TestStatus, Case)[source]
+
+ +
+
+test_is_bless_needed()[source]
+
+ +
+
+test_is_bless_needed_baseline_fail()[source]
+
+ +
+
+test_is_bless_needed_no_run_phase()[source]
+
+ +
+
+test_is_bless_needed_no_skip_fail()[source]
+
+ +
+
+test_is_bless_needed_overall_fail()[source]
+
+ +
+
+test_is_bless_needed_run_phase_fail()[source]
+
+ +
+
+test_multiple_files(get_test_status_files, TestStatus, Case)[source]
+
+ +
+
+test_no_skip_pass(get_test_status_files, TestStatus, Case, bless_namelists, bless_history, _bless_throughput, _bless_memory)[source]
+
+ +
+
+test_specific(get_test_status_files, TestStatus, Case)[source]
+
+ +
+ +
+
+

CIME.tests.test_unit_case module

+
+
+class CIME.tests.test_unit_case.TestCase(methodName='runTest')[source]
+

Bases: TestCase

+
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+test_copy(getuser, getfqdn, configure, create_caseroot, apply_user_mods, set_lookup_value, lock_file, strftime, read_xml)[source]
+
+ +
+
+test_create(get_user, getfqdn, configure, create_caseroot, apply_user_mods, set_lookup_value, lock_file, strftime, read_xml)[source]
+
+ +
+
+test_fix_sys_argv_quotes(read_xml)[source]
+
+ +
+
+test_fix_sys_argv_quotes_incomplete(read_xml)[source]
+
+ +
+
+test_fix_sys_argv_quotes_kv(read_xml)[source]
+
+ +
+
+test_fix_sys_argv_quotes_val(read_xml)[source]
+
+ +
+
+test_fix_sys_argv_quotes_val_quoted(read_xml)[source]
+
+ +
+
+test_new_hash(getuser, getfqdn, strftime, read_xml)[source]
+
+ +
+ +
+
+class CIME.tests.test_unit_case.TestCaseSubmit(methodName='runTest')[source]
+

Bases: TestCase

+
+
+test__submit(lock_file, unlock_file, basename)[source]
+
+ +
+
+test_check_case()[source]
+
+ +
+
+test_check_case_test()[source]
+
+ +
+
+test_submit(read_xml, get_value, init, _submit)[source]
+
+ +
+ +
+
+class CIME.tests.test_unit_case.TestCase_RecordCmd(methodName='runTest')[source]
+

Bases: TestCase

+
+
+assert_calls_match(calls, expected)[source]
+
+ +
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+test_cmd_arg(get_value, flush, init)[source]
+
+ +
+
+test_error(strftime, get_value, flush, init)[source]
+
+ +
+
+test_init(strftime, get_value, flush, init)[source]
+
+ +
+
+test_sub_relative(strftime, get_value, flush, init)[source]
+
+ +
+ +
+
+CIME.tests.test_unit_case.make_valid_case(path)[source]
+

Make the given path look like a valid case to avoid errors

+
+ +
+
+

CIME.tests.test_unit_case_fake module

+

This module contains unit tests of CaseFake

+
+
+class CIME.tests.test_unit_case_fake.TestCaseFake(methodName='runTest')[source]
+

Bases: TestCase

+
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+tearDown()[source]
+

Hook method for deconstructing the test fixture after testing it.

+
+ +
+
+test_create_clone()[source]
+
+ +
+ +
+
+

CIME.tests.test_unit_case_setup module

+
+
+class CIME.tests.test_unit_case_setup.TestCaseSetup(methodName='runTest')[source]
+

Bases: TestCase

+
+
+test_create_macros(_create_macros_cmake)[source]
+
+ +
+
+test_create_macros_cmake(copy_depends_files)[source]
+
+ +
+
+test_create_macros_copy_extra()[source]
+
+ +
+
+test_create_macros_copy_user()[source]
+
+ +
+ +
+
+CIME.tests.test_unit_case_setup.chdir(path)[source]
+
+ +
+
+CIME.tests.test_unit_case_setup.create_machines_dir()[source]
+

Creates temp machines directory with fake content

+
+ +
+
+

CIME.tests.test_unit_compare_test_results module

+

This module contains unit tests for compare_test_results

+
+
+class CIME.tests.test_unit_compare_test_results.TestCaseFake(methodName='runTest')[source]
+

Bases: TestCase

+
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+tearDown()[source]
+

Hook method for deconstructing the test fixture after testing it.

+
+ +
+
+test_baseline()[source]
+
+ +
+
+test_failed_early()[source]
+
+ +
+
+test_hist_only()[source]
+
+ +
+
+test_namelists_only()[source]
+
+ +
+ +
+
+

CIME.tests.test_unit_compare_two module

+

This module contains unit tests of the core logic in SystemTestsCompareTwo.

+
+
+class CIME.tests.test_unit_compare_two.Call(method, arguments)
+

Bases: tuple

+
+
+arguments
+

Alias for field number 1

+
+ +
+
+method
+

Alias for field number 0

+
+ +
+ +
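The Call record documented above is a namedtuple whose field 0 is the method and field 1 its arguments. A minimal sketch of the call-logging pattern the fake test class uses; the log contents below are illustrative, not taken from the test suite:

```python
from collections import namedtuple

# Mirrors the documented record: field 0 is "method", field 1 is
# "arguments". A fake implementation appends one Call per invocation
# so a test can later assert on the exact sequence of internal calls.
Call = namedtuple("Call", ["method", "arguments"])

log = []
log.append(Call("run_indv", {"suffix": "base"}))
log.append(Call("run_indv", {"suffix": "test"}))

# Fields are reachable both by name and by tuple index.
suffixes = [c.arguments["suffix"] for c in log]
```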
+
+class CIME.tests.test_unit_compare_two.SystemTestsCompareTwoFake(case1, run_one_suffix='base', run_two_suffix='test', separate_builds=False, multisubmit=False, case2setup_raises_exception=False, run_one_should_pass=True, run_two_should_pass=True, compare_should_pass=True)[source]
+

Bases: SystemTestsCompareTwo

+
+
+run_indv(suffix='base', st_archive=False, submit_resubmits=None, keep_init_generated_files=False)[source]
+

This fake implementation appends to the log and raises an exception if it’s supposed to

+

Note that the Call object appended to the log has the current CASE name in addition to the method arguments. (This is mainly to ensure that the proper suffix is used for the proper case, but this extra check can be removed if it’s a maintenance problem.)

+
+ +
+ +
+
+class CIME.tests.test_unit_compare_two.TestSystemTestsCompareTwo(methodName='runTest')[source]
+

Bases: TestCase

+
+
+get_caseroots(casename='mytest')[source]
+

Returns a tuple (case1root, case2root)

+
+ +
+
+get_compare_phase_name(mytest)[source]
+

Returns a string giving the compare phase name for this test

+
+ +
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+tearDown()[source]
+

Hook method for deconstructing the test fixture after testing it.

+
+ +
+
+test_compare_fails()[source]
+
+ +
+
+test_compare_passes()[source]
+
+ +
+
+test_internal_calls_multisubmit_failed_state()[source]
+
+ +
+
+test_resetup_case_single_exe()[source]
+
+ +
+
+test_run1_fails()[source]
+
+ +
+
+test_run2_fails()[source]
+
+ +
+
+test_run_phase_internal_calls()[source]
+
+ +
+
+test_run_phase_internal_calls_multisubmit_phase1()[source]
+
+ +
+
+test_run_phase_internal_calls_multisubmit_phase2()[source]
+
+ +
+
+test_run_phase_passes()[source]
+
+ +
+
+test_setup()[source]
+
+ +
+
+test_setup_case2_exists()[source]
+
+ +
+
+test_setup_error()[source]
+
+ +
+
+test_setup_separate_builds_sharedlibroot()[source]
+
+ +
+ +
+
+

CIME.tests.test_unit_config module

+
+
+class CIME.tests.test_unit_config.TestConfig(methodName='runTest')[source]
+

Bases: TestCase

+
+
+test_class()[source]
+
+ +
+
+test_class_external()[source]
+
+ +
+
+test_load()[source]
+
+ +
+
+test_overwrite()[source]
+
+ +
+ +
+
+

CIME.tests.test_unit_cs_status module

+
+
+class CIME.tests.test_unit_cs_status.TestCsStatus(methodName='runTest')[source]
+

Bases: CustomAssertionsTestStatus

+
+
+create_test_dir(test_dir)[source]
+

Creates the given test directory under testroot.

+

Returns the full path to the created test directory.

+
+ +
+
+static create_test_status_core_passes(test_dir_path, test_name)[source]
+

Creates a TestStatus file in the given path, with PASS status for all core phases

+
+ +
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+set_last_core_phase_to_fail(test_dir_path, test_name)[source]
+

Sets the last core phase to FAIL

+

Returns the name of this phase

+
+ +
+
+static set_phase_to_status(test_dir_path, test_name, phase, status)[source]
+

Sets the given phase to the given status for this test

+
+ +
+
+tearDown()[source]
+

Hook method for deconstructing the test fixture after testing it.

+
+ +
+
+test_count_fails()[source]
+

Test the count of fails with three tests

+

For first phase of interest: First test FAILs, second PASSes, third FAILs; count should be 2, and this phase should not appear individually for each test.

+

For second phase of interest: First test PASSes, second PASSes, third FAILs; count should be 1, and this phase should not appear individually for each test.

+
+ +
+
+test_expected_fails()[source]
+

With the expected_fails_file flag, expected failures should be flagged as such

+
+ +
+
+test_fails_only()[source]
+

With fails_only flag, only fails and pends should appear in the output

+
+ +
+
+test_force_rebuild()[source]
+
+ +
+
+test_single_test()[source]
+

cs_status for a single test should include some minimal expected output

+
+ +
+
+test_two_tests()[source]
+

cs_status for two tests (one with a FAIL) should include some minimal expected output

+
+ +
+ +
+
+

CIME.tests.test_unit_custom_assertions_test_status module

+

This module contains unit tests of CustomAssertionsTestStatus

+
+
+class CIME.tests.test_unit_custom_assertions_test_status.TestCustomAssertions(methodName='runTest')[source]
+

Bases: CustomAssertionsTestStatus

+
+
+static output_line(status, test_name, phase, extra='')[source]
+
+ +
+
+test_assertCorePhases_missingPhase_fails()[source]
+

assert_core_phases fails if there is a missing phase

+
+ +
+
+test_assertCorePhases_passes()[source]
+

assert_core_phases passes when it should

+
+ +
+
+test_assertCorePhases_wrongName_fails()[source]
+

assert_core_phases fails if the test name is wrong

+
+ +
+
+test_assertCorePhases_wrongStatus_fails()[source]
+

assert_core_phases fails if a phase has the wrong status

+
+ +
+
+test_assertPhaseAbsent_fails()[source]
+

assert_phase_absent should fail when the phase is present for the given test_name

+
+ +
+
+test_assertPhaseAbsent_passes()[source]
+

assert_phase_absent should pass when the phase is absent for the given test_name

+
+ +
+
+test_assertStatusOfPhase_withExtra_passes()[source]
+

Make sure assert_status_of_phase passes when there is some extra text at the end of the line

+
+ +
+
+test_assertStatusOfPhase_xfailExpected_fails()[source]
+

assert_status_of_phase should fail when xfail=’expected’ but the line does NOT contain the EXPECTED comment

+
+ +
+
+test_assertStatusOfPhase_xfailExpected_passes()[source]
+

assert_status_of_phase should pass when xfail=’expected’ and the line contains the EXPECTED comment

+
+ +
+
+test_assertStatusOfPhase_xfailNo_fails()[source]
+

assert_status_of_phase should fail when xfail=’no’ but the line contains the EXPECTED comment

+
+ +
+
+test_assertStatusOfPhase_xfailNo_passes()[source]
+

assert_status_of_phase should pass when xfail=’no’ and there is no EXPECTED/UNEXPECTED on the line

+
+ +
+
+test_assertStatusOfPhase_xfailUnexpected_fails()[source]
+

assert_status_of_phase should fail when xfail=’unexpected’ but the line does NOT contain the UNEXPECTED comment

+
+ +
+
+test_assertStatusOfPhase_xfailUnexpected_passes()[source]
+

assert_status_of_phase should pass when xfail=’unexpected’ and the line contains the UNEXPECTED comment

+
+ +
+ +
+
+

CIME.tests.test_unit_doctest module

+
+
+class CIME.tests.test_unit_doctest.TestDocs(methodName='runTest')[source]
+

Bases: BaseTestCase

+
+
+test_lib_docs()[source]
+
+ +
+ +
+
+

CIME.tests.test_unit_expected_fails_file module

+
+
+class CIME.tests.test_unit_expected_fails_file.TestExpectedFailsFile(methodName='runTest')[source]
+

Bases: TestCase

+
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+tearDown()[source]
+

Hook method for deconstructing the test fixture after testing it.

+
+ +
+
+test_basic()[source]
+

Basic test of the parsing of an expected fails file

+
+ +
+
+test_invalid_file()[source]
+

Given an invalid file, an exception should be raised in schema validation

+
+ +
+
+test_same_test_appears_twice()[source]
+

If the same test appears twice, its information should be appended.

+

This is not the typical, expected layout of the file, but it should be handled correctly in case the file is written this way.

+
+ +
+ +
+
+

CIME.tests.test_unit_grids module

+

This module tests some functionality of CIME.XML.grids

+
+
+class CIME.tests.test_unit_grids.TestComponentGrids(methodName='runTest')[source]
+

Bases: TestCase

+

Tests the _ComponentGrids helper class defined in CIME.XML.grids

+
+
+test_check_num_elements_right_ndomains()[source]
+

With the right number of domains for a component, check_num_elements should pass

+
+ +
+
+test_check_num_elements_right_nmaps()[source]
+

With the right number of maps between two components, check_num_elements should pass

+
+ +
+
+test_check_num_elements_wrong_ndomains()[source]
+

With the wrong number of domains for a component, check_num_elements should fail

+
+ +
+
+test_check_num_elements_wrong_nmaps()[source]
+

With the wrong number of maps between two components, check_num_elements should fail

+
+ +
+ +
+
+class CIME.tests.test_unit_grids.TestGrids(methodName='runTest')[source]
+

Bases: TestCase

+

Tests some functionality of CIME.XML.grids

+

Note that much of the functionality of CIME.XML.grids is NOT covered here

+
+
+assert_grid_info_f09_g17(grid_info)[source]
+

Asserts that expected grid info is present and correct when using _MODEL_GRID_F09_G17

+
+ +
+
+assert_grid_info_f09_g17_3glc(grid_info)[source]
+

Asserts that all domain info is present & correct for _MODEL_GRID_F09_G17_3GLC

+
+ +
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+tearDown()[source]
+

Hook method for deconstructing the test fixture after testing it.

+
+ +
+
+test_get_grid_info_3glc()[source]
+

Test of get_grid_info with 3 glc grids

+
+ +
+
+test_get_grid_info_basic()[source]
+

Basic test of get_grid_info

+
+ +
+
+test_get_grid_info_extra_gridmaps()[source]
+

Test of get_grid_info with some extra gridmaps

+
+ +
+
+test_get_grid_info_extra_required_gridmaps()[source]
+

Test of get_grid_info with some extra required gridmaps

+
+ +
+ +
+
+class CIME.tests.test_unit_grids.TestGridsFunctions(methodName='runTest')[source]
+

Bases: TestCase

+

Tests helper functions defined in CIME.XML.grids

+

These tests are in a separate class to avoid the unnecessary setUp and tearDown functions of the main test class.

+
+
+test_add_grid_info_existing()[source]
+

Test of _add_grid_info when the given key already exists

+
+ +
+
+test_add_grid_info_existing_with_value_for_multiple()[source]
+

Test of _add_grid_info when the given key already exists and value_for_multiple is provided

+
+ +
+
+test_add_grid_info_initial()[source]
+

Test of _add_grid_info for the initial add of a given key

+
+ +
+
+test_strip_grid_from_name_badname()[source]
+

_strip_grid_from_name should raise an exception for a name not ending with _grid

+
+ +
+
+test_strip_grid_from_name_basic()[source]
+

Basic test of _strip_grid_from_name

+
+ +
+ +
+
+

CIME.tests.test_unit_hist_utils module

+
+
+class CIME.tests.test_unit_hist_utils.TestHistUtils(methodName='runTest')[source]
+

Bases: TestCase

+
+
+test_copy_histfiles(safe_copy)[source]
+
+ +
+
+test_copy_histfiles_exclude(safe_copy)[source]
+
+ +
+ +
+
+

CIME.tests.test_unit_nmlgen module

+
+
+class CIME.tests.test_unit_nmlgen.TestNamelistGenerator(methodName='runTest')[source]
+

Bases: TestCase

+
+
+test_init_defaults()[source]
+
+ +
+ +
+
+

CIME.tests.test_unit_paramgen module

+

This module tests some functionality of CIME.ParamGen.paramgen’s ParamGen class

+
+
+class CIME.tests.test_unit_paramgen.DummyCase[source]
+

Bases: object

+

A dummy Case class that mimics CIME class objects’ get_value method.

+
+
+get_value(varname)[source]
+
+ +
+ +
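The stand-in pattern this dummy class follows can be sketched as below. Only the get_value(varname) interface comes from the documentation above; the class name and variable names are illustrative assumptions:

```python
# Sketch of a stand-in "case" object exposing the get_value(varname)
# lookup interface described above, so code under test never needs a
# real CIME case. The variable names here are illustrative only.
class DummyCaseSketch:
    def __init__(self, values):
        self._values = dict(values)

    def get_value(self, varname):
        # Unknown variables resolve to None rather than raising.
        return self._values.get(varname)

case = DummyCaseSketch({"COMPILER": "gnu", "MPILIB": "openmpi"})
```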
+
+class CIME.tests.test_unit_paramgen.TestParamGen(methodName='runTest')[source]
+

Bases: TestCase

+

Tests some basic functionality of the CIME.ParamGen.paramgen’s ParamGen class

+
+
+test_expandable_vars()[source]
+

Tests the reduce method of ParamGen expandable vars in guards.

+
+ +
+
+test_formula_expansion()[source]
+

Tests the formula expansion feature of ParamGen.

+
+ +
+
+test_init_data()[source]
+

Tests the ParamGen initializer with and without an initial data.

+
+ +
+
+test_match()[source]
+

Tests the default behavior of returning the last match and the optional behavior of returning the first match.

+
+ +
+
+test_nested_reduce()[source]
+

Tests the reduce method of ParamGen on data with nested guards.

+
+ +
+
+test_outer_guards()[source]
+

Tests the reduce method on data with outer guards enclosing parameter definitions.

+
+ +
+
+test_reduce()[source]
+

Tests the reduce method of ParamGen on data with explicit guards (True or False).

+
+ +
+
+test_undefined_var()[source]
+

Tests the reduce method of ParamGen on nested guards where an undefined expandable var is specified below a guard that evaluates to False. The undefined var should not lead to an error since the enclosing guard evaluates to false.

+
+ +
+ +
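The guard-and-reduce idea these tests exercise can be illustrated with a deliberately simplified model. This sketch is not ParamGen’s implementation; the data shape, function name, and guard semantics below are assumptions made for illustration:

```python
# Simplified model of guarded parameter data: each parameter maps
# guard -> candidate value, where a guard is either a literal bool or
# the name of a variable looked up in an environment dict.
# reduce_params() keeps the last candidate whose guard holds (the
# documented default behavior), or the first one when match="first".
def reduce_params(data, env, match="last"):
    reduced = {}
    for param, guarded in data.items():
        for guard, value in guarded.items():
            holds = guard is True or (isinstance(guard, str) and bool(env.get(guard)))
            if holds:
                reduced[param] = value
                if match == "first":
                    break
    return reduced

# One parameter with a default (guard True) and a conditional override.
data = {"dt": {True: 1800, "high_res": 900}}
```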
+
+class CIME.tests.test_unit_paramgen.TestParamGenXmlConstructor(methodName='runTest')[source]
+

Bases: TestCase

+

A unit test class for testing ParamGen’s xml constructor.

+
+
+test_default_var()[source]
+

Test to check if default val is assigned when all guards eval to False

+
+ +
+
+test_duplicate_entry_error()[source]
+

Test to make sure duplicate ids raise the correct error when the “no_duplicates” flag is True.

+
+ +
+
+test_mixed_guard()[source]
+

Tests multiple key=value guards mixed with explicit (flexible) guards.

+
+ +
+
+test_mixed_guard_first()[source]
+

Tests multiple key=value guards mixed with explicit (flexible) guards with match=first option.

+
+ +
+
+test_no_match()[source]
+

Tests an xml entry with no match, i.e., no guards evaluating to True.

+
+ +
+
+test_single_key_val_guard()[source]
+

Test xml entry values with single key=value guards

+
+ +
+ +
+
+class CIME.tests.test_unit_paramgen.TestParamGenYamlConstructor(methodName='runTest')[source]
+

Bases: TestCase

+

A unit test class for testing ParamGen’s yaml constructor.

+
+
+test_input_data_list()[source]
+

Test mom.input_data_list file generation via a subset of original input_data_list.yaml

+
+ +
+
+test_mom_input()[source]
+

Test MOM_input file generation via a subset of original MOM_input.yaml

+
+ +
+ +
+
+

CIME.tests.test_unit_system_tests module

+
+
+class CIME.tests.test_unit_system_tests.TestUnitSystemTests(methodName='runTest')[source]
+

Bases: TestCase

+
+
+test_check_for_memleak(get_latest_cpl_logs, perf_get_memory_list, append_testlog, load_coupler_customization)[source]
+
+ +
+
+test_check_for_memleak_found(get_latest_cpl_logs, perf_get_memory_list, append_testlog, load_coupler_customization)[source]
+
+ +
+
+test_check_for_memleak_not_enough_samples(get_latest_cpl_logs, perf_get_memory_list, append_testlog, load_coupler_customization)[source]
+
+ +
+
+test_check_for_memleak_runtime_error(get_latest_cpl_logs, perf_get_memory_list, append_testlog, load_coupler_customization)[source]
+
+ +
+
+test_compare_memory(append_testlog, perf_compare_memory_baseline)[source]
+
+ +
+
+test_compare_memory_erorr_diff(append_testlog, perf_compare_memory_baseline)[source]
+
+ +
+
+test_compare_memory_erorr_fail(append_testlog, perf_compare_memory_baseline)[source]
+
+ +
+
+test_compare_throughput(append_testlog, perf_compare_throughput_baseline)[source]
+
+ +
+
+test_compare_throughput_error_diff(append_testlog, perf_compare_throughput_baseline)[source]
+
+ +
+
+test_compare_throughput_fail(append_testlog, perf_compare_throughput_baseline)[source]
+
+ +
+
+test_dry_run()[source]
+
+ +
+
+test_generate_baseline()[source]
+
+ +
+
+test_kwargs()[source]
+
+ +
+ +
+
+CIME.tests.test_unit_system_tests.create_mock_case(tempdir, idx=None, cpllog_data=None)[source]
+
+ +
+
+

CIME.tests.test_unit_test_status module

+
+
+class CIME.tests.test_unit_test_status.TestTestStatus(methodName='runTest')[source]
+

Bases: CustomAssertionsTestStatus

+
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+test_current_is()[source]
+
+ +
+
+test_get_latest_phase()[source]
+
+ +
+
+test_psdump_corePhasesPass()[source]
+
+ +
+
+test_psdump_oneCorePhaseFails()[source]
+
+ +
+
+test_psdump_oneCorePhaseFailsAbsentFromXFails()[source]
+

One phase fails. There is an expected fails list, but that phase is not in it.

+
+ +
+
+test_psdump_oneCorePhaseFailsInXFails()[source]
+

One phase fails. That phase is in the expected fails list.

+
+ +
+
+test_psdump_oneCorePhasePassesInXFails()[source]
+

One phase passes despite being in the expected fails list.

+
+ +
+
+test_psdump_skipPasses()[source]
+

With the skip_passes argument, only non-passes should appear

+
+ +
+
+test_psdump_unexpectedPass_shouldBePresent()[source]
+

Even with the skip_passes argument, an unexpected PASS should be present

+
+ +
+ +
+ +
+

CIME.tests.test_unit_user_mod_support module

+
+
+class CIME.tests.test_unit_user_mod_support.TestUserModSupport(methodName='runTest')[source]
+

Bases: TestCase

+
+
+assertResults(expected_user_nl_cpl, expected_shell_commands_result, expected_sourcemod, msg='')[source]
+

Asserts that the contents of the files in self._caseroot match expectations

+

If msg is provided, it is printed for some failing assertions

+
+ +
+
+createUserMod(name, include_dirs=None)[source]
+

Create a user_mods directory with the given name.

+

This directory is created within self._user_mods_parent_dir

+

For name=’foo’, it will contain:

+
  • A user_nl_cpl file with contents: foo

  • A shell_commands file with contents: echo foo >> /PATH/TO/CASEROOT/shell_commands_result

  • A file in _SOURCEMODS named myfile.F90 with contents: foo

If include_dirs is given, it should be a list of strings, giving names of other user_mods directories to include. e.g., if include_dirs is [‘foo1’, ‘foo2’], then this will create a file ‘include_user_mods’ that contains paths to the ‘foo1’ and ‘foo2’ user_mods directories, one per line.
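A rough sketch of how such a user_mods directory could be laid out (a hypothetical helper, not the actual CIME test code; "SourceMods" stands in for the _SOURCEMODS constant, whose exact value is not shown here):

```python
import os

def create_user_mod(parent_dir, name, include_dirs=None):
    # Hypothetical helper mirroring the layout described above.
    mod_dir = os.path.join(parent_dir, name)
    os.makedirs(os.path.join(mod_dir, "SourceMods"), exist_ok=True)
    with open(os.path.join(mod_dir, "user_nl_cpl"), "w") as nl:
        nl.write(name + "\n")
    with open(os.path.join(mod_dir, "shell_commands"), "w") as sh:
        sh.write("echo {} >> $CASEROOT/shell_commands_result\n".format(name))
    with open(os.path.join(mod_dir, "SourceMods", "myfile.F90"), "w") as src:
        src.write(name + "\n")
    if include_dirs:
        # One path per line, as described for 'include_user_mods'.
        with open(os.path.join(mod_dir, "include_user_mods"), "w") as inc:
            for other in include_dirs:
                inc.write(os.path.join(parent_dir, other) + "\n")
    return mod_dir
```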

+
+ +
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+tearDown()[source]
+

Hook method for deconstructing the test fixture after testing it.

+
+ +
+
+test_basic()[source]
+
+ +
+
+test_duplicate_includes()[source]
+

Test multiple includes, where both include the same base mod.

+

The base mod should only be included once.

+
+ +
+
+test_include()[source]
+

If there is an included mod, the main one should appear after the included one so that it takes precedence.

+
+ +
+
+test_keepexe()[source]
+
+ +
+
+test_two_applications()[source]
+

If apply_user_mods is called twice, the second should appear after the first so that it takes precedence.

+
+ +
+ +
+
+

CIME.tests.test_unit_user_nl_utils module

+
+
+class CIME.tests.test_unit_user_nl_utils.TestUserNLCopier(methodName='runTest')[source]
+

Bases: TestCase

+
+
+assertFileContentsEqual(expected, filepath, msg=None)[source]
+

Asserts that the contents of the file given by ‘filepath’ are equal to the string given by ‘expected’. ‘msg’ gives an optional message to be printed if the assertion fails.

+
+ +
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+tearDown()[source]
+

Hook method for deconstructing the test fixture after testing it.

+
+ +
+
+test_append()[source]
+
+ +
+
+test_append_list()[source]
+
+ +
+
+test_append_multiple_files()[source]
+
+ +
+
+test_append_without_files_raises_exception()[source]
+
+ +
+
+write_user_nl_file(component, contents, suffix='')[source]
+

Write contents to a user_nl file in the case directory. Returns the basename (i.e., not the full path) of the file that is created.

+

For a component foo, with the default suffix of ‘’, the file name will be user_nl_foo

+

If the suffix is ‘_0001’, the file name will be user_nl_foo_0001
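The naming rule described above is a simple concatenation; as a sketch (the helper name here is made up for illustration):

```python
def user_nl_filename(component, suffix=""):
    # user_nl_<component><suffix>, e.g. user_nl_foo or user_nl_foo_0001
    return "user_nl_{}{}".format(component, suffix)
```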

+
+ +
+ +
+
+

CIME.tests.test_unit_utils module

+
+
+class CIME.tests.test_unit_utils.MockTime[source]
+

Bases: object

+
+ +
+
+class CIME.tests.test_unit_utils.TestFileContainsPythonFunction(methodName='runTest')[source]
+

Bases: TestCase

+

Tests of file_contains_python_function

+
+
+create_test_file(contents)[source]
+

Creates a test file with the given contents, and returns the path to that file

+
+ +
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+tearDown()[source]
+

Hook method for deconstructing the test fixture after testing it.

+
+ +
+
+test_contains_correct_def_and_others()[source]
+

Test file_contains_python_function with a correct def mixed with other defs

+
+ +
+
+test_does_not_contain_correct_def()[source]
+

Test file_contains_python_function without the correct def

+
+ +
+ +
+
+class CIME.tests.test_unit_utils.TestIndentStr(methodName='runTest')[source]
+

Bases: TestCase

+

Test the indent_string function.
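As a minimal sketch of the behavior under test (the signature and exact semantics are assumed here; this is not the CIME implementation):

```python
def indent_string(string, indent_level):
    # Prefix every line of 'string' with 'indent_level' spaces, which is
    # what the single-line and multi-line cases below exercise.
    prefix = " " * indent_level
    return "\n".join(prefix + line for line in string.split("\n"))
```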

+
+
+test_indent_string_multiline()[source]
+

Test the indent_string function with a multi-line string

+
+ +
+
+test_indent_string_singleline()[source]
+

Test the indent_string function with a single-line string

+
+ +
+ +
+
+class CIME.tests.test_unit_utils.TestLineDefinesPythonFunction(methodName='runTest')[source]
+

Bases: TestCase

+

Tests of _line_defines_python_function

+
+
+test_def_barfoo()[source]
+

Test of a def of a different function

+
+ +
+
+test_def_foo()[source]
+

Test of a def of the function of interest

+
+ +
+
+test_def_foo_indented()[source]
+

Test of a def of the function of interest, but indented

+
+ +
+
+test_def_foo_no_parens()[source]
+

Test of a def of the function of interest, but without parentheses

+
+ +
+
+test_def_foo_space()[source]
+

Test of a def of the function of interest, with an extra space before the parentheses

+
+ +
+
+test_def_foobar()[source]
+

Test of a def of a different function

+
+ +
+
+test_import_barfoo()[source]
+

Test of an import of a different function

+
+ +
+
+test_import_foo()[source]
+

Test of an import of the function of interest

+
+ +
+
+test_import_foo_indented()[source]
+

Test of an import of the function of interest, but indented

+
+ +
+
+test_import_foo_space()[source]
+

Test of an import of the function of interest, with trailing spaces

+
+ +
+
+test_import_foo_then_others()[source]
+

Test of an import of the function of interest, along with others

+
+ +
+
+test_import_foobar()[source]
+

Test of an import of a different function

+
+ +
+
+test_import_others_then_foo()[source]
+

Test of an import of the function of interest, after others
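The cases above (def vs. import, indentation, parentheses, extra spaces, other names) suggest a predicate along these lines; this is an assumed reconstruction for illustration, not the actual `_line_defines_python_function`:

```python
import re

def line_defines_python_function(line, funcname):
    # Assumed rule: a line "defines" funcname if it is a top-level def of
    # it, or an import that brings it into the module's namespace.
    func = re.escape(funcname)
    return bool(
        re.match(r"^def\s+{}\s*\(".format(func), line)
        or re.match(r"^from\s+\S+\s+import\s+.*\b{}\b".format(func), line)
    )
```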

+
+ +
+ +
+
+class CIME.tests.test_unit_utils.TestUtils(methodName='runTest')[source]
+

Bases: TestCase

+
+
+assertMatchAllLines(tempdir, test_lines)[source]
+
+ +
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+test_copy_globs(safe_copy, glob)[source]
+
+ +
+
+test_import_and_run_sub_or_cmd()[source]
+
+ +
+
+test_import_and_run_sub_or_cmd_cime_py(importmodule)[source]
+
+ +
+
+test_import_and_run_sub_or_cmd_import(importmodule)[source]
+
+ +
+
+test_import_and_run_sub_or_cmd_run(func, isfile)[source]
+
+ +
+
+test_import_from_file()[source]
+
+ +
+
+test_run_and_log_case_status()[source]
+
+ +
+
+test_run_and_log_case_status_case_submit_error_on_batch()[source]
+
+ +
+
+test_run_and_log_case_status_case_submit_no_batch()[source]
+
+ +
+
+test_run_and_log_case_status_case_submit_on_batch()[source]
+
+ +
+
+test_run_and_log_case_status_custom_msg()[source]
+
+ +
+
+test_run_and_log_case_status_custom_msg_error_on_batch()[source]
+
+ +
+
+test_run_and_log_case_status_error()[source]
+
+ +
+ +
+
+CIME.tests.test_unit_utils.match_all_lines(data, lines)[source]
+
+ +
+
+

CIME.tests.test_unit_xml_archive_base module

+
+
+class CIME.tests.test_unit_xml_archive_base.TestXMLArchiveBase(methodName='runTest')[source]
+

Bases: TestCase

+
+
+test_exclude_testing()[source]
+
+ +
+
+test_extension_included()[source]
+
+ +
+
+test_match_files()[source]
+
+ +
+
+test_suffix()[source]
+
+ +
+ +
+
+

CIME.tests.test_unit_xml_env_batch module

+
+
+class CIME.tests.test_unit_xml_env_batch.TestXMLEnvBatch(methodName='runTest')[source]
+

Bases: TestCase

+
+
+test_get_job_deps()[source]
+
+ +
+
+test_get_queue_specs(get)[source]
+
+ +
+
+test_get_submit_args()[source]
+
+ +
+
+test_get_submit_args_job_queue()[source]
+
+ +
+
+test_set_job_defaults(get_default_queue, select_best_queue, get_queue_specs, text)[source]
+
+ +
+
+test_set_job_defaults_honor_walltimemax(get_default_queue, select_best_queue, get_queue_specs, text)[source]
+
+ +
+
+test_set_job_defaults_honor_walltimemin(get_default_queue, select_best_queue, get_queue_specs, text)[source]
+
+ +
+
+test_set_job_defaults_user_walltime(get_default_queue, select_best_queue, get_queue_specs, text)[source]
+
+ +
+
+test_set_job_defaults_walltimedef(get_default_queue, select_best_queue, get_queue_specs, text)[source]
+
+ +
+
+test_set_job_defaults_walltimemax_none(get_default_queue, select_best_queue, get_queue_specs, text)[source]
+
+ +
+
+test_set_job_defaults_walltimemin_none(get_default_queue, select_best_queue, get_queue_specs, text)[source]
+
+ +
+
+test_submit_jobs(_submit_single_job)[source]
+
+ +
+
+test_submit_jobs_dependency(_submit_single_job, get_batch_script_for_job, isfile)[source]
+
+ +
+
+test_submit_jobs_single(_submit_single_job, get_batch_script_for_job, isfile)[source]
+
+ +
+ +
+
+

CIME.tests.test_unit_xml_env_mach_specific module

+
+
+class CIME.tests.test_unit_xml_env_mach_specific.TestXMLEnvMachSpecific(methodName='runTest')[source]
+

Bases: TestCase

+
+
+test_aprun_get_args()[source]
+
+ +
+
+test_cmd_path(text, get_optional_child)[source]
+
+ +
+
+test_find_best_mpirun_match()[source]
+
+ +
+
+test_get_aprun_mode_default()[source]
+
+ +
+
+test_get_aprun_mode_not_valid()[source]
+
+ +
+
+test_get_aprun_mode_user_defined()[source]
+
+ +
+
+test_get_mpirun()[source]
+
+ +
+
+test_init_path(text, get_optional_child)[source]
+
+ +
+ +
+
+

CIME.tests.test_unit_xml_machines module

+
+
+class CIME.tests.test_unit_xml_machines.TestUnitXMLMachines(methodName='runTest')[source]
+

Bases: TestCase

+
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+test_has_batch_system()[source]
+
+ +
+
+test_is_valid_MPIlib()[source]
+
+ +
+
+test_is_valid_compiler()[source]
+
+ +
+ +
+
+

CIME.tests.test_unit_xml_namelist_definition module

+
+
+class CIME.tests.test_unit_xml_namelist_definition.TestXMLNamelistDefinition(methodName='runTest')[source]
+

Bases: TestCase

+
+
+test_set_nodes()[source]
+
+ +
+ +
+
+

CIME.tests.test_unit_xml_tests module

+
+
+class CIME.tests.test_unit_xml_tests.TestXMLTests(methodName='runTest')[source]
+

Bases: TestCase

+
+
+setUp()[source]
+

Hook method for setting up the test fixture before exercising it.

+
+ +
+
+test_support_single_exe(_setup_cases_if_not_yet_done)[source]
+
+ +
+
+test_support_single_exe_error(_setup_cases_if_not_yet_done)[source]
+
+ +
+ +
+
+

CIME.tests.utils module

+
+
+class CIME.tests.utils.CMakeTester(parent, cmake_string)[source]
+

Bases: object

+

Helper class for checking CMake output.

+

Public methods: __init__, query_var, assert_variable_equals, assert_variable_matches

+
+
+assert_variable_equals(var_name, value, env=None, var=None)[source]
+

Assert that a variable in the CMakeLists has a given value.

+

Arguments:
var_name - Name of variable to check.
value - The string that the variable value should be equal to.
env - Optional. Dict of environment variables to set when calling cmake.
var - Optional. Dict of CMake variables to set when calling cmake.

+
+ +
+
+assert_variable_matches(var_name, regex, env=None, var=None)[source]
+

Assert that a variable in the CMakeLists matches a regex.

+

Arguments:
var_name - Name of variable to check.
regex - The regex to match.
env - Optional. Dict of environment variables to set when calling cmake.
var - Optional. Dict of CMake variables to set when calling cmake.

+
+ +
+
+query_var(var_name, env, var)[source]
+

Request the value of a variable in Macros.cmake, as a string.

+

Arguments:
var_name - Name of the variable to query.
env - A dict containing extra environment variables to set when calling cmake.
var - A dict containing extra CMake variables to set when calling cmake.

+
+ +
+ +
+
+class CIME.tests.utils.MakefileTester(parent, make_string)[source]
+

Bases: object

+

Helper class for checking Makefile output.

+

Public methods: __init__, query_var, assert_variable_equals, assert_variable_matches

+
+
+assert_variable_equals(var_name, value, env=None, var=None)[source]
+

Assert that a variable in the Makefile has a given value.

+

Arguments:
var_name - Name of variable to check.
value - The string that the variable value should be equal to.
env - Optional. Dict of environment variables to set when calling make.
var - Optional. Dict of make variables to set when calling make.

+
+ +
+
+assert_variable_matches(var_name, regex, env=None, var=None)[source]
+

Assert that a variable in the Makefile matches a regex.

+

Arguments:
var_name - Name of variable to check.
regex - The regex to match.
env - Optional. Dict of environment variables to set when calling make.
var - Optional. Dict of make variables to set when calling make.

+
+ +
+
+query_var(var_name, env, var)[source]
+

Request the value of a variable in the Makefile, as a string.

+

Arguments:
var_name - Name of the variable to query.
env - A dict containing extra environment variables to set when calling make.
var - A dict containing extra make variables to set when calling make.

(The distinction between env and var actually matters only for CMake, though.)

+
+
+
+
+
+ +
+ +
+
+class CIME.tests.utils.MockMachines(name, os_)[source]
+

Bases: object

+

A mock version of the Machines object to simplify testing.

+
+
+get_default_MPIlib(attributes=None)[source]
+
+ +
+
+get_default_compiler()[source]
+
+ +
+
+get_machine_name()[source]
+

Return the name we were given.

+
+ +
+
+get_value(var_name)[source]
+

Allow the operating system to be queried.

+
+ +
+
+is_valid_MPIlib(_)[source]
+

Assume all MPILIB settings are valid.

+
+ +
+
+is_valid_compiler(_)[source]
+

Assume all compilers are valid.

+
+ +
+ +
+
+class CIME.tests.utils.Mocker(ret=None, cmd=None, return_value=None, side_effect=None)[source]
+

Bases: object

+
+
+assert_called()[source]
+
+ +
+
+assert_called_with(i=None, args=None, kwargs=None)[source]
+
+ +
+
+property calls
+
+ +
+
+property method_calls
+
+ +
+
+patch(module, method=None, ret=None, is_property=False, update_value_only=False)[source]
+
+ +
+
+property ret
+
+ +
+
+revert_mocks()[source]
+
+ +
+ +
+
+class CIME.tests.utils.TemporaryDirectory[source]
+

Bases: object

+
+ +
+
+CIME.tests.utils.make_fake_teststatus(path, testname, status, phase)[source]
+
+ +
+
+CIME.tests.utils.parse_test_status(line)[source]
+
+ +
+
+

Module contents

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/CIME_api/modules.html b/branch/azamat/baselines/update-perf-info/html/CIME_api/modules.html new file mode 100644 index 00000000000..33d039c5f85 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/CIME_api/modules.html @@ -0,0 +1,722 @@ + + + + + + + CIME — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

CIME

+
+ +
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_api/Tools.html b/branch/azamat/baselines/update-perf-info/html/Tools_api/Tools.html new file mode 100644 index 00000000000..47e3732a944 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_api/Tools.html @@ -0,0 +1,216 @@ + + + + + + + Tools package — CIME master documentation + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

Tools package

+
+

Submodules

+
+
+

Tools.generate_cylc_workflow module

+

Generates a cylc workflow file for the case. See https://cylc.github.io for details about cylc.

+
+
+Tools.generate_cylc_workflow.cylc_batch_job_template(job, jobname, case, ensemble)[source]
+
+ +
+
+Tools.generate_cylc_workflow.cylc_get_case_path_string(case, ensemble)[source]
+
+ +
+
+Tools.generate_cylc_workflow.cylc_get_ensemble_first_and_last(case, ensemble)[source]
+
+ +
+
+Tools.generate_cylc_workflow.cylc_script_job_template(job, case, ensemble)[source]
+
+ +
+
+Tools.generate_cylc_workflow.parse_command_line(args, description)[source]
+
+ +
+
+

Tools.standard_script_setup module

+

Encapsulate the importing of python utils and logging setup, things that every script should do.

+
+
+Tools.standard_script_setup.check_minimum_python_version(major, minor)[source]
+

Check your python version.

+
>>> check_minimum_python_version(sys.version_info[0], sys.version_info[1])
>>>
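A minimal sketch of what this check might do (the real CIME message text and exit behavior are not shown here, so those details are assumptions):

```python
import sys

def check_minimum_python_version(major, minor):
    # Fail loudly if the running interpreter is older than (major, minor).
    if sys.version_info[:2] < (major, minor):
        raise SystemExit(
            "Python {}.{} or newer is required".format(major, minor)
        )
```

As in the doctest above, calling it with the current interpreter's own version produces no output.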
+
+
+
+ +
+
+

Tools.testreporter module

+

Simple script to populate CESM test database with test results.

+
+
+Tools.testreporter.get_testreporter_xml(testroot, testid, tagname, testtype)[source]
+
+ +
+
+Tools.testreporter.parse_command_line(args)[source]
+
+ +
+
+

Module contents

+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_api/modules.html b/branch/azamat/baselines/update-perf-info/html/Tools_api/modules.html new file mode 100644 index 00000000000..5b4184a83a7 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_api/modules.html @@ -0,0 +1,158 @@ + + + + + + + Tools — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/advanced-py-prof.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/advanced-py-prof.html new file mode 100644 index 00000000000..ecfe6a776fb --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/advanced-py-prof.html @@ -0,0 +1,199 @@ + + + + + + + advanced-py-prof — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

advanced-py-prof

+

advanced-py-prof is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./advanced-py-prof --help
+Traceback (most recent call last):
+  File "<frozen runpy>", line 198, in _run_module_as_main
+  File "<frozen runpy>", line 88, in _run_code
+  File "/opt/hostedtoolcache/Python/3.12.0/x64/lib/python3.12/cProfile.py", line 195, in <module>
+    main()
+  File "/opt/hostedtoolcache/Python/3.12.0/x64/lib/python3.12/cProfile.py", line 172, in main
+    with io.open_code(progname) as fp:
+         ^^^^^^^^^^^^^^^^^^^^^^
+FileNotFoundError: [Errno 2] No such file or directory: 'basename'
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/archive_metadata.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/archive_metadata.html new file mode 100644 index 00000000000..ba3d99f344f --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/archive_metadata.html @@ -0,0 +1,264 @@ + + + + + + + archive_metadata — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

archive_metadata

+

archive_metadata is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./archive_metadata --help
+/home/runner/work/cime/cime/CIME/utils.py:204: SyntaxWarning: invalid escape sequence '\]'
+  chars = "+*?<>/{}[\]~`@:"  # pylint: disable=anomalous-backslash-in-string
+usage: archive_metadata [-h] [-d] [-v] [-s] --user USER --password
+                        [--caseroot CASEROOT] [--workdir WORKDIR] --expType
+                        {CMIP6,production,tuning,lens,C1,C2,C3,C4,C5}
+                        [--title TITLE] [--ignore-logs] [--ignore-timing]
+                        [--ignore-repo-update] [--add-files USER_ADD_FILES]
+                        [--dryrun] [--query_cmip6 QUERY_CMIP6 QUERY_CMIP6]
+                        [--test-post]
+
+Query and parse the caseroot files to gather metadata information that can be
+posted to the CESM experiments database. CMIP6 experiment case names must be
+reserved already in the experiment database. Please see:
+https://csesgweb.cgd.ucar.edu/expdb2.0 for details.
+
+options:
+  -h, --help            show this help message and exit
+  --user USER           User name for SVN CESM developer access (required)
+  --password            Password for SVN CESM developer access (required)
+  --caseroot CASEROOT   Fully qualified path to case root directory
+                        (optional). Defaults to current working directory.
+  --workdir WORKDIR     Fully qualified path to directory for storing
+                        intermediate case files. A sub-directory called
+                        archive_temp_dir is created, populated with case
+                        files, and posted to the CESM experiments database and
+                        SVN repository at URL "https://svn-
+                        cesm2-expdb.cgd.ucar.edu". This argument can be used
+                        to archive a caseroot when the user does not have
+                        write permission in the caseroot (optional). Defaults
+                        to current working directory.
+  --expType {CMIP6,production,tuning,lens,C1,C2,C3,C4,C5}
+                        Experiment type. For CMIP6 experiments, the case must
+                        already exist in the experiments database at URL
+                        "http://csegweb.cgd.ucar.edu/expdb2.0" (required).
+                        Must be one of "['CMIP6', 'production', 'tuning',
+                        'lens', 'C1', 'C2', 'C3', 'C4', 'C5']"
+  --title TITLE         Title of experiment (optional).
+  --ignore-logs         Ignore updating the SVN repository with the
+                        caseroot/logs files. The experiments database will be
+                        updated (optional).
+  --ignore-timing       Ignore updating the SVN repository with
+                        caseroot/timing files. The experiments database will be
+                        updated (optional).
+  --ignore-repo-update  Ignore updating the SVN repository with all the
+                        caseroot files. The experiments database will be
+                        updated (optional).
+  --add-files USER_ADD_FILES
+                        Comma-separated list with no spaces of files or
+                        directories to be added to the SVN repository. These
+                        are in addition to the default added caseroot files
+                        and directories: "['Buildconf', 'CaseDocs',
+                        'CaseStatus', 'LockedFiles', 'Macros.make',
+                        'README.case', 'SourceMods',
+                        'software_environment.txt'], *.xml, user_nl_*"
+                        (optional).
+  --dryrun              Parse settings and print what actions will be taken
+                        but do not execute the action (optional).
+  --query_cmip6 QUERY_CMIP6 QUERY_CMIP6
+                        Query the experiments database global attributes for
+                        specified CMIP6 casename as argument 1. Writes a json
+                        formatted output file, specified by argument 2, to
+                        subdir archive_files (optional).
+  --test-post           Post metadata to the test expdb2.0 web application
+                        server at URL
+                        "http://csegwebdev.cgd.ucar.edu/expdb2.0". No --test-
+                        post argument defaults to posting metadata to the
+                        production expdb2.0 web application server at URL
+                        "http://csegweb.cgd.ucar.edu/expdb2.0" (optional).
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file /home/r
+                        unner/work/cime/cime/CIME/Tools/archive_metadata.log
+  -v, --verbose         Add additional context (time and file) to log messages
+  -s, --silent          Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/bld_diff.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/bld_diff.html new file mode 100644 index 00000000000..c6888e730b7 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/bld_diff.html @@ -0,0 +1,218 @@ + + + + + + + bld_diff — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

bld_diff

+

bld_diff is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./bld_diff --help
+usage: 
+bld_diff log1 log2
+OR
+bld_diff --help
+
+EXAMPLES:
+    > bld_diff case1 case2
+
+Try to calculate and succinctly present the differences between two bld logs
+for the same component
+
+positional arguments:
+  log1                  First log.
+  log2                  Second log.
+
+options:
+  -h, --help            show this help message and exit
+  -I, --ignore-includes
+                        Ignore differences in include flags (default: False)
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file
+                        /home/runner/work/cime/cime/CIME/Tools/bld_diff.log
+                        (default: False)
+  -v, --verbose         Add additional context (time and file) to log messages
+                        (default: False)
+  -s, --silent          Print only warnings and error messages (default:
+                        False)
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/bless_test_results.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/bless_test_results.html new file mode 100644 index 00000000000..21913691f69 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/bless_test_results.html @@ -0,0 +1,191 @@ + + + + + + + bless_test_results — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

bless_test_results

+

bless_test_results is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./bless_test_results --help
+ERROR:  xmllint not found in PATH, xmllint is required for cime.  PATH=/opt/hostedtoolcache/Python/3.12.0/x64/bin:/opt/hostedtoolcache/Python/3.12.0/x64:/snap/bin:/home/runner/.local/bin:/opt/pipx_bin:/home/runner/.cargo/bin:/home/runner/.config/composer/vendor/bin:/usr/local/.ghcup/bin:/home/runner/.dotnet/tools:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/case.build.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/case.build.html new file mode 100644 index 00000000000..ca4871db965 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/case.build.html @@ -0,0 +1,270 @@ + + + + + + + case.build — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

case.build

+

case.build is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./case.build --help
+usage: case.build [-h] [-d] [-v] [-s] [--ninja] [--separate-builds]
+                  [--skip-submit] [--dry-run]
+                  [--sharedlib-only | -m | -b {cpl,atm,lnd,ice,ocn,rof,glc,wav,esp,iac,csmshare,mct,pio,gptl} [{cpl,atm,lnd,ice,ocn,rof,glc,wav,esp,iac,csmshare,mct,pio,gptl} ...]
+                  | --skip-provenance-check | --clean-all | --clean
+                  [{cpl,atm,lnd,ice,ocn,rof,glc,wav,esp,iac,csmshare,mct,pio,gptl} ...]
+                  | --clean-depends
+                  [{cpl,atm,lnd,ice,ocn,rof,glc,wav,esp,iac,csmshare} ...]]
+                  [caseroot]
+
+Builds the case.
+
+case.setup must be run before this. In addition, any changes to env_build.xml
+must be made before running this.
+
+This must be run before running case.submit.
+
+There are two usage modes; both modes accept the --caseroot option, but
+other options are specific to one mode or the other:
+
+1) To build the model:
+
+   Typical usage is simply:
+      ./case.build
+
+   This can be used for the initial build as well as for incrementally
+   rebuilding after changing some source files.
+
+   Optionally, you can specify one of the following options, although this is
+   not common:
+      --sharedlib-only
+      --model-only
+      --build ...
+
+   In addition, if you'd like to skip saving build provenance (typically because
+   there was some error in doing so), you can add:
+      --skip-provenance-check
+
+2) To clean part or all of the build:
+
+   To clean the whole build; this should be done after modifying either
+   env_build.xml or Macros.make:
+      ./case.build --clean-all
+
+   To clean select portions of the build, for example, after adding new source
+   files for one component:
+      ./case.build --clean ...
+   or:
+      ./case.build --clean-depends ...
+
+positional arguments:
+  caseroot              Case directory to build.
+                        Default is current directory.
+
+options:
+  -h, --help            show this help message and exit
+  --ninja               Use the ninja backend for CMake (instead of gmake). The ninja backend is better at scanning fortran dependencies but seems to be less reliable across different platforms and compilers.
+  --separate-builds     Build each component one at a time, separately, with output going to separate logs
+  --skip-submit         Sets the current test phase to RUN, skipping the SUBMIT phase. This may be useful if rebuilding the model while this test is in the batch queue. ONLY USE IF A TEST CASE, OTHERWISE IGNORED.
+  --dry-run             Just print the cmake and ninja commands.
+  --sharedlib-only      Only build shared libraries.
+  -m, --model-only      Assume shared libraries are already built.
+  -b {cpl,atm,lnd,ice,ocn,rof,glc,wav,esp,iac,csmshare,mct,pio,gptl} [{cpl,atm,lnd,ice,ocn,rof,glc,wav,esp,iac,csmshare,mct,pio,gptl} ...], --build {cpl,atm,lnd,ice,ocn,rof,glc,wav,esp,iac,csmshare,mct,pio,gptl} [{cpl,atm,lnd,ice,ocn,rof,glc,wav,esp,iac,csmshare,mct,pio,gptl} ...]
+                        Libraries to build.
+                        Will cause namelist generation to be skipped.
+  --skip-provenance-check
+                        Do not check and save build provenance
+  --clean-all           Clean all objects (including sharedlib objects that may be
+                        used by other builds).
+  --clean [{cpl,atm,lnd,ice,ocn,rof,glc,wav,esp,iac,csmshare,mct,pio,gptl} ...]
+                        Clean objects associated with specific libraries.
+                        With no arguments, clean all objects other than sharedlib objects.
+  --clean-depends [{cpl,atm,lnd,ice,ocn,rof,glc,wav,esp,iac,csmshare} ...]
+                        Clean Depends and Srcfiles only.
+                        This allows you to rebuild after adding new
+                        files in the source tree or in SourceMods.
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file /home/runner/work/cime/cime/CIME/Tools/case.build.log
+  -v, --verbose         Add additional context (time and file) to log messages
+  -s, --silent          Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/case.cmpgen_namelists.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/case.cmpgen_namelists.html new file mode 100644 index 00000000000..c22dd36da75 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/case.cmpgen_namelists.html @@ -0,0 +1,223 @@ + + + + + + + case.cmpgen_namelists — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

case.cmpgen_namelists

+

case.cmpgen_namelists is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./case.cmpgen_namelists --help
+usage: case.cmpgen_namelists [-h] [-d] [-v] [-s] [-c] [-g]
+                             [--compare-name COMPARE_NAME]
+                             [--generate-name GENERATE_NAME]
+                             [--baseline-root BASELINE_ROOT]
+                             [caseroot]
+
+case.cmpgen_namelists - perform namelist baseline operations (compare,
+generate, or both) for this case.
+
+positional arguments:
+  caseroot              Case directory for which namelists are compared/generated. 
+                        Default is current directory.
+
+options:
+  -h, --help            show this help message and exit
+  -c, --compare         Force a namelist comparison against baselines. 
+                        Default is to follow the case specification.
+  -g, --generate        Force a generation of namelist baselines. 
+                        Default is to follow the case specification.
+  --compare-name COMPARE_NAME
+                        Force comparison to use baselines with this name. 
+                        Default is to follow the case specification.
+  --generate-name GENERATE_NAME
+                        Force generation to use baselines with this name. 
+                        Default is to follow the case specification.
+  --baseline-root BASELINE_ROOT
+                        Root of baselines. 
+                        Default is the case's BASELINE_ROOT.
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file /home/runner/work/cime/cime/CIME/Tools/case.cmpgen_namelists.log
+  -v, --verbose         Add additional context (time and file) to log messages
+  -s, --silent          Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/case.qstatus.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/case.qstatus.html new file mode 100644 index 00000000000..2f71f8830f9 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/case.qstatus.html @@ -0,0 +1,208 @@ + + + + + + + case.qstatus — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

case.qstatus

+

case.qstatus is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./case.qstatus --help
+usage: case.qstatus [-h] [-d] [-v] [-s] [caseroot]
+
+Shows the batch status of all jobs associated with this case.
+
+Typical usage is simply:
+   ./case.qstatus
+
+positional arguments:
+  caseroot       Case directory to query.
+                 Default is current directory.
+
+options:
+  -h, --help     show this help message and exit
+
+Logging options:
+  -d, --debug    Print debug information (very verbose) to file /home/runner/work/cime/cime/CIME/Tools/case.qstatus.log
+  -v, --verbose  Add additional context (time and file) to log messages
+  -s, --silent   Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/case.setup.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/case.setup.html new file mode 100644 index 00000000000..daf9eb115ed --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/case.setup.html @@ -0,0 +1,228 @@ + + + + + + + case.setup — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

case.setup

+

case.setup is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./case.setup --help
+usage: case.setup [-h] [-d] [-v] [-s] [-c] [-t] [-r] [-k KEEP] [-N] [caseroot]
+
+Creates various files and directories needed in order to build the case,
+create namelists and run the case.
+
+Any changes to env_mach_pes.xml and env_mach_specific.xml must be made
+before running this.
+
+This must be run before running case.build.
+
+To run this initially for the case, simply run:
+   ./case.setup
+
+To rerun after making changes to env_mach_pes.xml or env_mach_specific.xml, run:
+   ./case.setup --reset
+
+positional arguments:
+  caseroot              Case directory to setup.
+                        Default is current directory.
+
+options:
+  -h, --help            show this help message and exit
+  -c, --clean           Removes the batch run script for the target machine.
+                        If the --test-mode argument is present, the test script
+                        is kept if it exists; otherwise it is removed.
+                        The user_nl_xxx and Macros files are never removed by case.setup -
+                        you must remove them manually.
+  -t, --test-mode       Keeps the test script when the --clean argument is used.
+  -r, --reset           Does a clean followed by setup.
+                        This flag should be used when rerunning case.setup after it
+                        has already been run for this case.
+  -k KEEP, --keep KEEP  When cleaning/resetting a case, do not remove/refresh files in this list. Choices are batch script, env_mach_specific.xml, Macros.make, Macros.cmake. You should use this if you have local modifications to these files that you want to keep.
+  -N, --non-local       Use when you've requested a machine that you aren't on. Will reduce errors for missing directories etc.
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file /home/runner/work/cime/cime/CIME/Tools/case.setup.log
+  -v, --verbose         Add additional context (time and file) to log messages
+  -s, --silent          Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/case.submit.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/case.submit.html new file mode 100644 index 00000000000..49897204503 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/case.submit.html @@ -0,0 +1,249 @@ + + + + + + + case.submit — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

case.submit

+

case.submit is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./case.submit --help
+usage: case.submit [-h] [-d] [-v] [-s] [--job JOB] [--only-job ONLY_JOB]
+                   [--no-batch] [--prereq PREREQ] [--prereq-allow-failure]
+                   [--resubmit] [--resubmit-immediate]
+                   [--skip-preview-namelist] [--mail-user MAIL_USER]
+                   [-M MAIL_TYPE] [-a BATCH_ARGS] [--chksum]
+                   [caseroot]
+
+Submits the case to the queuing system, or runs it if there is no queuing system.
+
+Also submits any other jobs (such as the short-term archiver) associated with this case.
+
+Running case.submit is the only way you should start a job.
+
+Typical usage is simply:
+   ./case.submit
+
+Other examples:
+   ./case.submit -M begin,end
+      Submits the case, requesting mail at job beginning and end
+
+positional arguments:
+  caseroot              Case directory to submit.
+                        Default is current directory.
+
+options:
+  -h, --help            show this help message and exit
+  --job JOB, -j JOB     Name of the job to be submitted;
+                        can be any of the jobs listed in env_batch.xml.
+                        This will be the first job of any defined workflow.  Default is case.run.
+  --only-job ONLY_JOB   Name of the job to be submitted;
+                        can be any of the jobs listed in env_batch.xml.
+                        Only this job will be run, workflow and RESUBMIT are ignored.  Default is case.run.
+  --no-batch            Do not submit jobs to batch system, run locally.
+  --prereq PREREQ       Specify a prerequisite job id, this job will not start until the
+                        job with this id is completed (batch mode only).
+  --prereq-allow-failure
+                        Allows starting the run even if the prerequisite fails.
+                        This also allows resubmits to run if the original failed and the
+                        resubmit was submitted to the queue with the original as a dependency,
+                        as in the case of --resubmit-immediate.
+  --resubmit            Used with tests only, to continue rather than restart a test.
+  --resubmit-immediate  This queues all of the resubmissions immediately after
+                        the first job is queued. These rely on the queue system to
+                        handle dependencies.
+  --skip-preview-namelist
+                        Skip calling preview-namelist during case.run.
+  --mail-user MAIL_USER
+                        Email to be used for batch notification.
+  -M MAIL_TYPE, --mail-type MAIL_TYPE
+                        When to send user email. Options are: never, all, begin, end, fail.
+                        You can specify multiple types with either comma-separated args or multiple -M flags.
+  -a BATCH_ARGS, --batch-args BATCH_ARGS
+                        Used to pass additional arguments to batch system.
+  --chksum              Verifies input data checksums.
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file /home/runner/work/cime/cime/CIME/Tools/case.submit.log
+  -v, --verbose         Add additional context (time and file) to log messages
+  -s, --silent          Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/case_diff.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/case_diff.html new file mode 100644 index 00000000000..054d869114e --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/case_diff.html @@ -0,0 +1,218 @@ + + + + + + + case_diff — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

case_diff

+

case_diff is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./case_diff --help
+usage: 
+case_diff case1 case2 [skip-files]
+OR
+case_diff --help
+
+EXAMPLES:
+    > case_diff case1 case2
+
+Try to calculate and succinctly present the differences between two large
+directory trees.
+
+positional arguments:
+  case1              First case.
+  case2              Second case.
+  skip_list          Skip these files. You'll probably want to skip the bld
+                     directory if it's inside the case (default: None)
+
+options:
+  -h, --help         show this help message and exit
+  -b, --show-binary  Show binary diffs (default: False)
+
+Logging options:
+  -d, --debug        Print debug information (very verbose) to file
+                     /home/runner/work/cime/cime/CIME/Tools/case_diff.log
+                     (default: False)
+  -v, --verbose      Add additional context (time and file) to log messages
+                     (default: False)
+  -s, --silent       Print only warnings and error messages (default: False)
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/check_case.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/check_case.html new file mode 100644 index 00000000000..4919a27b4f9 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/check_case.html @@ -0,0 +1,214 @@ + + + + + + + check_case — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

check_case

+

check_case is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./check_case --help
+usage: check_case [-h] [-d] [-v] [-s]
+
+Script to verify that the case is ready for submission.
+
+Typical usage is simply:
+   ./check_case
+
+You can run this before running case.submit to:
+  - Ensure that all of the env xml files are in sync with the locked files
+  - Create namelists (thus verifying that there will be no problems with
+    namelist generation)
+  - Ensure that the build is complete
+
+Running this is completely optional: these checks will be done
+automatically when running case.submit. However, you can run this if you
+want to perform these checks without actually submitting the case.
+
+options:
+  -h, --help     show this help message and exit
+
+Logging options:
+  -d, --debug    Print debug information (very verbose) to file /home/runner/work/cime/cime/CIME/Tools/check_case.log
+  -v, --verbose  Add additional context (time and file) to log messages
+  -s, --silent   Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/check_input_data.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/check_input_data.html new file mode 100644 index 00000000000..d0f12e15222 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/check_input_data.html @@ -0,0 +1,228 @@ + + + + + + + check_input_data — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

check_input_data

+

check_input_data is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./check_input_data --help
+usage: check_input_data [-h] [-d] [-v] [-s] [--protocol PROTOCOL]
+                        [--server SERVER] [-i INPUT_DATA_ROOT]
+                        [--data-list-dir DATA_LIST_DIR] [--download]
+                        [--chksum]
+
+This script determines if the required data files for your case exist on local disk in the appropriate subdirectory of
+$DIN_LOC_ROOT. It automatically downloads missing data required for your simulation.
+
+It is recommended that users on a given system share a common $DIN_LOC_ROOT directory to avoid duplication on
+disk of large amounts of input data. You may need to talk to your system administrator in order to set this up.
+
+This script should be run from $CASEROOT.
+
+To verify the presence of required data use:
+   ./check_input_data
+
+To obtain missing datasets from the input data server(s) use:
+   ./check_input_data --download
+
+This script is automatically called by the case control system, when the case is built and submitted.
+So manual usage of this script is optional.
+
+options:
+  -h, --help            show this help message and exit
+  --protocol PROTOCOL   The input data protocol to download data.
+  --server SERVER       The input data repository from which to download data.
+  -i INPUT_DATA_ROOT, --input-data-root INPUT_DATA_ROOT
+                        The root directory where input data goes,
+                        use xmlquery DIN_LOC_ROOT to see default value.
+  --data-list-dir DATA_LIST_DIR
+                        Where to find list of input files
+  --download            Attempt to download missing input files
+  --chksum              chksum inputfiles against inputdata_chksum.dat (if available)
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file /home/runner/work/cime/cime/CIME/Tools/check_input_data.log
+  -v, --verbose         Add additional context (time and file) to log messages
+  -s, --silent          Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/check_lockedfiles.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/check_lockedfiles.html new file mode 100644 index 00000000000..1449cec9047 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/check_lockedfiles.html @@ -0,0 +1,213 @@ + + + + + + + check_lockedfiles — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

check_lockedfiles

+

check_lockedfiles is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./check_lockedfiles --help
+usage: 
+check_lockedfiles [--verbose]
+OR
+check_lockedfiles --help
+
+EXAMPLES:
+    # check_lockedfiles SMS
+    > check_lockedfiles
+
+This script compares the case's env xml files against the locked copies
+
+options:
+  -h, --help           show this help message and exit
+  --caseroot CASEROOT  Case directory to build (default:
+                       /home/runner/work/cime/cime/CIME/Tools)
+
+Logging options:
+  -d, --debug          Print debug information (very verbose) to file /home/ru
+                       nner/work/cime/cime/CIME/Tools/check_lockedfiles.log
+                       (default: False)
+  -v, --verbose        Add additional context (time and file) to log messages
+                       (default: False)
+  -s, --silent         Print only warnings and error messages (default: False)
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/cime_bisect.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/cime_bisect.html new file mode 100644 index 00000000000..c6e2f6dae5b --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/cime_bisect.html @@ -0,0 +1,243 @@ + + + + + + + cime_bisect — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cime_bisect

+

cime_bisect is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./cime_bisect --help
+usage: 
+cime_bisect <last-known-good-commit> <testargs> [--bad=<bad>] [--verbose]
+OR
+cime_bisect --help
+
+EXAMPLES:
+    # Bisect ERS.f45_g37.B1850C5 which got broken in the last 4 CIME commits 
+    > cd <root-of-broken-cime-repo>
+    > cime_bisect HEAD~4 ERS.f45_g37.B1850C5
+
+    # Bisect ERS.f45_g37.B1850C5 which got broken in the last 4 MODEL commits 
+    > cd <root-of-broken-model>
+    > cime_bisect HEAD~4 ERS.f45_g37.B1850C5
+
+    # Bisect ERS.f45_g37.B1850C5 which started to DIFF in the last 4 commits 
+    > cd <root-of-broken-cime-repo>
+    > cime_bisect HEAD~4 'ERS.f45_g37.B1850C5 -c -b master'
+
+    # Bisect a build error for ERS.f45_g37.B1850C5 which got broken in the last 4 commits 
+    > cd <root-of-broken-cime-repo>
+    > cime_bisect HEAD~4 'ERS.f45_g37.B1850C5 --no-run'
+
+    # Bisect two different failing tests which got broken in the last 4 commits 
+    > cd <root-of-broken-cime-repo>
+    > cime_bisect HEAD~4 'ERS.f45_g37.B1850C5 --no-run' 'SMS.f45_g37.F'
+
+A script to help track down the commit that caused tests to fail. This script
+can do bisections for both cime and the model that houses it, just be sure you
+run this script from the root of the repo you want to bisect. NOTE: this tool
+will only work for models that use git and, for bisecting CIME, bring in CIME
+via submodule or clone.
+
+positional arguments:
+  good                  Name of most recent known good commit.
+  testargs              String to pass to create_test. Combine with single
+                        quotes if it includes multiple args. (default: None)
+
+options:
+  -h, --help            show this help message and exit
+  -B BAD, --bad BAD     Name of bad commit, default is current HEAD. (default:
+                        HEAD)
+  -a, --all-commits     Test all commits, not just merges (default: False)
+  -S SCRIPT, --script SCRIPT
+                        Use your own custom script instead (default: None)
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file
+                        /home/runner/work/cime/cime/CIME/Tools/cime_bisect.log
+                        (default: False)
+  -v, --verbose         Add additional context (time and file) to log messages
+                        (default: False)
+  -s, --silent          Print only warnings and error messages (default:
+                        False)
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/code_checker.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/code_checker.html new file mode 100644 index 00000000000..df5bf5f40c0 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/code_checker.html @@ -0,0 +1,191 @@ + + + + + + + code_checker — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

code_checker

+

code_checker is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./code_checker --help
+ERROR: pylint not found
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/compare_namelists.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/compare_namelists.html new file mode 100644 index 00000000000..abb60ca269c --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/compare_namelists.html @@ -0,0 +1,219 @@ + + + + + + + compare_namelists — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

compare_namelists

+

compare_namelists is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./compare_namelists --help
+usage: 
+compare_namelists <Path to gold namelist file> <Path to new namelist file> [-c <CASEBASEID>] [--verbose]
+OR
+compare_namelists --help
+
+EXAMPLES:
+    # Compare namelist files
+    > compare_namelists baseline_dir/test/namelistfile mytestarea/namelistfile -c <CASE>
+
+Compare namelists. Should be called by an ACME test. Designed to not be
+sensitive to order or whitespace.
+
+positional arguments:
+  gold_file             Path to gold file
+  new_file              Path to file to compare against gold
+
+options:
+  -h, --help            show this help message and exit
+  -c CASE, --case CASE  The case base id (<TESTCASE>.<GRID>.<COMPSET>). Helps
+                        us normalize data. (default: None)
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file /home/r
+                        unner/work/cime/cime/CIME/Tools/compare_namelists.log
+                        (default: False)
+  -v, --verbose         Add additional context (time and file) to log messages
+                        (default: False)
+  -s, --silent          Print only warnings and error messages (default:
+                        False)
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/compare_test_results.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/compare_test_results.html new file mode 100644 index 00000000000..f3348b0da97 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/compare_test_results.html @@ -0,0 +1,191 @@ + + + + + + + compare_test_results — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

compare_test_results

+

compare_test_results is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./compare_test_results --help
+ERROR:  xmllint not found in PATH, xmllint is required for cime.  PATH=/opt/hostedtoolcache/Python/3.12.0/x64/bin:/opt/hostedtoolcache/Python/3.12.0/x64:/snap/bin:/home/runner/.local/bin:/opt/pipx_bin:/home/runner/.cargo/bin:/home/runner/.config/composer/vendor/bin:/usr/local/.ghcup/bin:/home/runner/.dotnet/tools:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/component_compare_baseline.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/component_compare_baseline.html new file mode 100644 index 00000000000..a4220706846 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/component_compare_baseline.html @@ -0,0 +1,218 @@ + + + + + + + component_compare_baseline — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

component_compare_baseline

+

component_compare_baseline is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./component_compare_baseline --help
+usage: 
+component_compare_baseline [<casedir>] [--verbose]
+OR
+component_compare_baseline --help
+
+EXAMPLES:
+    # Compare baselines 
+    > component_compare_baseline
+
+Compares current component history files against baselines
+
+positional arguments:
+  caseroot              Case directory (default:
+                        /home/runner/work/cime/cime/CIME/Tools)
+
+options:
+  -h, --help            show this help message and exit
+  -b BASELINE_DIR, --baseline-dir BASELINE_DIR
+                        Use custom baseline dir (default: None)
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file /home/r
+                        unner/work/cime/cime/CIME/Tools/component_compare_base
+                        line.log (default: False)
+  -v, --verbose         Add additional context (time and file) to log messages
+                        (default: False)
+  -s, --silent          Print only warnings and error messages (default:
+                        False)
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/component_compare_copy.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/component_compare_copy.html new file mode 100644 index 00000000000..1802c757792 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/component_compare_copy.html @@ -0,0 +1,217 @@ + + + + + + + component_compare_copy — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

component_compare_copy

+

component_compare_copy is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./component_compare_copy --help
+usage: 
+component_compare_copy suffix [<casedir>] [--verbose]
+OR
+component_compare_copy --help
+
+EXAMPLES:
+    # Setup case 
+    > component_compare_copy
+
+Copy the most recent batch of hist files in a case, adding the given suffix.
+This allows us to save these results if we want to run the case again.
+
+positional arguments:
+  caseroot         Case directory (default:
+                   /home/runner/work/cime/cime/CIME/Tools)
+
+options:
+  -h, --help       show this help message and exit
+  --suffix SUFFIX  Suffix to append to hist files (default: None)
+
+Logging options:
+  -d, --debug      Print debug information (very verbose) to file /home/runner
+                   /work/cime/cime/CIME/Tools/component_compare_copy.log
+                   (default: False)
+  -v, --verbose    Add additional context (time and file) to log messages
+                   (default: False)
+  -s, --silent     Print only warnings and error messages (default: False)
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/component_compare_test.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/component_compare_test.html new file mode 100644 index 00000000000..d8286195ee7 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/component_compare_test.html @@ -0,0 +1,217 @@ + + + + + + + component_compare_test — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

component_compare_test

+

component_compare_test is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./component_compare_test --help
+usage: 
+component_compare_test suffix1 suffix2 [<casedir>] [--verbose]
+OR
+component_compare_test --help
+
+EXAMPLES:
+    # Setup case 
+    > component_compare_test
+
+Compares two component history files in the testcase directory
+
+positional arguments:
+  suffix1        The suffix of the first set of files
+  suffix2        The suffix of the second set of files
+  caseroot       Case directory (default:
+                 /home/runner/work/cime/cime/CIME/Tools)
+
+options:
+  -h, --help     show this help message and exit
+
+Logging options:
+  -d, --debug    Print debug information (very verbose) to file /home/runner/w
+                 ork/cime/cime/CIME/Tools/component_compare_test.log (default:
+                 False)
+  -v, --verbose  Add additional context (time and file) to log messages
+                 (default: False)
+  -s, --silent   Print only warnings and error messages (default: False)
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/component_generate_baseline.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/component_generate_baseline.html new file mode 100644 index 00000000000..5c9834a639e --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/component_generate_baseline.html @@ -0,0 +1,223 @@ + + + + + + + component_generate_baseline — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

component_generate_baseline

+

component_generate_baseline is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./component_generate_baseline --help
+usage: 
+component_generate_baseline [<casedir>] [--verbose]
+OR
+component_generate_baseline --help
+
+EXAMPLES:
+    # Generate baselines 
+    > component_generate_baseline
+
+Copies current component history files into baselines
+
+positional arguments:
+  caseroot              Case directory (default:
+                        /home/runner/work/cime/cime/CIME/Tools)
+
+options:
+  -h, --help            show this help message and exit
+  -b BASELINE_DIR, --baseline-dir BASELINE_DIR
+                        Use custom baseline dir (default: None)
+  -o, --allow-baseline-overwrite
+                        By default an attempt to overwrite an existing
+                        baseline directory will raise an error. Specifying
+                        this option allows existing baseline directories to be
+                        silently overwritten. (default: False)
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file /home/r
+                        unner/work/cime/cime/CIME/Tools/component_generate_bas
+                        eline.log (default: False)
+  -v, --verbose         Add additional context (time and file) to log messages
+                        (default: False)
+  -s, --silent          Print only warnings and error messages (default:
+                        False)
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/concat_daily_hist.csh.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/concat_daily_hist.csh.html new file mode 100644 index 00000000000..e750a6a5982 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/concat_daily_hist.csh.html @@ -0,0 +1,191 @@ + + + + + + + concat_daily_hist.csh — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

concat_daily_hist.csh

+

concat_daily_hist.csh is a script in CIMEROOT/CIME/Tools.

+
+
+ +
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/create_clone.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/create_clone.html new file mode 100644 index 00000000000..6c52cb998d0 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/create_clone.html @@ -0,0 +1,242 @@ + + + + + + + create_clone — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

create_clone

+

create_clone is a script in CIMEROOT/scripts.

+
+
+
$ ./create_clone --help
+usage: create_clone [-h] [-d] [-v] [-s] --case CASE --clone CLONE
+                    [--ensemble ENSEMBLE]
+                    [--user-mods-dirs [USER_MODS_DIRS ...]] [--keepexe]
+                    [--mach-dir MACH_DIR] [--project PROJECT]
+                    [--cime-output-root CIME_OUTPUT_ROOT]
+
+options:
+  -h, --help            show this help message and exit
+  --case CASE, -case CASE
+                        (required) Specify a new case name. If not a full pathname, 
+                        the new case will be created under the current working directory.
+  --clone CLONE, -clone CLONE
+                        (required) Specify a case to be cloned. If not a full pathname, 
+                        the case to be cloned is assumed to be under the current working directory.
+  --ensemble ENSEMBLE   Clone an ensemble of cases; the case name argument must end in an integer.
+                        For example: ./create_clone --clone case.template --case case.001 --ensemble 4 
+                        will create case.001, case.002, case.003, case.004 from the existing case.template.
+  --user-mods-dirs [USER_MODS_DIRS ...], --user-mods-dir [USER_MODS_DIRS ...]
+                        Full pathname to a directory containing any combination of user_nl_* files 
+                        and a shell_commands script (typically containing xmlchange commands). 
+                        The directory can also contain a SourceMods/ directory with the same structure 
+                        as would be found in a case directory.
+                        It can also contain a file named 'include_user_mods' which gives the path to
+                        one or more other directories that should be included.
+                        Multiple directories can be given to the --user-mods-dirs argument,
+                        in which case changes from all of them are applied.
+                        (If there are conflicts, later directories take precedence.)
+                        (Care is needed if multiple directories include the same directory via 'include_user_mods':
+                        in this case, the included directory will be applied multiple times.)
+                        If this argument is used in conjunction 
+                        with the --keepexe flag, then no changes will be permitted to the env_build.xml 
+                        in the newly created case directory. 
+  --keepexe, -keepexe   Sets EXEROOT to point to original build. It is HIGHLY recommended 
+                        that the original case be built BEFORE cloning it if the --keepexe flag is specified. 
+                        This flag will make the SourceMods/ directory in the newly created case directory a 
+                        symbolic link to the SourceMods/ directory in the original case directory. 
+  --mach-dir MACH_DIR, -mach_dir MACH_DIR
+                        Specify the locations of the Machines directory, other than the default. 
+                        The default is CIMEROOT/machines.
+  --project PROJECT, -project PROJECT
+                        Specify a project id for the case (optional).
+                        Used for accounting and directory permissions when on a batch system.
+                        The default is user or machine specified by PROJECT.
+                        Accounting (only) may be overridden by user or machine specified CHARGE_ACCOUNT.
+  --cime-output-root CIME_OUTPUT_ROOT
+                        Specify the root output directory. The default is the setting in the original
+                        case directory. NOTE: create_clone will fail if this directory is not writable.
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file /home/runner/work/cime/cime/scripts/create_clone.log
+  -v, --verbose         Add additional context (time and file) to log messages
+  -s, --silent          Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/create_newcase.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/create_newcase.html new file mode 100644 index 00000000000..a2597c96a70 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/create_newcase.html @@ -0,0 +1,298 @@ + + + + + + + create_newcase — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

create_newcase

+

create_newcase is a script in CIMEROOT/scripts.

+
+
+
$ ./create_newcase --help
+usage: create_newcase [-h] [-d] [-v] [-s] --case CASENAME --compset COMPSET
+                      --res GRID [--machine MACHINE] [--compiler COMPILER]
+                      [--multi-driver] [--ninst NINST] [--mpilib MPILIB]
+                      [--project PROJECT] [--pecount PECOUNT]
+                      [--user-mods-dirs [USER_MODS_DIRS ...]]
+                      [--pesfile PESFILE] [--gridfile GRIDFILE]
+                      [--workflow WORKFLOW] [--srcroot SRCROOT]
+                      [--output-root OUTPUT_ROOT] [--run-unsupported]
+                      [--walltime WALLTIME] [-q QUEUE]
+                      [--handle-preexisting-dirs {a,r,u}] [-i INPUT_DIR]
+                      [--driver {mct,nuopc}] [-n]
+                      [--extra-machines-dir EXTRA_MACHINES_DIR]
+                      [--case-group CASE_GROUP]
+                      [--ngpus-per-node NGPUS_PER_NODE] [--gpu-type GPU_TYPE]
+                      [--gpu-offload GPU_OFFLOAD]
+
+options:
+  -h, --help            show this help message and exit
+  --case CASENAME, -case CASENAME
+                        (required) Specify the case name. 
+                        If this is simply a name (not a path), the case directory is created in the current working directory.
+                        This can also be a relative or absolute path specifying where the case should be created;
+                        with this usage, the name of the case will be the last component of the path.
+  --compset COMPSET, -compset COMPSET
+                        (required) Specify a compset. 
+                        To see list of current compsets, use the utility ./query_config --compsets in this directory.
+  --res GRID, -res GRID
+                        (required) Specify a model grid resolution. 
+                        To see list of current model resolutions, use the utility 
+                        ./query_config --grids in this directory.
+  --machine MACHINE, -mach MACHINE
+                        Specify a machine. The default value is the match to NODENAME_REGEX in config_machines.xml. To see 
+                        the list of current machines, invoke ./query_config --machines.
+  --compiler COMPILER, -compiler COMPILER
+                        Specify a compiler. 
+                        To see list of supported compilers for each machine, use the utility 
+                        ./query_config --machines in this directory. 
+                        The default value will be the first one listed.
+  --multi-driver        Specify that --ninst should modify the number of driver/coupler instances. 
+                        The default is to have one driver/coupler supporting multiple component instances.
+  --ninst NINST         Specify number of model ensemble instances. 
+                        The default is multiple components and one driver/coupler. 
+                        Use --multi-driver to run multiple driver/couplers in the ensemble.
+  --mpilib MPILIB, -mpilib MPILIB
+                        Specify the MPI library. To see list of supported mpilibs for each machine, invoke ./query_config --machines.
+                        The default is the first listing in MPILIBS in config_machines.xml.
+  --project PROJECT, -project PROJECT
+                        Specify a project id for the case (optional).
+                        Used for accounting and directory permissions when on a batch system.
+                        The default is user or machine specified by PROJECT.
+                        Accounting (only) may be overridden by user or machine specified CHARGE_ACCOUNT.
+  --pecount PECOUNT, -pecount PECOUNT
+                        Specify a target size description for the number of cores. 
+                        This is used to query the appropriate config_pes.xml file and find the 
+                        optimal PE-layout for your case - if it exists there. 
+                        Allowed options are  ('S','M','L','X1','X2','[0-9]x[0-9]','[0-9]').
+  --user-mods-dirs [USER_MODS_DIRS ...], --user-mods-dir [USER_MODS_DIRS ...]
+                        Full pathname to a directory containing any combination of user_nl_* files 
+                        and a shell_commands script (typically containing xmlchange commands). 
+                        The directory can also contain a SourceMods/ directory with the same structure 
+                        as would be found in a case directory.
+                        It can also contain a file named 'include_user_mods' which gives the path to
+                        one or more other directories that should be included.
+                        Multiple directories can be given to the --user-mods-dirs argument,
+                        in which case changes from all of them are applied.
+                        (If there are conflicts, later directories take precedence.)
+                        (Care is needed if multiple directories include the same directory via 'include_user_mods':
+                        in this case, the included directory will be applied multiple times.)
+  --pesfile PESFILE     Full pathname of an optional pes specification file. 
+                        The file can follow either the config_pes.xml or the env_mach_pes.xml format.
+  --gridfile GRIDFILE   Full pathname of config grid file to use. 
+                        This should be a copy of config/config_grids.xml with the new user grid changes added to it. 
+  --workflow WORKFLOW   A workflow from config_workflow.xml to apply to this case. 
+  --srcroot SRCROOT     Alternative pathname for source root directory. The default is /home/runner/work/cime
+  --output-root OUTPUT_ROOT
+                        Alternative pathname for the directory where case output is written.
+  --run-unsupported     Force the creation of a case that is not tested or supported by CESM developers.
+  --walltime WALLTIME   Set the wallclock limit for this case (the usual format is HH:MM:SS). 
+                        You may use env var CIME_GLOBAL_WALLTIME to set this. 
+                        If CIME_GLOBAL_WALLTIME is not defined in the environment, then the walltime
+                        will be the maximum allowed time defined for the queue in config_batch.xml.
+  -q QUEUE, --queue QUEUE
+                        Force batch system to use the specified queue. 
+  --handle-preexisting-dirs {a,r,u}
+                        Do not query how to handle pre-existing bld/exe dirs. 
+                        Valid options are (a)bort, (r)eplace, or (u)se existing. 
+                        This can be useful if you need to run create_newcase non-interactively.
+  -i INPUT_DIR, --input-dir INPUT_DIR
+                        Use a non-default location for input files. This will change the xml value of DIN_LOC_ROOT.
+  --driver {mct,nuopc}  Override the top level driver type and use this one (changes xml variable COMP_INTERFACE) [this is an advanced option]
+  -n, --non-local       Use when you've requested a machine that you aren't on. Will reduce errors for missing directories etc.
+  --extra-machines-dir EXTRA_MACHINES_DIR
+                        Optional path to a directory containing one or more of:
+                        config_machines.xml, config_batch.xml.
+                        If provided, the contents of these files will be appended to
+                        the standard machine files (and any files in ~/.cime).
+  --case-group CASE_GROUP
+                        Add this case to a case group
+  --ngpus-per-node NGPUS_PER_NODE
+                        Specify number of GPUs used for simulation. 
+  --gpu-type GPU_TYPE   Specify type of GPU hardware - currently supported are v100, a100, mi250
+  --gpu-offload GPU_OFFLOAD
+                        Specify gpu offload method - currently supported are openacc, openmp, combined
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file /home/runner/work/cime/cime/scripts/create_newcase.log
+  -v, --verbose         Add additional context (time and file) to log messages
+  -s, --silent          Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/create_test.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/create_test.html new file mode 100644 index 00000000000..7d3e8f380f1 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/create_test.html @@ -0,0 +1,326 @@ + + + + + + + create_test — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

create_test

+

create_test is a script in CIMEROOT/scripts.

+
+
+
$ ./create_test --help
+usage: create_test [-h] [-d] [-v] [-s] [--no-run] [--no-build] [--no-setup]
+                   [-u] [--save-timing] [--no-batch] [--single-exe]
+                   [--single-submit] [-r TEST_ROOT]
+                   [--output-root OUTPUT_ROOT] [--baseline-root BASELINE_ROOT]
+                   [--clean] [-m MACHINE] [--mpilib MPILIB] [-c COMPARE]
+                   [-g GENERATE] [--xml-machine XML_MACHINE]
+                   [--xml-compiler XML_COMPILER] [--xml-category XML_CATEGORY]
+                   [--xml-testlist XML_TESTLIST] [--driver {mct,nuopc}]
+                   [--compiler COMPILER] [-n] [-p PROJECT] [-t TEST_ID]
+                   [-j PARALLEL_JOBS] [--proc-pool PROC_POOL]
+                   [--walltime WALLTIME] [-q QUEUE] [-f TESTFILE] [-o]
+                   [--wait] [--allow-pnl] [--check-throughput]
+                   [--check-memory] [--ignore-namelists] [--ignore-memleak]
+                   [--force-procs FORCE_PROCS] [--force-threads FORCE_THREADS]
+                   [-i INPUT_DIR] [--pesfile PESFILE] [--retry RETRY] [-N]
+                   [--workflow WORKFLOW] [--chksum] [--srcroot SRCROOT]
+                   [--force-rebuild] [--mail-user MAIL_USER] [-M MAIL_TYPE]
+                   [testargs ...]
+
+positional arguments:
+  testargs              Tests to run. Testname form is TEST.GRID.COMPSET[.MACHINE_COMPILER]
+
+options:
+  -h, --help            show this help message and exit
+  --no-run              Do not run generated tests
+  --no-build            Do not build generated tests, implies --no-run
+  --no-setup            Do not setup generated tests, implies --no-build and --no-run
+  -u, --use-existing    Use pre-existing case directories; they will pick up at the 
+                        latest PEND state or re-run the first failed state. Requires --test-id.
+  --save-timing         Enable archiving of performance data.
+  --no-batch            Do not submit jobs to batch system, run locally.
+                        If not specified, this defaults to the machine setting.
+  --single-exe          Use a single build for all cases. This can 
+                        drastically improve test throughput but is currently use-at-your-own-risk.
+                        It's up to the user to ensure that all cases are build-compatible.
+                        E3SM tests belonging to a suite with share enabled will always share exes.
+  --single-submit       Use a single interactive allocation to run all the tests. This can 
+                        drastically reduce queue waiting but only makes sense on batch machines.
+  -r TEST_ROOT, --test-root TEST_ROOT
+                        Where test cases will be created. The default is output root
+                        as defined in the config_machines file
+  --output-root OUTPUT_ROOT
+                        Where the case output is written.
+  --baseline-root BASELINE_ROOT
+                        Specifies a root directory for baseline datasets that will 
+                        be used for Bit-for-bit generate and/or compare testing.
+  --clean               Specifies if tests should be cleaned after run. If set, all object
+                        executables and data files will be removed after the tests are run.
+  -m MACHINE, --machine MACHINE
+                        The machine for creating and building tests. This machine must be defined
+                        in the config_machines.xml file for the given model. The default is 
+                        to match the name of the machine in the test name or the name of the 
+                        machine this script is run on to the NODENAME_REGEX field in 
+                        config_machines.xml. WARNING: This option is highly unsafe and should 
+                        only be used if you are an expert.
+  --mpilib MPILIB       Specify the mpilib. To see list of supported MPI libraries for each machine, 
+                        invoke ./query_config. The default is the first listing.
+  -c COMPARE, --compare COMPARE
+                        While testing, compare baselines against the given compare directory. 
+  -g GENERATE, --generate GENERATE
+                        While testing, generate baselines in the given generate directory. 
+                        NOTE: this can also be done after the fact with bless_test_results
+  --xml-machine XML_MACHINE
+                        Use this machine key in the lookup in testlist.xml. 
+                        The default is all if any --xml- argument is used.
+  --xml-compiler XML_COMPILER
+                        Use this compiler key in the lookup in testlist.xml. 
+                        The default is all if any --xml- argument is used.
+  --xml-category XML_CATEGORY
+                        Use this category key in the lookup in testlist.xml. 
+                        The default is all if any --xml- argument is used.
+  --xml-testlist XML_TESTLIST
+                        Use this testlist to look up tests. The default is specified in config_files.xml
+  --driver {mct,nuopc}  Override driver specified in tests and use this one.
+  --compiler COMPILER   Compiler for building cime. Default will be the name in the 
+                        Testname or the default defined for the machine.
+  -n, --namelists-only  Only perform namelist actions for tests
+  -p PROJECT, --project PROJECT
+                        Specify a project id for the case (optional).
+                        Used for accounting and directory permissions when on a batch system.
+                        The default is user or machine specified by PROJECT.
+                        Accounting (only) may be overridden by user or machine specified CHARGE_ACCOUNT.
+  -t TEST_ID, --test-id TEST_ID
+                        Specify an 'id' for the test. This is simply a string that is appended 
+                        to the end of a test name. If no test-id is specified, a time stamp plus a 
+                        random string will be used (ensuring a high probability of uniqueness). 
+                        If a test-id is specified, it is the user's responsibility to ensure that 
+                        each run of create_test uses a unique test-id. WARNING: problems will occur 
+                        if you use the same test-id twice on the same file system, even if the test 
+                        lists are completely different.
+  -j PARALLEL_JOBS, --parallel-jobs PARALLEL_JOBS
+                        Number of tasks create_test should perform simultaneously. The default 
+                         is min(num_cores, num_tests).
+  --proc-pool PROC_POOL
+                        The size of the processor pool that create_test can use. The default is 
+                        MAX_MPITASKS_PER_NODE + 25 percent.
+  --walltime WALLTIME   Set the wallclock limit for all tests in the suite. 
+                        Use the variable CIME_GLOBAL_WALLTIME to set this for all tests.
+  -q QUEUE, --queue QUEUE
+                        Force batch system to use a certain queue
+  -f TESTFILE, --testfile TESTFILE
+                        A file containing an ascii list of tests to run
+  -o, --allow-baseline-overwrite
+                        By default, if the --generate option is given, an attempt to overwrite 
+                        an existing baseline directory will raise an error. WARNING: Specifying this 
+                        option will allow existing baseline directories to be silently overwritten.
+  --wait                On batch systems, wait for submitted jobs to complete
+  --allow-pnl           Do not pass skip-pnl to case.submit
+  --check-throughput    Fail if throughput check fails. Requires --wait on batch systems
+  --check-memory        Fail if memory check fails. Requires --wait on batch systems
+  --ignore-namelists    Do not fail if there are namelist diffs
+  --ignore-memleak      Do not fail if there's a memleak
+  --force-procs FORCE_PROCS
+                        Force all tests to run with this number of processors
+  --force-threads FORCE_THREADS
+                        Force all tests to run with this number of threads
+  -i INPUT_DIR, --input-dir INPUT_DIR
+                        Use a non-default location for input files
+  --pesfile PESFILE     Full pathname of an optional pes specification file. The file
+                        can follow either the config_pes.xml or the env_mach_pes.xml format.
+  --retry RETRY         Automatically retry failed tests. >0 implies --wait
+  -N, --non-local       Use when you've requested a machine that you aren't on. Will reduce errors for missing directories etc.
+  --workflow WORKFLOW   A workflow from config_workflow.xml to apply to this case. 
+  --chksum              Verifies input data checksums.
+  --srcroot SRCROOT     Alternative pathname for source root directory. The default is /home/runner/work/cime
+  --force-rebuild       When used with 'use-existing' and 'test-id', the tests will have their 'BUILD_SHAREDLIB' phase reset to 'PEND'.
+  --mail-user MAIL_USER
+                        Email to be used for batch notification.
+  -M MAIL_TYPE, --mail-type MAIL_TYPE
+                        When to send user email. Options are: never, all, begin, end, fail.
+                        You can specify multiple types with either comma-separated args or multiple -M flags.
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file /home/runner/work/cime/cime/scripts/create_test.log
+  -v, --verbose         Add additional context (time and file) to log messages
+  -s, --silent          Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/cs.status.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/cs.status.html new file mode 100644 index 00000000000..0a78105a4fb --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/cs.status.html @@ -0,0 +1,235 @@ + + + + + + + cs.status — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

cs.status

+

cs.status is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./cs.status --help
+usage: cs.status [-h] [-s | -f] [-c PHASE] [-p] [--check-throughput]
+                 [--check-memory] [-x EXPECTED_FAILS_FILE] [-t TEST_ID]
+                 [-r TEST_ROOT] [--force-rebuild]
+                 [paths ...]
+
+List test results based on TestStatus files.
+
+Typical usage:
+    ./cs.status /path/to/testroot/*.testid/TestStatus
+
+Returns True if no errors occurred (not based on test statuses).
+
+positional arguments:
+  paths                 Paths to TestStatus files.
+
+options:
+  -h, --help            show this help message and exit
+  -s, --summary         Only show summary
+  -f, --fails-only      Only show non-PASSes (this includes PENDs as well as FAILs)
+  -c PHASE, --count-fails PHASE
+                        For this phase, do not give line-by-line output; instead, just report
+                        the total number of tests that have not PASSed this phase
+                        (this includes PENDs as well as FAILs).
+                        This is typically used with the --fails-only option,
+                        but it can also be used without that option.
+                        (However, it cannot be used with the --summary option.)
+                        (Can be specified multiple times.)
+  -p, --count-performance-fails
+                        For phases that involve performance comparisons with baseline:
+                        Do not give line-by-line output; instead, just report the total number
+                        of tests that have not PASSed this phase.
+                        (This can be useful because these performance comparisons can be
+                        subject to machine variability.)
+                        This is equivalent to specifying:
+                        --count-fails TPUTCOMP --count-fails MEMCOMP
+  --check-throughput    Fail if throughput check fails (fail if tests slow down)
+  --check-memory        Fail if memory check fails (fail if tests footprint grows)
+  -x EXPECTED_FAILS_FILE, --expected-fails-file EXPECTED_FAILS_FILE
+                        Path to XML file listing expected failures for this test suite
+  -t TEST_ID, --test-id TEST_ID
+                        Include all tests with this test id.
+                        (Can be specified multiple times.)
+  -r TEST_ROOT, --test-root TEST_ROOT
+                        Test root used when --test-id is given
+  --force-rebuild       When used with 'test-id', the tests will have their 'BUILD_SHAREDLIB' phase reset to 'PEND'.
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/e3sm_check_env.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/e3sm_check_env.html new file mode 100644 index 00000000000..0c7b53ccd6b --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/e3sm_check_env.html @@ -0,0 +1,209 @@ + + + + + + + e3sm_check_env — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

e3sm_check_env

+

e3sm_check_env is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./e3sm_check_env --help
+usage: 
+e3sm_check_env [--verbose]
+OR
+e3sm_check_env --help
+
+A script to verify that the environment is compliant with E3SM's software
+requirements. Be sure to source your env_mach_specific file before running
+this check.
+
+options:
+  -h, --help     show this help message and exit
+
+Logging options:
+  -d, --debug    Print debug information (very verbose) to file
+                 /home/runner/work/cime/cime/CIME/Tools/e3sm_check_env.log
+                 (default: False)
+  -v, --verbose  Add additional context (time and file) to log messages
+                 (default: False)
+  -s, --silent   Print only warnings and error messages (default: False)
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/generate_cylc_workflow.py.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/generate_cylc_workflow.py.html new file mode 100644 index 00000000000..53124c14333 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/generate_cylc_workflow.py.html @@ -0,0 +1,211 @@ + + + + + + + generate_cylc_workflow.py — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

generate_cylc_workflow.py

+

generate_cylc_workflow.py is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./generate_cylc_workflow.py --help
+usage: generate_cylc_workflow.py [-h] [-d] [-v] [-s] [--cycles CYCLES]
+                                 [--ensemble ENSEMBLE]
+                                 [caseroot]
+
+Generates a cylc workflow file for the case.  See https://cylc.github.io for details about cylc
+
+positional arguments:
+  caseroot             Case directory for which namelists are generated.
+                       Default is current directory.
+
+options:
+  -h, --help           show this help message and exit
+  --cycles CYCLES      The number of cycles to run, default is RESUBMIT
+  --ensemble ENSEMBLE  Generate suite.rc for an ensemble of cases; the case name argument must end in an integer.
+                       For example: ./generate_cylc_workflow.py --ensemble 4 
+                       will generate a workflow file in the current case; if that case is named case.01, the workflow will include case.01, case.02, case.03 and case.04
+
+Logging options:
+  -d, --debug          Print debug information (very verbose) to file /home/runner/work/cime/cime/CIME/Tools/generate_cylc_workflow.py.log
+  -v, --verbose        Add additional context (time and file) to log messages
+  -s, --silent         Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/getTiming.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/getTiming.html new file mode 100644 index 00000000000..a39c3bbf681 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/getTiming.html @@ -0,0 +1,209 @@ + + + + + + + getTiming — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

getTiming

+

getTiming is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./getTiming --help
+usage: 
+getTiming  [-lid|--lid] [-h|--help]
+
+Get timing information from run
+
+options:
+  -h, --help           show this help message and exit
+  -lid LID, --lid LID  print using yymmdd-hhmmss format (default:
+                       999999-999999)
+  --caseroot CASEROOT  Case directory to get timing for (default:
+                       /home/runner/work/cime/cime/CIME/Tools)
+
+Logging options:
+  -d, --debug          Print debug information (very verbose) to file
+                       /home/runner/work/cime/cime/CIME/Tools/getTiming.log
+                       (default: False)
+  -v, --verbose        Add additional context (time and file) to log messages
+                       (default: False)
+  -s, --silent         Print only warnings and error messages (default: False)
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/get_case_env.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/get_case_env.html new file mode 100644 index 00000000000..bda95b065c4 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/get_case_env.html @@ -0,0 +1,218 @@ + + + + + + + get_case_env — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

get_case_env

+

get_case_env is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./get_case_env --help
+usage: get_case_env [-c <case>]
+OR
+get_case_env --help
+
+EXAMPLES:
+    # Get the default CIME env 
+    > ./get_case_env
+    # Get the default CIME env and load it into your current shell env 
+    > eval $(./get_case_env)
+    # Get the CIME env for a different machine or compiler  
+    > ./get_case_env -c SMS.f09_g16.X.mach_compiler
+    # Get the CIME env for a different mpi (serial in this case)  
+    > ./get_case_env -c SMS_Mmpi-serial.f09_g16.X
+    # Same as above but also load it into current shell env 
+    > eval $(./get_case_env -c SMS_Mmpi-serial.f09_g16.X)
+
+Dump what the CIME environment would be for a case.
+
+Only supports E3SM for now.
+
+options:
+  -h, --help            show this help message and exit
+  -c CASE, --case CASE  The case for which you want the env. Default=SMS.f09_g16.X
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file /home/runner/work/cime/cime/CIME/Tools/get_case_env.log
+  -v, --verbose         Add additional context (time and file) to log messages
+  -s, --silent          Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/get_standard_makefile_args.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/get_standard_makefile_args.html new file mode 100644 index 00000000000..ba366791e49 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/get_standard_makefile_args.html @@ -0,0 +1,206 @@ + + + + + + + get_standard_makefile_args — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

get_standard_makefile_args

+

get_standard_makefile_args is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./get_standard_makefile_args --help
+usage: get_standard_makefile_args [-h] [-d] [-v] [-s] [caseroot]
+
+Output the list of standard makefile args to the command line.  This script
+should only be used when the component's buildlib is not written in Python.
+
+positional arguments:
+  caseroot       Case directory to build.
+                 Default is current directory.
+
+options:
+  -h, --help     show this help message and exit
+
+Logging options:
+  -d, --debug    Print debug information (very verbose) to file /home/runner/work/cime/cime/CIME/Tools/get_standard_makefile_args.log
+  -v, --verbose  Add additional context (time and file) to log messages
+  -s, --silent   Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/index.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/index.html new file mode 100644 index 00000000000..244ad353522 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/index.html @@ -0,0 +1,238 @@ + + + + + + + User Tools — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+ + +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/jenkins_generic_job.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/jenkins_generic_job.html new file mode 100644 index 00000000000..b1a1e2d4d29 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/jenkins_generic_job.html @@ -0,0 +1,291 @@ + + + + + + + jenkins_generic_job — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

jenkins_generic_job

+

jenkins_generic_job is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./jenkins_generic_job --help
+usage: 
+jenkins_generic_job [-g] [-d] [--verbose]
+OR
+jenkins_generic_job --help
+
+EXAMPLES:
+    # Run the tests and compare baselines 
+    > jenkins_generic_job
+    # Run the tests, compare baselines, and update dashboard 
+    > jenkins_generic_job -d
+    # Run the tests, generating a full set of baselines (useful for first run on a machine) 
+    > jenkins_generic_job -g
+
+Jenkins runs this script to perform a test of an e3sm test suite. Essentially,
+a wrapper around create_test and wait_for_tests that handles cleanup of old
+test results and ensures that the batch system is left in a clean state.
+
+options:
+  -h, --help            show this help message and exit
+  -g, --generate-baselines
+                        Generate baselines (default: False)
+  --baseline-compare    Do baseline comparisons. Off by default. (default:
+                        False)
+  --submit-to-cdash     Send results to CDash (default: False)
+  -n, --no-submit       Force us to not send results to CDash, overrides
+                        --submit-to-cdash. Useful for CI (default: False)
+  --update-success      Record test success in baselines. Only the nightly
+                        process should use this in general. (default: False)
+  --no-update-success   Force us to not record test success in baselines,
+                        overrides --update-success. Useful for CI. (default:
+                        False)
+  --no-batch            Do not use batch system even if on batch machine
+                        (default: False)
+  -c CDASH_BUILD_NAME, --cdash-build-name CDASH_BUILD_NAME
+                        Build name to use for CDash submission. Default will
+                        be <TEST_SUITE>_<BRANCH>_<COMPILER> (default: None)
+  -p CDASH_PROJECT, --cdash-project CDASH_PROJECT
+                        The name of the CDash project where results should be
+                        uploaded (default: E3SM)
+  -b BASELINE_NAME, --baseline-name BASELINE_NAME
+                        Baseline name for baselines to use. Also impacts
+                        dashboard job name. Useful for testing a branch other
+                        than next or master (default: None)
+  -B BASELINE_ROOT, --baseline-root BASELINE_ROOT
+                        Baseline area for baselines to use. Default will be
+                        config_machine value for machine (default: None)
+  -O OVERRIDE_BASELINE_NAME, --override-baseline-name OVERRIDE_BASELINE_NAME
+                        Force comparison with these baselines without
+                        impacting dashboard or test-id. (default: None)
+  -t TEST_SUITE, --test-suite TEST_SUITE
+                        Override default e3sm test suite that will be run
+                        (default: None)
+  -r SCRATCH_ROOT, --scratch-root SCRATCH_ROOT
+                        Override default e3sm scratch root. Use this to avoid
+                        conflicting with other jenkins jobs (default: None)
+  --cdash-build-group CDASH_BUILD_GROUP
+                        The build group to be used to display results on the
+                        CDash dashboard. (default: ACME_Latest)
+  -j PARALLEL_JOBS, --parallel-jobs PARALLEL_JOBS
+                        Number of tasks create_test should perform
+                        simultaneously. Default will be min(num_cores,
+                        num_tests). (default: None)
+  --walltime WALLTIME   Force a specific walltime for all tests. (default:
+                        None)
+  -m MACHINE, --machine MACHINE
+                        The machine for which to build tests, this machine
+                        must be defined in the config_machines.xml file for
+                        the given model. Default is to match the name of the
+                        machine in the test name or the name of the machine
+                        this script is run on to the NODENAME_REGEX field in
+                        config_machines.xml. This option is highly unsafe and
+                        should only be used if you know what you're doing.
+                        (default: None)
+  --compiler COMPILER   Compiler to use to build cime. Default will be the
+                        default defined for the machine. (default: None)
+  -q QUEUE, --queue QUEUE
+                        Force create_test to use a specific queue. (default:
+                        None)
+  --check-throughput    Fail if throughput check fails (fail if tests slow
+                        down) (default: False)
+  --check-memory        Fail if memory check fails (fail if tests footprint
+                        grows) (default: False)
+  --ignore-memleak      Do not fail if there are memleaks (default: False)
+  --ignore-namelists    Do not fail if there are namelist diffs (default:
+                        False)
+  --save-timing         Tell create_test to save timings of tests (default:
+                        False)
+  --pes-file PES_FILE   Full pathname of an optional pes specification file.
+                        The file can follow either the config_pes.xml or the
+                        env_mach_pes.xml format. (default: None)
+  --jenkins-id JENKINS_ID
+                        Specify an 'id' for the Jenkins jobs. (default: None)
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file
+                        /home/runner/work/cime/cime/CIME/Tools/jenkins_generic_job.log
+                        (default: False)
+  -v, --verbose         Add additional context (time and file) to log messages
+                        (default: False)
+  -s, --silent          Print only warnings and error messages (default:
+                        False)
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/list_e3sm_tests.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/list_e3sm_tests.html new file mode 100644 index 00000000000..10e17b21a51 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/list_e3sm_tests.html @@ -0,0 +1,227 @@ + + + + + + + list_e3sm_tests — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

list_e3sm_tests

+

list_e3sm_tests is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./list_e3sm_tests --help
+usage: 
+list_e3sm_tests <thing-to-list> [<test category> <test category> ...] [--verbose]
+OR
+list_e3sm_tests --help
+
+EXAMPLES:
+    # List all tested compsets 
+    > list_e3sm_tests compsets
+    # List all compsets tested by e3sm_developer 
+    > list_e3sm_tests compsets e3sm_developer
+    # List all grids tested by e3sm_developer 
+    > list_e3sm_tests grid e3sm_developer
+
+List e3sm test suites. Can be used to show what's being tested. Can just list
+tested grids, compsets, etc.
+
+positional arguments:
+  suites                The tests suites to list. Test suites: cime_tiny,
+                        cime_test_only_pass, cime_test_only_slow_pass,
+                        cime_test_only, cime_test_all, cime_test_share,
+                        cime_test_share2, cime_test_perf, cime_test_timing,
+                        cime_test_repeat, cime_test_time,
+                        cime_test_multi_inherit, cime_developer
+
+options:
+  -h, --help            show this help message and exit
+  -t {compsets,grids,testcases,tests}, --thing-to-list {compsets,grids,testcases,tests}
+                        The thing you want to list (default: tests)
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file
+                        /home/runner/work/cime/cime/CIME/Tools/list_e3sm_tests.log
+                        (default: False)
+  -v, --verbose         Add additional context (time and file) to log messages
+                        (default: False)
+  -s, --silent          Print only warnings and error messages (default:
+                        False)
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/mkDepends.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/mkDepends.html new file mode 100644 index 00000000000..2d78f3248ad --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/mkDepends.html @@ -0,0 +1,224 @@ + + + + + + + mkDepends — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

mkDepends

+

mkDepends is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./mkDepends --help
+SYNOPSIS
+     mkDepends [-p [-Dmacro[=val]] [-Umacro] [-Idir]] [-d depfile]
+               [-m mangle_scheme] [-t dir] [-w] Filepath Srcfiles
+OPTIONS
+     -p
+          Preprocess files (suffix .F and .F90) before searching
+          for module dependencies. Default CPP preprocessor: cpp.
+          Set env variables CPP and/or CPPFLAGS to override.
+     -D macro[=val]
+          Define the CPP macro with val as its value.
+          Ignored when -p option is not active.
+     -U macro
+          Undefine the CPP macro.
+          Ignored when -p option is not active.
+     -I dir
+          Add dir to the include path for CPP.
+          Ignored when -p option is not active.
+     -d depfile
+          Additional file to be added to every .o dependence.
+     -m mangle_scheme
+          Method of mangling Fortran module names into .mod filenames.
+          Allowed values are:
+              lower - Filename is module_name.mod
+              upper - Filename is MODULE_NAME.MOD
+          The default is -m lower.
+     -t dir
+          Target directory.  If this option is set the .o files that are
+          targets in the dependency rules have the form dir/file.o.
+     -w   Print warnings to STDERR about files or dependencies not found.
+ARGUMENTS
+     Filepath is the name of a file containing the directories (one per
+     line) to be searched for dependencies.  Srcfiles is the name of a
+     file containing the names of files (one per line) for which
+     dependencies will be generated.
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/mkSrcfiles.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/mkSrcfiles.html new file mode 100644 index 00000000000..f8075b88038 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/mkSrcfiles.html @@ -0,0 +1,198 @@ + + + + + + + mkSrcfiles — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

mkSrcfiles

+

mkSrcfiles is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./mkSrcfiles --help
+SYNOPSIS
+     mkSrcfiles
+DESCRIPTION
+     The mkSrcfiles utility assumes the existence of an input file
+     ./Filepath, and writes an output file ./Srcfiles that contains
+     the names of all the files that match the patterns *.F90, *.F,
+     and *.c in all the directories from ./Filepath plus ./.  The
+     files are listed one per line.
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/mvsource.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/mvsource.html new file mode 100644 index 00000000000..16a17278dbf --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/mvsource.html @@ -0,0 +1,195 @@ + + + + + + + mvsource — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

mvsource

+

mvsource is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./mvsource --help
+Traceback (most recent call last):
+  File "/home/runner/work/cime/cime/CIME/Tools/./mvsource", line 8, in <module>
+    cimeroot = sys.argv[2]
+               ~~~~~~~~^^^
+IndexError: list index out of range
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/normalize_cases.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/normalize_cases.html new file mode 100644 index 00000000000..5433a18927c --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/normalize_cases.html @@ -0,0 +1,216 @@ + + + + + + + normalize_cases — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

normalize_cases

+

normalize_cases is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./normalize_cases --help
+usage: 
+normalize_cases case1 case2
+OR
+normalize_cases --help
+
+EXAMPLES:
+    > normalize_cases case1 case2
+
+Remove uninteresting diffs between cases by changing the first to be more like
+the second. This is for debugging purposes and meant to assist the user when
+they want to run case_diff.
+
+positional arguments:
+  case1          First case. This one will be changed
+  case2          Second case. This one will not be changed
+
+options:
+  -h, --help     show this help message and exit
+
+Logging options:
+  -d, --debug    Print debug information (very verbose) to file
+                 /home/runner/work/cime/cime/CIME/Tools/normalize_cases.log
+                 (default: False)
+  -v, --verbose  Add additional context (time and file) to log messages
+                 (default: False)
+  -s, --silent   Print only warnings and error messages (default: False)
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/pelayout.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/pelayout.html new file mode 100644 index 00000000000..af1ba5b9f26 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/pelayout.html @@ -0,0 +1,247 @@ + + + + + + + pelayout — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

pelayout

+

pelayout is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./pelayout --help
+usage: pelayout [-h] [-d] [-v] [-s] [--set-ntasks SET_NTASKS]
+                [--set-nthrds SET_NTHRDS] [--format FORMAT] [--header HEADER]
+                [--no-header] [--caseroot CASEROOT]
+
+This utility allows the CIME user to view and modify a case's PE layout.
+With this script, a user can:
+
+1) View the PE layout of a case
+   ./pelayout
+   ./pelayout --format "%C:  %06T/%+H" --header "Comp: Tasks /Th"
+2) Attempt to scale the number of tasks used by a case
+   ./pelayout --set-ntasks 144
+3) Set the number of threads used by a case
+   ./pelayout --set-nthrds 2
+
+The --set-ntasks option attempts to scale all components so that the
+job will run in the provided number of tasks. For a component using the
+maximum number of tasks, this will merely set that component to the new
+number. However, for components running in parallel using a portion of
+the maximum tasks, --set-ntasks will attempt to scale the tasks
+proportionally, changing the value of ROOTPE to maintain the same level
+of parallel behavior. If the --set-ntasks algorithm is unable to
+automatically find a new layout, it will print an error message
+indicating the component(s) it was unable to reset and no changes will
+be made to the case.
+
+Interpreted FORMAT sequences are:
+%%  a literal %
+%C  the component name
+%T  the task count for the component
+%H  the thread count for the component
+%R  the PE root for the component
+
+Standard format extensions, such as a field length and padding are supported.
+Python dictionary-format strings are also supported. For instance,
+--format "{C:4}", will print the component name padded to 4 spaces.
+
+If you encounter problems with this tool or find it is missing any
+feature that you need, please open an issue on https://github.com/ESMCI/cime
+
+options:
+  -h, --help            show this help message and exit
+  --set-ntasks SET_NTASKS
+                        Total number of tasks to set for the case
+  --set-nthrds SET_NTHRDS, --set-nthreads SET_NTHRDS
+                        Number of threads to set for all components
+  --format FORMAT       Format the PE layout items for each component (see
+                        below)
+  --header HEADER       Custom header for PE layout display
+  --no-header           Do not print any PE layout header
+  --caseroot CASEROOT   Case directory to reference
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file
+                        /home/runner/work/cime/cime/CIME/Tools/pelayout.log
+  -v, --verbose         Add additional context (time and file) to log messages
+  -s, --silent          Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/preview_namelists.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/preview_namelists.html new file mode 100644 index 00000000000..80c09604684 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/preview_namelists.html @@ -0,0 +1,220 @@ + + + + + + + preview_namelists — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

preview_namelists

+

preview_namelists is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./preview_namelists --help
+usage: preview_namelists [-h] [-d] [-v] [-s] [--component COMPONENT]
+                         [caseroot]
+
+Creates namelist and other model input files for each component (by running each
+component's buildnml script). Then copies the generated files to the CaseDocs
+subdirectory for inspection.
+
+It is not required to run this manually: namelists will be generated
+automatically when the run starts. However, this can be useful in order to
+review the namelists before submitting the case.
+
+case.setup must be run before this.
+
+Typical usage is simply:
+   ./preview_namelists
+
+positional arguments:
+  caseroot              Case directory for which namelists are generated.
+                        Default is current directory.
+
+options:
+  -h, --help            show this help message and exit
+  --component COMPONENT
+                        Specify component's namelist to build.
+                        If not specified, generates namelists for all components.
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file /home/runner/work/cime/cime/CIME/Tools/preview_namelists.log
+  -v, --verbose         Add additional context (time and file) to log messages
+  -s, --silent          Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/preview_run.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/preview_run.html new file mode 100644 index 00000000000..b2bf103716c --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/preview_run.html @@ -0,0 +1,219 @@ + + + + + + + preview_run — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

preview_run

+

preview_run is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./preview_run --help
+usage: preview_run [-h] [-d] [-v] [-s] [-j JOB] [caseroot]
+
+Queries key CIME shell commands (mpirun and batch submission).
+
+To force a certain mpirun command, use:
+   ./xmlchange MPI_RUN_COMMAND=$your_cmd
+
+Example:
+   ./xmlchange MPI_RUN_COMMAND='mpiexec -np 16 --some-flag'
+
+To force a certain qsub command, use:
+   ./xmlchange --subgroup=case.run BATCH_COMMAND_FLAGS=$your_flags
+
+Example:
+   ./xmlchange --subgroup=case.run BATCH_COMMAND_FLAGS='--some-flag --other-flag'
+
+positional arguments:
+  caseroot           Case directory to query.
+                     Default is current directory.
+
+options:
+  -h, --help         show this help message and exit
+  -j JOB, --job JOB  The job you want to print.
+                     Default is case.run (or case.test if this is a test).
+
+Logging options:
+  -d, --debug        Print debug information (very verbose) to file /home/runner/work/cime/cime/CIME/Tools/preview_run.log
+  -v, --verbose      Add additional context (time and file) to log messages
+  -s, --silent       Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/query_config.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/query_config.html new file mode 100644 index 00000000000..c0cdde4a3fa --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/query_config.html @@ -0,0 +1,191 @@ + + + + + + + query_config — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

query_config

+

query_config is a script in CIMEROOT/scripts.

+
+
+
$ ./query_config --help
+ERROR:  xmllint not found in PATH, xmllint is required for cime.  PATH=/opt/hostedtoolcache/Python/3.12.0/x64/bin:/opt/hostedtoolcache/Python/3.12.0/x64:/snap/bin:/home/runner/.local/bin:/opt/pipx_bin:/home/runner/.cargo/bin:/home/runner/.config/composer/vendor/bin:/usr/local/.ghcup/bin:/home/runner/.dotnet/tools:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/query_testlists.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/query_testlists.html new file mode 100644 index 00000000000..b25bcec50cd --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/query_testlists.html @@ -0,0 +1,225 @@ + + + + + + + query_testlists — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

query_testlists

+

query_testlists is a script in CIMEROOT/scripts.

+
+
+
$ ./query_testlists --help
+usage: query_testlists [-h] [-d] [-v] [-s] [--count]
+                       [--list {category,categories,machine,machines,compiler,compilers}]
+                       [--show-options] [--define-testtypes]
+                       [--xml-category XML_CATEGORY]
+                       [--xml-machine XML_MACHINE]
+                       [--xml-compiler XML_COMPILER]
+                       [--xml-testlist XML_TESTLIST]
+
+options:
+  -h, --help            show this help message and exit
+  --count               Rather than listing tests, just give counts by category/machine/compiler.
+  --list {category,categories,machine,machines,compiler,compilers}
+                        Rather than listing tests, list the available options for
+                        --xml-category, --xml-machine, or --xml-compiler.
+                        (The singular and plural forms are equivalent - so '--list category'
+                        is equivalent to '--list categories', etc.)
+  --show-options        For each test, also show options for that test
+                        (wallclock time, memory leak tolerance, etc.).
+                        (Has no effect with --list or --count options.)
+  --define-testtypes    At the top of the list of tests, define all of the possible test types.
+                        (Has no effect with --list or --count options.)
+  --xml-category XML_CATEGORY
+                        Only include tests in this category; default is all categories.
+  --xml-machine XML_MACHINE
+                        Only include tests for this machine; default is all machines.
+  --xml-compiler XML_COMPILER
+                        Only include tests for this compiler; default is all compilers.
+  --xml-testlist XML_TESTLIST
+                        Path to testlist file from which tests are gathered;
+                        default is all files specified in config_files.xml.
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file /home/runner/work/cime/cime/scripts/query_testlists.log
+  -v, --verbose         Add additional context (time and file) to log messages
+  -s, --silent          Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/save_provenance.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/save_provenance.html new file mode 100644 index 00000000000..cdd8f44f5a5 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/save_provenance.html @@ -0,0 +1,222 @@ + + + + + + + save_provenance — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

save_provenance

+

save_provenance is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./save_provenance --help
+usage: 
+save_provenance <MODE> [<casedir>] [--verbose]
+OR
+save_provenance --help
+
+EXAMPLES:
+    # Save run (timing) provenance for current case 
+    > save_provenance postrun
+
+This tool provides command-line access to provenance-saving functionality.
+
+positional arguments:
+  {build,prerun,postrun}
+                        Phase for which to save provenance. prerun is mostly
+                        for infrastructure testing; it does not make sense to
+                        store this information manually otherwise
+  caseroot              Case directory (default:
+                        /home/runner/work/cime/cime/CIME/Tools)
+
+options:
+  -h, --help            show this help message and exit
+  -l LID, --lid LID     Force system to save provenance with this LID
+                        (default: None)
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file
+                        /home/runner/work/cime/cime/CIME/Tools/save_provenance.log
+                        (default: False)
+  -v, --verbose         Add additional context (time and file) to log messages
+                        (default: False)
+  -s, --silent          Print only warnings and error messages (default:
+                        False)
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/simple-py-prof.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/simple-py-prof.html new file mode 100644 index 00000000000..edb6591b08d --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/simple-py-prof.html @@ -0,0 +1,199 @@ + + + + + + + simple-py-prof — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

simple-py-prof

+

simple-py-prof is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./simple-py-prof --help
+Usage: cProfile.py [-o output_file_path] [-s sort] [-m module | scriptfile] [arg] ...
+
+Options:
+  -h, --help            show this help message and exit
+  -o OUTFILE, --outfile=OUTFILE
+                        Save stats to <outfile>
+  -s SORT, --sort=SORT  Sort order when printing to stdout, based on
+                        pstats.Stats class
+  -m                    Profile a library module
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/simple_compare.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/simple_compare.html new file mode 100644 index 00000000000..96dc4e20828 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/simple_compare.html @@ -0,0 +1,219 @@ + + + + + + + simple_compare — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

simple_compare

+

simple_compare is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./simple_compare --help
+usage: 
+simple_compare <Path to gold namelist file> <Path to non-namelist file> [-c <CASEBASEID>] [--verbose]
+OR
+simple_compare --help
+
+EXAMPLES:
+    # Compare files
+    > simple_compare baseline_dir/test/file mytestarea/file -c <CASE>
+
+Compare files in a normalized way. Used by create_test for diffing non-
+namelist files.
+
+positional arguments:
+  gold_file             Path to gold file
+  new_file              Path to file to compare against gold
+
+options:
+  -h, --help            show this help message and exit
+  -c CASE, --case CASE  The case base id (<TESTCASE>.<GRID>.<COMPSET>). Helps
+                        us normalize data. (default: None)
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file
+                        /home/runner/work/cime/cime/CIME/Tools/simple_compare.log
+                        (default: False)
+  -v, --verbose         Add additional context (time and file) to log messages
+                        (default: False)
+  -s, --silent          Print only warnings and error messages (default:
+                        False)
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/testreporter.py.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/testreporter.py.html new file mode 100644 index 00000000000..3ba230385b2 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/testreporter.py.html @@ -0,0 +1,208 @@ + + + + + + + testreporter.py — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

testreporter.py

+

testreporter.py is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./testreporter.py --help
+usage: testreporter.py [-h] [-d] [-v] [-s] [--tagname TAGNAME]
+                       [--testid TESTID] [--testroot TESTROOT]
+                       [--testtype TESTTYPE] [--dryrun] [--dumpxml]
+
+options:
+  -h, --help           show this help message and exit
+  --tagname TAGNAME    Name of the tag being tested.
+  --testid TESTID      Test id, e.g. c2_0_a6g_ing,c2_0_b6g_gnu.
+  --testroot TESTROOT  Root directory for tests to populate the database.
+  --testtype TESTTYPE  Type of test, prealpha or prebeta.
+  --dryrun             Do a dry run, database will not be populated.
+  --dumpxml            Dump XML test results to screen.
+
+Logging options:
+  -d, --debug          Print debug information (very verbose) to file /home/ru
+                       nner/work/cime/cime/CIME/Tools/testreporter.py.log
+  -v, --verbose        Add additional context (time and file) to log messages
+  -s, --silent         Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/wait_for_tests.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/wait_for_tests.html new file mode 100644 index 00000000000..efcc9a88631 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/wait_for_tests.html @@ -0,0 +1,250 @@ + + + + + + + wait_for_tests — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

wait_for_tests

+

wait_for_tests is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./wait_for_tests --help
+usage: 
+wait_for_tests [<Path to TestStatus> <Path to TestStatus> ...]  [--verbose]
+OR
+wait_for_tests --help
+
+EXAMPLES:
+    # Wait for test in current dir
+    > wait_for_tests
+    # Wait for test in user specified tests
+    > wait_for_tests path/to/testdir
+    # Wait for all tests in a test area
+    > wait_for_tests path/to/testarea/*/TestStatus
+
+Wait for a queued set of E3SM tests to finish by watching the TestStatus
+files. If all tests pass, 0 is returned, otherwise a non-zero error code is
+returned. Note that this program waits for the RUN phase specifically and will
+not terminate if the RUN phase didn't happen.
+
+positional arguments:
+  paths                 Paths to test directories or status file. Pwd default.
+                        (default: .)
+
+options:
+  -h, --help            show this help message and exit
+  -n, --no-wait         Do not wait for tests to finish (default: False)
+  --no-run              Do not expect run phase to be completed (default:
+                        False)
+  -t, --check-throughput
+                        Fail if throughput check fails (fail if tests slow
+                        down) (default: False)
+  -m, --check-memory    Fail if memory check fails (fail if tests footprint
+                        grows) (default: False)
+  -i, --ignore-namelist-diffs
+                        Do not fail a test if the only problem is diffing
+                        namelists (default: False)
+  --ignore-memleak      Do not fail a test if the only problem is a memleak
+                        (default: False)
+  --force-log-upload    Always upload logs to cdash, even if test passed
+                        (default: False)
+  -b CDASH_BUILD_NAME, --cdash-build-name CDASH_BUILD_NAME
+                        Build name, implies you want results sent to CDash
+                        (default: None)
+  -p CDASH_PROJECT, --cdash-project CDASH_PROJECT
+                        The name of the CDash project where results should be
+                        uploaded (default: E3SM)
+  -g CDASH_BUILD_GROUP, --cdash-build-group CDASH_BUILD_GROUP
+                        The build group to be used to display results on the
+                        CDash dashboard. (default: ACME_Latest)
+  --timeout TIMEOUT     Timeout wait in seconds. (default: None)
+  --update-success      Record test success in baselines. Only the nightly
+                        process should use this in general. (default: False)
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file /home/r
+                        unner/work/cime/cime/CIME/Tools/wait_for_tests.log
+                        (default: False)
+  -v, --verbose         Add additional context (time and file) to log messages
+                        (default: False)
+  -s, --silent          Print only warnings and error messages (default:
+                        False)
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/xmlchange.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/xmlchange.html new file mode 100644 index 00000000000..3b491f86ab9 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/xmlchange.html @@ -0,0 +1,277 @@ + + + + + + + xmlchange — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

xmlchange

+

xmlchange is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./xmlchange --help
+usage: xmlchange [-h] [-d] [-v] [-s] [--caseroot CASEROOT] [--append]
+                 [--subgroup SUBGROUP] [--id ID] [--val VAL] [--file FILE]
+                 [--delimiter DELIMITER] [--dryrun] [--noecho] [-f] [-N]
+                 [-loglevel LOGLEVEL]
+                 [listofsettings]
+
+Allows changing variables in env_*xml files via a command-line interface.
+
+This provides two main benefits over editing the xml files by hand:
+  - Settings are checked immediately for validity
+  - Settings are echoed to the CaseStatus file, providing a "paper trail" of
+    changes made by the user.
+
+Examples:
+
+   To set a single variable:
+      ./xmlchange REST_N=4
+
+   To set multiple variables at once:
+      ./xmlchange REST_OPTION=ndays,REST_N=4
+
+   Alternative syntax (no longer recommended, but supported for backwards
+   compatibility; only works for a single variable at a time):
+      ./xmlchange --id REST_N --val 4
+
+   Several xml variables that have settings for each component have somewhat special treatment.
+   The variables that this currently applies to are:
+    NTASKS, NTHRDS, ROOTPE, PIO_TYPENAME, PIO_STRIDE, PIO_NUMTASKS, PIO_ASYNC_INTERFACE
+   For example, to set the number of tasks for all components to 16, use:
+      ./xmlchange NTASKS=16
+   To set just the number of tasks for the atm component, use:
+      ./xmlchange NTASKS_ATM=16
+
+   The CIME case xml variables are grouped together in xml elements <group></group>.
+   This is done to associate together xml variables with common features.
+   Most variables are only associated with one group. However, in env_batch.xml,
+   there are also xml variables that are associated with each potential batch job.
+   For these variables, the '--subgroup' option may be used to specify a particular
+   group for which the variable's value will be adjusted.
+
+   As an example, in env_batch.xml, the xml variables JOB_QUEUE and JOB_WALLCLOCK_TIME
+   appear in each of the batch job groups (defined in config_batch.xml):
+    <group id="case.run">
+    <group id="case.st_archive">
+    <group id="case.test">
+   To set the variable JOB_WALLCLOCK_TIME only for case.run:
+      ./xmlchange JOB_WALLCLOCK_TIME=0:30 --subgroup case.run
+   To set the variable JOB_WALLCLOCK_TIME for all jobs:
+      ./xmlchange JOB_WALLCLOCK_TIME=0:30
+
+positional arguments:
+  listofsettings        Comma-separated list of settings in the form: var1=value,var2=value,...
+
+options:
+  -h, --help            show this help message and exit
+  --caseroot CASEROOT   Case directory to change.
+                        Default is current directory.
+  --append, -append     Append to the existing value rather than overwriting it.
+  --subgroup SUBGROUP, -subgroup SUBGROUP
+                        Apply to this subgroup only.
+  --id ID, -id ID       The variable to set.
+                        (Used in the alternative --id var --val value form, rather than
+                        the recommended var=value form.)
+  --val VAL, -val VAL   The value to set.
+                        (Used in the alternative --id var --val value form, rather than
+                        the recommended var=value form.)
+  --file FILE, -file FILE
+                        XML file to edit.
+                        Generally not needed, but can be specified to ensure that only the
+                        expected file is being changed. (If a variable is not found in this file,
+                        an error will be generated.)
+  --delimiter DELIMITER, -delimiter DELIMITER
+                        Delimiter string in listofsettings.
+                        Default is ','.
+  --dryrun, -dryrun     Parse settings and print key-value pairs, but don't actually change anything.
+  --noecho, -noecho     Do not update CaseStatus with this change.
+                        This option is mainly meant to be used by cime scripts: the 'paper trail' in
+                        CaseStatus is meant to show changes made by the user, so we generally don't
+                        want this to be contaminated by changes made automatically by cime scripts.
+  -f, --force           Ignore typing checks and store value.
+  -N, --non-local       Use when you've requested a machine that you aren't on. Will reduce errors for missing directories etc.
+  -loglevel LOGLEVEL    Ignored, only for backwards compatibility.
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file /home/runner/work/cime/cime/CIME/Tools/xmlchange.log
+  -v, --verbose         Add additional context (time and file) to log messages
+  -s, --silent          Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/xmlquery.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/xmlquery.html new file mode 100644 index 00000000000..5d8e48b2574 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/xmlquery.html @@ -0,0 +1,326 @@ + + + + + + + xmlquery — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

xmlquery

+

xmlquery is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./xmlquery --help
+usage: xmlquery [-h] [-d] [-v] [-s] [--caseroot CASEROOT] [--listall]
+                [--file FILE] [--subgroup SUBGROUP] [-p] [--no-resolve] [-N]
+                [--full | --fileonly | --value | --raw | --description | --get-group | --type | --valid-values]
+                [variables ...]
+
+Allows querying variables from env_*xml files and listing all available variables.
+
+There are two usage modes:
+
+1) Querying variables:
+
+   - You can query a variable, or a list of variables via
+      ./xmlquery var1
+
+     or, for multiple variables (either comma or space separated)
+      ./xmlquery var1,var2,var3 ....
+      ./xmlquery var1 var2 var3 ....
+     where var1, var2 and var3 are variables that appear in a CIME case xml file
+
+     Several xml variables that have settings for each component have somewhat special treatment
+     The variables that this currently applies to are
+         NTASKS, NTHRDS, ROOTPE, PIO_TYPENAME, PIO_STRIDE, PIO_NUMTASKS
+     As examples:
+     - to show the number of tasks for each component, issue
+        ./xmlquery NTASKS
+     - to show the number of tasks just for the atm component, issue
+        ./xmlquery NTASKS_ATM
+
+     - The CIME case xml variables are grouped together in xml elements <group></group>.
+       This is done to associate together xml variables with common features.
+       Most variables are only associated with one group. However, in env_batch.xml,
+       there are also xml variables that are associated with each potential batch job.
+       For these variables, the '--subgroup' option may be used to query the variable's
+       value for a particular group.
+
+       As an example, in env_batch.xml, the xml variable JOB_QUEUE appears in each of
+       the batch job groups (defined in config_batch.xml):
+        <group id="case.run">
+        <group id="case.st_archive">
+        <group id="case.test">
+
+       To query the variable JOB_QUEUE only for one group in case.run, you need
+       to specify a sub-group argument to xmlquery.
+          ./xmlquery JOB_QUEUE --subgroup case.run
+              JOB_QUEUE: regular
+          ./xmlquery JOB_QUEUE
+            Results in group case.run
+                 JOB_QUEUE: regular
+            Results in group case.st_archive
+                 JOB_QUEUE: caldera
+            Results in group case.test
+                JOB_QUEUE: regular
+
+   - You can tailor the query by adding ONE of the following possible qualifier arguments:
+       [--full --fileonly --value --raw --description --get-group --type --valid-values ]
+       as examples:
+          ./xmlquery var1,var2 --full
+          ./xmlquery var1,var2 --fileonly
+
+   - You can query variables via a partial-match, using --partial-match or -p
+       as examples:
+          ./xmlquery STOP --partial-match
+              Results in group run_begin_stop_restart
+                  STOP_DATE: -999
+                  STOP_N: 5
+                  STOP_OPTION: ndays
+          ./xmlquery STOP_N
+                  STOP_N: 5
+
+    - By default variable values are resolved prior to output. If you want to see the unresolved
+      value(s), use the --no-resolve qualifier
+      as examples:
+         ./xmlquery RUNDIR
+             RUNDIR: /glade/scratch/mvertens/atest/run
+         ./xmlquery RUNDIR --no-resolve
+             RUNDIR: $CIME_OUTPUT_ROOT/$CASE/run
+
+2) Listing all groups and variables in those groups
+
+      ./xmlquery --listall
+
+     - You can list a subset of variables by adding one of the following qualifier arguments:
+       [--subgroup GROUP --file FILE]
+
+       As examples:
+
+       If you want to see all of the variables in group 'case.run' issue
+          ./xmlquery --listall --subgroup case.run
+
+       If you want to see all of the variables in 'env_run.xml' issue
+          ./xmlquery --listall --file env_run.xml
+
+       If you want to see all of the variables in LockedFiles/env_build.xml issue
+          ./xmlquery --listall --file LockedFiles/env_build.xml
+
+     - You can tailor the query by adding ONE of the following possible qualifier arguments:
+       [--full --fileonly --raw --description --get-group --type --valid-values]
+
+     - The env_mach_specific.xml and env_archive.xml files are not supported by this tool.
+
+positional arguments:
+  variables             Variable name(s) to query from env_*.xml file(s)
+                        ( 'variable_name' from <entry_id id='variable_name'>value</entry_id> ).
+                        Multiple variables can be given, separated by commas or spaces.
+
+options:
+  -h, --help            show this help message and exit
+  --caseroot CASEROOT, -caseroot CASEROOT
+                        Case directory to reference.
+                        Default is current directory.
+  --listall, -listall   List all variables and their values.
+  --file FILE, -file FILE
+                        The file you want to query. If not given, queries all files.
+                        Typically used with the --listall option.
+  --subgroup SUBGROUP, -subgroup SUBGROUP
+                        Apply to this subgroup only.
+  -p, --partial-match   Allow partial matches of variable names, treats args as regex.
+  --no-resolve, -no-resolve
+                        Do not resolve variable values.
+  -N, --non-local       Use when you've requested a machine that you aren't on. Will reduce errors for missing directories etc.
+  --full                Print a full listing for each variable, including value, type,
+                        valid values, description and file.
+  --fileonly, -fileonly
+                        Only print the filename that each variable is defined in.
+  --value, -value       Only print one value without newline character.
+                        If more than one has been found print first value in list.
+  --raw                 Print the complete raw record associated with each variable.
+  --description         Print the description associated with each variable.
+  --get-group           Print the group associated with each variable.
+  --type                Print the data type associated with each variable.
+  --valid-values        Print the valid values associated with each variable, if defined.
+
+Logging options:
+  -d, --debug           Print debug information (very verbose) to file /home/runner/work/cime/cime/CIME/Tools/xmlquery.log
+  -v, --verbose         Add additional context (time and file) to log messages
+  -s, --silent          Print only warnings and error messages
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/Tools_user/xmltestentry.html b/branch/azamat/baselines/update-perf-info/html/Tools_user/xmltestentry.html new file mode 100644 index 00000000000..2329bc17af5 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/Tools_user/xmltestentry.html @@ -0,0 +1,192 @@ + + + + + + + xmltestentry — CIME master documentation + + + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +
+

xmltestentry

+

xmltestentry is a script in CIMEROOT/CIME/Tools.

+
+
+
$ ./xmltestentry --help
+Unknown option: help
+Could not open file! at ./xmltestentry line 23.
+
+
+
+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/BuildTools/configure.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/BuildTools/configure.html new file mode 100644 index 00000000000..da15b21aa0a --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/BuildTools/configure.html @@ -0,0 +1,364 @@ + + + + + + CIME.BuildTools.configure — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.BuildTools.configure

+#!/usr/bin/env python3
+
+"""This script writes CIME build information to a directory.
+
+The pieces of information that will be written include:
+
+1. Machine-specific build settings (i.e. the "Macros" file).
+2. File-specific build settings (i.e. "Depends" files).
+3. Environment variable loads (i.e. the env_mach_specific files).
+
+The .env_mach_specific.sh and .env_mach_specific.csh files are specific to a
+given compiler, MPI library, and DEBUG setting. By default, these will be the
+machine's default compiler, the machine's default MPI library, and FALSE,
+respectively. These can be changed by setting the environment variables
+COMPILER, MPILIB, and DEBUG, respectively.
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.utils import (
+    expect,
+    safe_copy,
+    get_model,
+    get_src_root,
+    stringify_bool,
+    copy_local_macros_to_dir,
+)
+from CIME.XML.env_mach_specific import EnvMachSpecific
+from CIME.XML.files import Files
+from CIME.build import CmakeTmpBuildDir
+
+import shutil
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +def configure( + machobj, + output_dir, + macros_format, + compiler, + mpilib, + debug, + comp_interface, + sysos, + unit_testing=False, + noenv=False, + threaded=False, + extra_machines_dir=None, +): + """Add Macros, Depends, and env_mach_specific files to a directory. + + Arguments: + machobj - Machines argument for this machine. + output_dir - Directory in which to place output. + macros_format - Container containing the string 'Makefile' to produce + Makefile Macros output, and/or 'CMake' for CMake output. + compiler - String containing the compiler vendor to configure for. + mpilib - String containing the MPI implementation to configure for. + debug - Boolean specifying whether debugging options are enabled. + unit_testing - Boolean specifying whether we're running unit tests (as + opposed to a system run) + extra_machines_dir - String giving path to an additional directory that will be + searched for cmake_macros. + """ + new_cmake_macros_dir = Files(comp_interface=comp_interface).get_value( + "CMAKE_MACROS_DIR" + ) + for form in macros_format: + + if not os.path.isfile(os.path.join(output_dir, "Macros.cmake")): + safe_copy(os.path.join(new_cmake_macros_dir, "Macros.cmake"), output_dir) + output_cmake_macros_dir = os.path.join(output_dir, "cmake_macros") + if not os.path.exists(output_cmake_macros_dir): + shutil.copytree(new_cmake_macros_dir, output_cmake_macros_dir) + + copy_local_macros_to_dir( + output_cmake_macros_dir, extra_machdir=extra_machines_dir + ) + + if form == "Makefile": + # Use the cmake macros to generate the make macros + cmake_args = " -DOS={} -DMACH={} -DCOMPILER={} -DDEBUG={} -DMPILIB={} -Dcompile_threaded={} -DCASEROOT={}".format( + sysos, + machobj.get_machine_name(), + compiler, + stringify_bool(debug), + mpilib, + stringify_bool(threaded), + output_dir, + ) + + with CmakeTmpBuildDir(macroloc=output_dir) as cmaketmp: + output = cmaketmp.get_makefile_vars(cmake_args=cmake_args) + + with open(os.path.join(output_dir, 
"Macros.make"), "w") as fd: + fd.write(output) + + copy_depends_files( + machobj.get_machine_name(), machobj.machines_dir, output_dir, compiler + ) + generate_env_mach_specific( + output_dir, + machobj, + compiler, + mpilib, + debug, + comp_interface, + sysos, + unit_testing, + threaded, + noenv=noenv, + )
+ + + +
+[docs] +def copy_depends_files(machine_name, machines_dir, output_dir, compiler): + """ + Copy any system or compiler Depends files if they do not exist in the output directory + If there is a match for Depends.machine_name.compiler copy that and ignore the others + """ + # Note, the cmake build system does not stop if Depends.mach.compiler.cmake is found + makefiles_done = False + both = "{}.{}".format(machine_name, compiler) + for suffix in [both, machine_name, compiler]: + for extra_suffix in ["", ".cmake"]: + if extra_suffix == "" and makefiles_done: + continue + + basename = "Depends.{}{}".format(suffix, extra_suffix) + dfile = os.path.join(machines_dir, basename) + outputdfile = os.path.join(output_dir, basename) + if os.path.isfile(dfile): + if suffix == both and extra_suffix == "": + makefiles_done = True + if not os.path.exists(outputdfile): + safe_copy(dfile, outputdfile)
+ + + +
+[docs] +class FakeCase(object): + def __init__(self, compiler, mpilib, debug, comp_interface, threading=False): + # PIO_VERSION is needed to parse config_machines.xml but isn't otherwise used + # by FakeCase + self._vals = { + "COMPILER": compiler, + "MPILIB": mpilib, + "DEBUG": debug, + "COMP_INTERFACE": comp_interface, + "PIO_VERSION": 2, + "SMP_PRESENT": threading, + "MODEL": get_model(), + "SRCROOT": get_src_root(), + } + +
+[docs] + def get_build_threaded(self): + return self.get_value("SMP_PRESENT")
+ + +
+[docs] + def get_case_root(self): + """Returns the root directory for this case.""" + return self.get_value("CASEROOT")
+ + +
+[docs] + def get_value(self, attrib): + expect( + attrib in self._vals, + "FakeCase does not support getting value of '%s'" % attrib, + ) + return self._vals[attrib]
+ + +
+[docs] + def set_value(self, attrib, value): + """Sets a given variable value for the case""" + self._vals[attrib] = value
+
+ + + +
+[docs] +def generate_env_mach_specific( + output_dir, + machobj, + compiler, + mpilib, + debug, + comp_interface, + sysos, + unit_testing, + threaded, + noenv=False, +): + """ + env_mach_specific generation. + """ + ems_path = os.path.join(output_dir, "env_mach_specific.xml") + if os.path.exists(ems_path): + logger.warning("{} already exists, delete to replace".format(ems_path)) + return + + ems_file = EnvMachSpecific( + output_dir, unit_testing=unit_testing, standalone_configure=True + ) + ems_file.populate( + machobj, + attributes={"mpilib": mpilib, "compiler": compiler, "threaded": threaded}, + ) + ems_file.write() + + if noenv: + return + + fake_case = FakeCase(compiler, mpilib, debug, comp_interface) + ems_file.load_env(fake_case) + for shell in ("sh", "csh"): + ems_file.make_env_mach_specific_file(shell, fake_case, output_dir=output_dir) + shell_path = os.path.join(output_dir, ".env_mach_specific." + shell) + with open(shell_path, "a") as shell_file: + if shell == "sh": + shell_file.write("\nexport COMPILER={}\n".format(compiler)) + shell_file.write("export MPILIB={}\n".format(mpilib)) + shell_file.write("export DEBUG={}\n".format(repr(debug).upper())) + shell_file.write("export OS={}\n".format(sysos)) + else: + shell_file.write("\nsetenv COMPILER {}\n".format(compiler)) + shell_file.write("setenv MPILIB {}\n".format(mpilib)) + shell_file.write("setenv DEBUG {}\n".format(repr(debug).upper())) + shell_file.write("setenv OS {}\n".format(sysos))
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Servers/ftp.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Servers/ftp.html new file mode 100644 index 00000000000..e47b9dccac0 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Servers/ftp.html @@ -0,0 +1,250 @@ + + + + + + CIME.Servers.ftp — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.Servers.ftp

+"""
+FTP Server class.  Interact with a server using FTP protocol
+"""
+# pylint: disable=super-init-not-called
+from CIME.XML.standard_module_setup import *
+from CIME.Servers.generic_server import GenericServer
+from CIME.utils import Timeout
+from ftplib import FTP as FTPpy
+from ftplib import all_errors as all_ftp_errors
+import socket
+
+logger = logging.getLogger(__name__)
+# I think that multiple inheritance would be useful here, but I couldn't make it work
+# in a py2/3 compatible way.
+
+[docs] +class FTP(GenericServer): + def __init__(self, address, user="", passwd="", server=None): + if not user: + user = "" + if not passwd: + passwd = "" + expect(server, " Must call via ftp_login function") + root_address = address.split("/", 1)[1] + self.ftp = server + self._ftp_server = address + stat = self.ftp.login(user, passwd) + logger.debug("login stat {}".format(stat)) + if "Login successful" not in stat: + logging.warning( + "FAIL: Could not login to ftp server {}\n error {}".format( + address, stat + ) + ) + return None + try: + stat = self.ftp.cwd(root_address) + except all_ftp_errors as err: + logging.warning("ftplib returned error {}".format(err)) + return None + + logger.debug("cwd {} stat {}".format(root_address, stat)) + if "Directory successfully changed" not in stat: + logging.warning( + "FAIL: Could not cd to server root directory {}\n error {}".format( + root_address, stat + ) + ) + return None + +
+[docs] + @classmethod + def ftp_login(cls, address, user="", passwd=""): + ftp_server, root_address = address.split("/", 1) + logger.info("server address {} root path {}".format(ftp_server, root_address)) + try: + with Timeout(60): + ftp = FTPpy(ftp_server) + + except socket.error as e: + logger.warning("ftp login timeout! {} ".format(e)) + return None + except RuntimeError: + logger.warning("ftp login timeout!") + return None + result = None + try: + result = cls(address, user=user, passwd=passwd, server=ftp) + except all_ftp_errors as e: + logger.warning("ftp error: {}".format(e)) + + return result
+ + +
+[docs] + def fileexists(self, rel_path): + try: + stat = self.ftp.nlst(rel_path) + except all_ftp_errors: + logger.warning("ERROR from ftp server, trying next server") + return False + + if rel_path not in stat: + if not stat or not stat[0].startswith(rel_path): + logging.warning( + "FAIL: File {} not found.\nerror {}".format(rel_path, stat) + ) + return False + return True
+ + +
+
[docs]
+    def getfile(self, rel_path, full_path):
+        try:
+            stat = self.ftp.retrbinary(
+                "RETR {}".format(rel_path), open(full_path, "wb").write
+            )
+        except all_ftp_errors:
+            if os.path.isfile(full_path):
+                os.remove(full_path)
+            logger.warning("ERROR from ftp server, trying next server")
+            return False
+
+        if stat != "226 Transfer complete.":
+            logging.warning(
+                "FAIL: Failed to retrieve file '{}' from FTP repo '{}' stat={}\n".format(
+                    rel_path, self._ftp_server, stat
+                )
+            )
+            return False
+        return True
+ + +
+[docs] + def getdirectory(self, rel_path, full_path): + try: + stat = self.ftp.nlst(rel_path) + except all_ftp_errors: + logger.warning("ERROR from ftp server, trying next server") + return False + + for _file in stat: + self.getfile(_file, full_path + os.sep + os.path.basename(_file))
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Servers/generic_server.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Servers/generic_server.html new file mode 100644 index 00000000000..4b628a1c9c2 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Servers/generic_server.html @@ -0,0 +1,157 @@ + + + + + + CIME.Servers.generic_server — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.Servers.generic_server

+"""
+Generic Server class.  There should be little or no functionality in this class; it serves only
+to make sure that specific server classes maintain a consistent argument list and functionality
+so that they are interchangeable objects.
+"""
+# pylint: disable=unused-argument
+
+from CIME.XML.standard_module_setup import *
+from socket import _GLOBAL_DEFAULT_TIMEOUT
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class GenericServer(object): + def __init__( + self, host=" ", user=" ", passwd=" ", acct=" ", timeout=_GLOBAL_DEFAULT_TIMEOUT + ): + raise NotImplementedError + +
+[docs] + def fileexists(self, rel_path): + """Returns True if rel_path exists on server""" + raise NotImplementedError
+ + +
+[docs] + def getfile(self, rel_path, full_path): + """Get file from rel_path on server and place in location full_path on client + fail if full_path already exists on client, return True if successful""" + raise NotImplementedError
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Servers/gftp.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Servers/gftp.html new file mode 100644 index 00000000000..2f3d827fb89 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Servers/gftp.html @@ -0,0 +1,192 @@ + + + + + + CIME.Servers.gftp — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.Servers.gftp

+"""
+GridFTP Server class.  Interact with a server using GridFTP protocol
+"""
+# pylint: disable=super-init-not-called
+from CIME.XML.standard_module_setup import *
+from CIME.Servers.generic_server import GenericServer
+from CIME.utils import run_cmd
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class GridFTP(GenericServer): + def __init__(self, address, user="", passwd=""): + self._root_address = address + +
+[docs] + def fileexists(self, rel_path): + stat, out, err = run_cmd( + "globus-url-copy -list {}".format( + os.path.join(self._root_address, os.path.dirname(rel_path)) + os.sep + ) + ) + if stat or os.path.basename(rel_path) not in out: + logging.warning( + "FAIL: File {} not found.\nstat={} error={}".format(rel_path, stat, err) + ) + return False + return True
+ + +
+[docs] + def getfile(self, rel_path, full_path): + stat, _, err = run_cmd( + "globus-url-copy -v {} file://{}".format( + os.path.join(self._root_address, rel_path), full_path + ) + ) + + if stat != 0: + logging.warning( + "FAIL: GridFTP repo '{}' does not have file '{}' error={}\n".format( + self._root_address, rel_path, err + ) + ) + return False + return True
+ + +
+[docs] + def getdirectory(self, rel_path, full_path): + stat, _, err = run_cmd( + "globus-url-copy -v -r {}{} file://{}{}".format( + os.path.join(self._root_address, rel_path), os.sep, full_path, os.sep + ) + ) + + if stat != 0: + logging.warning( + "FAIL: GridFTP repo '{}' does not have directory '{}' error={}\n".format( + self._root_address, rel_path, err + ) + ) + return False + return True
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Servers/svn.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Servers/svn.html new file mode 100644 index 00000000000..01d008c9789 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Servers/svn.html @@ -0,0 +1,222 @@ + + + + + + CIME.Servers.svn — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.Servers.svn

+"""
+SVN Server class.  Interact with a server using the SVN protocol.
+"""
+# pylint: disable=super-init-not-called
+from CIME.XML.standard_module_setup import *
+from CIME.Servers.generic_server import GenericServer
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class SVN(GenericServer): + def __init__(self, address, user="", passwd=""): + self._args = "" + if user: + self._args += "--username {}".format(user) + if passwd: + self._args += "--password {}".format(passwd) + + self._svn_loc = address + + err = run_cmd( + "svn --non-interactive --trust-server-cert {} ls {}".format( + self._args, address + ) + )[0] + if err != 0: + logging.warning( + """ +Could not connect to svn repo '{0}' +This is most likely either a credential, proxy, or network issue . +To check connection and store your credential run 'svn ls {0}' and permanently store your password""".format( + address + ) + ) + return None + +
+[docs] + def fileexists(self, rel_path): + full_url = os.path.join(self._svn_loc, rel_path) + stat, out, err = run_cmd( + "svn --non-interactive --trust-server-cert {} ls {}".format( + self._args, full_url + ) + ) + if stat != 0: + logging.warning( + "FAIL: SVN repo '{}' does not have file '{}'\nReason:{}\n{}\n".format( + self._svn_loc, full_url, out, err + ) + ) + return False + return True
+ + +
+[docs] + def getfile(self, rel_path, full_path): + if not rel_path: + return False + full_url = os.path.join(self._svn_loc, rel_path) + stat, output, errput = run_cmd( + "svn --non-interactive --trust-server-cert {} export {} {}".format( + self._args, full_url, full_path + ) + ) + if stat != 0: + logging.warning( + "svn export failed with output: {} and errput {}\n".format( + output, errput + ) + ) + return False + else: + logging.info("SUCCESS\n") + return True
+ + +
+[docs] + def getdirectory(self, rel_path, full_path): + full_url = os.path.join(self._svn_loc, rel_path) + stat, output, errput = run_cmd( + "svn --non-interactive --trust-server-cert {} export --force {} {}".format( + self._args, full_url, full_path + ) + ) + if stat != 0: + logging.warning( + "svn export failed with output: {} and errput {}\n".format( + output, errput + ) + ) + return False + else: + logging.info("SUCCESS\n") + return True
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Servers/wget.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Servers/wget.html new file mode 100644 index 00000000000..d6a9d37495f --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Servers/wget.html @@ -0,0 +1,238 @@ + + + + + + CIME.Servers.wget — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.Servers.wget

+"""
+WGET Server class.  Interact with a server using the wget utility.
+"""
+# pylint: disable=super-init-not-called
+from CIME.XML.standard_module_setup import *
+from CIME.Servers.generic_server import GenericServer
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class WGET(GenericServer): + def __init__(self, address, user="", passwd=""): + self._args = "--no-check-certificate " + if user: + self._args += "--user {} ".format(user) + if passwd: + self._args += "--password {} ".format(passwd) + self._server_loc = address + +
+[docs] + @classmethod + def wget_login(cls, address, user="", passwd=""): + args = "--no-check-certificate " + if user: + args += "--user {} ".format(user) + if passwd: + args += "--password {} ".format(passwd) + + try: + err = run_cmd("wget {} --spider {}".format(args, address), timeout=60)[0] + except: + logger.warning( + "Could not connect to repo '{0}'\nThis is most likely either a proxy, or network issue .(location 1)".format( + address + ) + ) + return None + + if err and not "storage.neonscience.org" in address: + logger.warning( + "Could not connect to repo '{0}'\nThis is most likely either a proxy, or network issue .(location 2)".format( + address + ) + ) + return None + + return cls(address, user=user, passwd=passwd)
+ + +
+[docs] + def fileexists(self, rel_path): + full_url = os.path.join(self._server_loc, rel_path) + stat, out, err = run_cmd("wget {} --spider {}".format(self._args, full_url)) + + if stat != 0: + logging.warning( + "FAIL: Repo '{}' does not have file '{}'\nReason:{}\n{}\n".format( + self._server_loc, full_url, out, err + ) + ) + return False + return True
+ + +
+[docs] + def getfile(self, rel_path, full_path): + full_url = os.path.join(self._server_loc, rel_path) + stat, output, errput = run_cmd( + "wget {} {} -nc --output-document {}".format( + self._args, full_url, full_path + ) + ) + if stat != 0: + logging.warning( + "wget failed with output: {} and errput {}\n".format(output, errput) + ) + # wget puts an empty file if it fails. + try: + os.remove(full_path) + except OSError: + pass + return False + else: + logging.info("SUCCESS\n") + return True
+ + +
+[docs] + def getdirectory(self, rel_path, full_path): + full_url = os.path.join(self._server_loc, rel_path) + stat, output, errput = run_cmd( + "wget {} {} -r -N --no-directories ".format(self._args, full_url + os.sep), + from_dir=full_path, + ) + logger.debug(output) + logger.debug(errput) + if stat != 0: + logging.warning( + "wget failed with output: {} and errput {}\n".format(output, errput) + ) + # wget puts an empty file if it fails. + try: + os.remove(full_path) + except OSError: + pass + return False + else: + logging.info("SUCCESS\n") + return True
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/dae.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/dae.html new file mode 100644 index 00000000000..15009eed3a9 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/dae.html @@ -0,0 +1,341 @@ + + + + + + CIME.SystemTests.dae — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.dae

+"""
+Implementation of the CIME data assimilation test:
+Compares a standard run with a run broken into two data assimilation cycles.
+Runs a simple DA script on each cycle which performs checks but does not
+change any model state (restart files). Compares answers of two runs.
+
+"""
+
+import os.path
+import logging
+import glob
+import gzip
+
+from CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo
+from CIME.utils import expect
+
+###############################################################################
+
+[docs] +class DAE(SystemTestsCompareTwo): + ############################################################################### + """ + Implementation of the CIME data assimilation test: + Compares standard run with a run broken into two data assimilation cycles. + Runs a simple DA script on each cycle which performs checks but does not + change any model state (restart files). Compares answers of two runs. + Refers to a faux data assimilation script in the + cime/scripts/data_assimilation directory + """ + + ########################################################################### + def __init__(self, case, **kwargs): + ########################################################################### + SystemTestsCompareTwo.__init__( + self, + case, + separate_builds=False, + run_two_suffix="da", + run_one_description="no data assimilation", + run_two_description="data assimilation", + **kwargs, + ) + + ########################################################################### + def _case_one_setup(self): + ########################################################################### + # Even though there may be test mods turning on data assimilation, + # case1 is the control so turn it off + self._case.set_value("DATA_ASSIMILATION_SCRIPT", "") + self._case.set_value("DATA_ASSIMILATION_CYCLES", 1) + + ########################################################################### + def _case_two_setup(self): + ########################################################################### + # Allow testmods to set an assimilation script + if len(self._case.get_value("DATA_ASSIMILATION_SCRIPT")) == 0: + # We need to find the scripts/data_assimilation directory + # LIB_DIR should be our parent dir + da_dir = os.path.join( + self._case.get_value("CIMEROOT"), "scripts/data_assimilation" + ) + expect( + os.path.isdir(da_dir), + "ERROR: da_dir, '{}', does not exist".format(da_dir), + ) + da_file = os.path.join(da_dir, "da_no_data_mod.sh") + expect( + os.path.isfile(da_file), + "ERROR: 
da_file, '{}', does not exist".format(da_file), + ) + # Set up two data assimilation cycles each half of the full run + self._case.set_value("DATA_ASSIMILATION_SCRIPT", da_file) + + # We need at least 2 DA cycles + da_cycles = self._case.get_value("DATA_ASSIMILATION_CYCLES") + if da_cycles < 2: + da_cycles = 2 + self._case.set_value("DATA_ASSIMILATION_CYCLES", da_cycles) + stopn = self._case.get_value("STOP_N") + expect( + (stopn % da_cycles) == 0, + "ERROR: DAE test with {0} cycles requires that STOP_N be divisible by {0}".format( + da_cycles + ), + ) + stopn = int(stopn / da_cycles) + self._case.set_value("STOP_N", stopn) + + self._case.flush() + + ########################################################################### +
+[docs] + def run_phase(self): # pylint: disable=arguments-differ + ########################################################################### + # Clean up any da.log files in case this is a re-run. + self._activate_case2() + case_root = self._get_caseroot2() + rundir2 = self._case.get_value("RUNDIR") + da_files = glob.glob(os.path.join(rundir2, "da.log.*")) + for file_ in da_files: + os.remove(file_) + # End for + + # CONTINUE_RUN ends up TRUE, set it back in case this is a re-run. + with self._case: + self._case.set_value("CONTINUE_RUN", False) + # Turn off post DA in case this is a re-run + for comp in self._case.get_values("COMP_CLASSES"): + if comp == "ESP": + continue + else: + self._case.set_value("DATA_ASSIMILATION_{}".format(comp), False) + + # Start normal run here + self._activate_case1() + SystemTestsCompareTwo.run_phase(self) + + # Do some checks on the data assimilation 'output' from case2 + self._activate_case2() + da_files = glob.glob(os.path.join(rundir2, "da.log.*")) + if da_files is None: + logger = logging.getLogger(__name__) + path = os.path.join(case_root, "da.log.*") + logger.warning("No DA files in {}".format(path)) + + da_cycles = self._case.get_value("DATA_ASSIMILATION_CYCLES") + expect( + (da_files is not None) and (len(da_files) == da_cycles), + "ERROR: There were {:d} DA cycles in run but {:d} DA files were found".format( + da_cycles, len(da_files) if da_files is not None else 0 + ), + ) + da_files.sort() + cycle_num = 0 + compset = self._case.get_value("COMPSET") + # Special case for DWAV so we can make sure other variables are set + is_dwav = "_DWAV" in compset + for fname in da_files: + found_caseroot = False + found_cycle = False + found_signal = 0 + found_init = 0 + if is_dwav: + expected_init = self._case.get_value("NINST_WAV") + else: + # Expect a signal from every instance of every DA component + expected_init = 0 + for comp in self._case.get_values("COMP_CLASSES"): + if comp == "ESP": + continue + elif 
self._case.get_value("DATA_ASSIMILATION_{}".format(comp)): + expected_init = expected_init + self._case.get_value( + "NINST_{}".format(comp) + ) + + # Adjust expected initial run and post-DA numbers + if cycle_num == 0: + expected_signal = 0 + else: + expected_signal = expected_init + expected_init = 0 + + with gzip.open(fname, "r") as dfile: + for bline in dfile: + line = bline.decode("utf-8") + expect( + not "ERROR" in line, + "ERROR, error line {} found in {}".format(line, fname), + ) + if "caseroot" in line[0:8]: + found_caseroot = True + elif "cycle" in line[0:5]: + found_cycle = True + expect( + int(line[7:]) == cycle_num, + "ERROR: Wrong cycle ({:d}) found in {} (expected {:d})".format( + int(line[7:]), fname, cycle_num + ), + ) + elif "resume signal" in line: + found_signal = found_signal + 1 + expect( + "Post-DA resume signal found" in line[0:27], + "ERROR: bad post-DA message found in {}".format(fname), + ) + elif "Initial run" in line: + found_init = found_init + 1 + expect( + "Initial run signal found" in line[0:24], + "ERROR: bad Initial run message found in {}".format(fname), + ) + else: + expect( + False, + "ERROR: Unrecognized line ('{}') found in {}".format( + line, fname + ), + ) + + # End for + expect(found_caseroot, "ERROR: No caseroot found in {}".format(fname)) + expect(found_cycle, "ERROR: No cycle found in {}".format(fname)) + expect( + found_signal == expected_signal, + "ERROR: Expected {} post-DA resume signal message(s), {} found in {}".format( + expected_signal, found_signal, fname + ), + ) + expect( + found_init == expected_init, + "ERROR: Expected {} Initial run message(s), {} found in {}".format( + expected_init, found_init, fname + ), + ) + # End with + cycle_num = cycle_num + 1
+
+ + # End for +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/eri.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/eri.html new file mode 100644 index 00000000000..911f75a0cbd --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/eri.html @@ -0,0 +1,398 @@ + + + + + + CIME.SystemTests.eri — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.eri

+"""
+CIME ERI test.  This class inherits from SystemTestsCommon.
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.utils import safe_copy
+from CIME.SystemTests.system_tests_common import SystemTestsCommon
+from stat import S_ISDIR, ST_CTIME, ST_MODE
+import shutil, glob, os
+
+logger = logging.getLogger(__name__)
+
+
+def _get_rest_date(archive_root):
+    restdir = os.path.join(archive_root, "rest")
+    # get all entries in the directory w/ stats
+    entries = (os.path.join(restdir, fn) for fn in os.listdir(restdir))
+    entries = ((os.stat(path), path) for path in entries)
+    entries = sorted(
+        (stat[ST_CTIME], path) for stat, path in entries if S_ISDIR(stat[ST_MODE])
+    )
+    last_dir = os.path.basename(entries[-1][1])
+    ref_sec = last_dir[-5:]
+    ref_date = last_dir[:10]
+    return ref_date, ref_sec
+
+
+def _helper(dout_sr, refdate, refsec, rundir):
+    rest_path = os.path.join(dout_sr, "rest", "{}-{}".format(refdate, refsec))
+
+    for item in glob.glob("{}/*{}*".format(rest_path, refdate)):
+        dst = os.path.join(rundir, os.path.basename(item))
+        if os.path.exists(dst):
+            os.remove(dst)
+        os.symlink(item, dst)
+
+    for item in glob.glob("{}/*rpointer*".format(rest_path)):
+        safe_copy(item, rundir)
+
+
+
+[docs] +class ERI(SystemTestsCommon): + def __init__(self, case, **kwargs): + """ + initialize an object interface to the ERI system test + """ + SystemTestsCommon.__init__(self, case, **kwargs) + self._testname = "ERI" + +
+[docs] + def run_phase(self): + caseroot = self._case.get_value("CASEROOT") + clone1_path = "{}.ref1".format(caseroot) + clone2_path = "{}.ref2".format(caseroot) + # self._case.set_value("CHECK_TIMING", False) + + # + # clone the main case to create ref1 and ref2 cases + # + for clone_path in [clone1_path, clone2_path]: + if os.path.exists(clone_path): + shutil.rmtree(clone_path) + + clone1, clone2 = [ + self._case.create_clone(clone_path, keepexe=True) + for clone_path in [clone1_path, clone2_path] + ] + orig_case = self._case + orig_casevar = orig_case.get_value("CASE") + # + # determine run lengths needed below + # + stop_n = self._case.get_value("STOP_N") + stop_option = self._case.get_value("STOP_OPTION") + run_startdate = self._case.get_value("RUN_STARTDATE") + start_tod = self._case.get_value("START_TOD") + if start_tod == 0: + start_tod = "00000" + + stop_n1 = int(stop_n / 6) + rest_n1 = stop_n1 + start_1 = run_startdate + + stop_n2 = stop_n - stop_n1 + rest_n2 = int(stop_n2 / 2 + 1) + hist_n = stop_n2 + + start_1_year, start_1_month, start_1_day = [ + int(item) for item in start_1.split("-") + ] + start_2_year = start_1_year + 2 + start_2 = "{:04d}-{:02d}-{:02d}".format( + start_2_year, start_1_month, start_1_day + ) + + stop_n3 = stop_n2 - rest_n2 + rest_n3 = int(stop_n3 / 2 + 1) + + stop_n4 = stop_n3 - rest_n3 + + expect(stop_n4 >= 1 and stop_n1 >= 1, "Run length too short") + + # + # (1) Test run: + # do an initial ref1 case run + # cloned the case and running there + # (NOTE: short term archiving is on) + # + + os.chdir(clone1_path) + self._set_active_case(clone1) + + logger.info( + "ref1 startup: doing a {} {} startup run from {} and {} seconds".format( + stop_n1, stop_option, start_1, start_tod + ) + ) + logger.info(" writing restarts at {} {}".format(rest_n1, stop_option)) + logger.info(" short term archiving is on ") + + with clone1: + clone1.set_value("CONTINUE_RUN", False) + clone1.set_value("RUN_STARTDATE", start_1) + clone1.set_value("STOP_N", 
stop_n1) + clone1.set_value("REST_OPTION", stop_option) + clone1.set_value("REST_N", rest_n1) + clone1.set_value("HIST_OPTION", "never") + + dout_sr1 = clone1.get_value("DOUT_S_ROOT") + + # force cam/eam namelist to write out initial file at end of run + for model in ["cam", "eam"]: + user_nl = "user_nl_{}".format(model) + if os.path.exists(user_nl): + if "inithist" not in open(user_nl, "r").read(): + with open(user_nl, "a") as fd: + fd.write("inithist = 'ENDOFRUN'\n") + + with clone1: + clone1.case_setup(test_mode=True, reset=True) + # if the initial case is hybrid this will put the reference data in the correct location + clone1.check_all_input_data() + + self._skip_pnl = False + self.run_indv(st_archive=True, suffix=None) + + # + # (2) Test run: + # do a hybrid ref2 case run + # cloned the main case and running with ref1 restarts + # (NOTE: short term archiving is on) + # + + os.chdir(clone2_path) + self._set_active_case(clone2) + + # Set startdate to start2, set ref date based on ref1 restart + refdate_2, refsec_2 = _get_rest_date(dout_sr1) + + logger.info( + "ref2 hybrid: doing a {} {} startup hybrid run".format(stop_n2, stop_option) + ) + logger.info( + " starting from {} and using ref1 {} and {} seconds".format( + start_2, refdate_2, refsec_2 + ) + ) + logger.info(" writing restarts at {} {}".format(rest_n2, stop_option)) + logger.info(" short term archiving is on ") + + # setup ref2 case + with clone2: + clone2.set_value("RUN_TYPE", "hybrid") + clone2.set_value("RUN_STARTDATE", start_2) + clone2.set_value("RUN_REFCASE", "{}.ref1".format(orig_casevar)) + clone2.set_value("RUN_REFDATE", refdate_2) + clone2.set_value("RUN_REFTOD", refsec_2) + clone2.set_value("GET_REFCASE", False) + clone2.set_value("CONTINUE_RUN", False) + clone2.set_value("STOP_N", stop_n2) + clone2.set_value("REST_OPTION", stop_option) + clone2.set_value("REST_N", rest_n2) + clone2.set_value("HIST_OPTION", stop_option) + clone2.set_value("HIST_N", hist_n) + + rundir2 = 
clone2.get_value("RUNDIR") + dout_sr2 = clone2.get_value("DOUT_S_ROOT") + + _helper(dout_sr1, refdate_2, refsec_2, rundir2) + + # run ref2 case (all component history files will go to short term archiving) + with clone2: + clone2.case_setup(test_mode=True, reset=True) + + self._skip_pnl = False + self.run_indv(suffix="hybrid", st_archive=True) + + # + # (3a) Test run: + # do a branch run from ref2 restart (short term archiving is off) + # + + os.chdir(caseroot) + self._set_active_case(orig_case) + refdate_3, refsec_3 = _get_rest_date(dout_sr2) + + logger.info("branch: doing a {} {} branch".format(stop_n3, stop_option)) + logger.info( + " starting from ref2 {} and {} seconds restarts".format( + refdate_3, refsec_3 + ) + ) + logger.info(" writing restarts at {} {}".format(rest_n3, stop_option)) + logger.info(" short term archiving is off") + + self._case.set_value("RUN_TYPE", "branch") + self._case.set_value( + "RUN_REFCASE", "{}.ref2".format(self._case.get_value("CASE")) + ) + self._case.set_value("RUN_REFDATE", refdate_3) + self._case.set_value("RUN_REFTOD", refsec_3) + self._case.set_value("GET_REFCASE", False) + self._case.set_value("CONTINUE_RUN", False) + self._case.set_value("STOP_N", stop_n3) + self._case.set_value("REST_OPTION", stop_option) + self._case.set_value("REST_N", rest_n3) + self._case.set_value("HIST_OPTION", stop_option) + self._case.set_value("HIST_N", stop_n2) + self._case.set_value("DOUT_S", False) + self._case.flush() + + rundir = self._case.get_value("RUNDIR") + if not os.path.exists(rundir): + os.makedirs(rundir) + + _helper(dout_sr2, refdate_3, refsec_3, rundir) + + # link the hybrid history files from ref2 to the run dir for comparison + for item in glob.iglob("%s/*.hybrid" % rundir2): + newfile = "{}".format(item.replace(".ref2", "")) + newfile = os.path.basename(newfile) + dst = os.path.join(rundir, newfile) + if os.path.exists(dst): + os.remove(dst) + os.symlink(item, dst) + + self._skip_pnl = False + # run branch case (short term 
archiving is off) + self.run_indv() + + # + # (3b) Test run: + # do a restart continue from (3a) (short term archiving off) + # + + logger.info( + "branch restart: doing a {} {} continue restart test".format( + stop_n4, stop_option + ) + ) + + self._case.set_value("CONTINUE_RUN", True) + self._case.set_value("STOP_N", stop_n4) + self._case.set_value("REST_OPTION", "never") + self._case.set_value("DOUT_S", False) + self._case.set_value("HIST_OPTION", stop_option) + self._case.set_value("HIST_N", hist_n) + self._case.flush() + + # do the restart run (short term archiving is off) + self.run_indv(suffix="rest") + + self._component_compare_test("base", "hybrid") + self._component_compare_test("base", "rest")
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/erio.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/erio.html new file mode 100644 index 00000000000..0c9f76c84cb --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/erio.html @@ -0,0 +1,209 @@ + + + + + + CIME.SystemTests.erio — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.erio

+"""
+ERIO tests restart with different PIO methods
+
+This class inherits from SystemTestsCommon
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.system_tests_common import SystemTestsCommon
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class ERIO(SystemTestsCommon): + def __init__(self, case, **kwargs): + """ + initialize an object interface to file env_test.xml in the case directory + """ + SystemTestsCommon.__init__(self, case, expected=["TEST"], **kwargs) + + self._pio_types = self._case.get_env("run").get_valid_values("PIO_TYPENAME") + self._stop_n = self._case.get_value("STOP_N") + + def _full_run(self, pio_type): + stop_option = self._case.get_value("STOP_OPTION") + expect(self._stop_n > 0, "Bad STOP_N: {:d}".format(self._stop_n)) + + # Move to config_tests.xml once that's ready + rest_n = int(self._stop_n / 2) + 1 + self._case.set_value("REST_N", rest_n) + self._case.set_value("REST_OPTION", stop_option) + self._case.set_value("HIST_N", self._stop_n) + self._case.set_value("HIST_OPTION", stop_option) + self._case.set_value("CONTINUE_RUN", False) + self._case.flush() + + expect( + self._stop_n > 2, "ERROR: stop_n value {:d} too short".format(self._stop_n) + ) + logger.info( + "doing an {0} {1} initial test with restart file at {2} {1} with pio type {3}".format( + str(self._stop_n), stop_option, str(rest_n), pio_type + ) + ) + self.run_indv(suffix=pio_type) + + def _restart_run(self, pio_type, other_pio_type): + stop_option = self._case.get_value("STOP_OPTION") + + rest_n = int(self._stop_n / 2) + 1 + stop_new = self._stop_n - rest_n + expect( + stop_new > 0, + "ERROR: stop_n value {:d} too short {:d} {:d}".format( + stop_new, self._stop_n, rest_n + ), + ) + + self._case.set_value("STOP_N", stop_new) + self._case.set_value("CONTINUE_RUN", True) + self._case.set_value("REST_OPTION", "never") + self._case.flush() + logger.info( + "doing an {} {} restart test with {} against {}".format( + str(stop_new), stop_option, pio_type, other_pio_type + ) + ) + + suffix = "{}.{}".format(other_pio_type, pio_type) + self.run_indv(suffix=suffix) + + # Compare restart file + self._component_compare_test(other_pio_type, suffix) + +
+[docs] + def run_phase(self): + + for idx, pio_type1 in enumerate(self._pio_types): + if pio_type1 != "default" and pio_type1 != "nothing": + self._case.set_value("PIO_TYPENAME", pio_type1) + self._full_run(pio_type1) + for pio_type2 in self._pio_types[idx + 1 :]: + if pio_type2 != "default" and pio_type2 != "nothing": + self._case.set_value("PIO_TYPENAME", pio_type2) + self._restart_run(pio_type2, pio_type1)
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/erp.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/erp.html new file mode 100644 index 00000000000..a4afcfd3a90 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/erp.html @@ -0,0 +1,175 @@ + + + + + + CIME.SystemTests.erp — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.erp

+"""
+CIME ERP test.  This class inherits from RestartTest
+
+This is a PE-count hybrid (OpenMP/MPI) bit-for-bit (bfb) restart test from
+startup.  This is just like an ERS test, but the PE counts and thread
+counts are modified on restart.
+(1) Do an initial run with pes set up out of the box (suffix base)
+(2) Do a restart test with half the number of tasks and threads (suffix rest)
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.restart_tests import RestartTest
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class ERP(RestartTest): + def __init__(self, case, **kwargs): + """ + initialize a test object + """ + RestartTest.__init__( + self, + case, + separate_builds=True, + run_two_suffix="rest", + run_one_description="initial", + run_two_description="restart", + **kwargs + ) + + def _case_two_setup(self): + # halve the number of tasks and threads + for comp in self._case.get_values("COMP_CLASSES"): + ntasks = self._case1.get_value("NTASKS_{}".format(comp)) + nthreads = self._case1.get_value("NTHRDS_{}".format(comp)) + rootpe = self._case1.get_value("ROOTPE_{}".format(comp)) + if nthreads > 1: + self._case.set_value("NTHRDS_{}".format(comp), int(nthreads / 2)) + if ntasks > 1: + self._case.set_value("NTASKS_{}".format(comp), int(ntasks / 2)) + self._case.set_value("ROOTPE_{}".format(comp), int(rootpe / 2)) + + RestartTest._case_two_setup(self) + self._case.case_setup(test_mode=True, reset=True) + # Note, some components, like CESM-CICE, have + # decomposition information in env_build.xml that + # needs to be regenerated for the above new tasks and thread counts + + def _case_one_custom_postrun_action(self): + self.copy_case1_restarts_to_case2()
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/err.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/err.html new file mode 100644 index 00000000000..8bdd5422865 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/err.html @@ -0,0 +1,181 @@ + + + + + + CIME.SystemTests.err — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.err

+"""
+CIME ERR test.  This class inherits from RestartTest.
+ERR tests short term archiving and restart capabilities
+"""
+import glob, os
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.restart_tests import RestartTest
+from CIME.utils import ls_sorted_by_mtime, safe_copy
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class ERR(RestartTest): + def __init__(self, case, **kwargs): # pylint: disable=super-init-not-called + """ + initialize an object interface to the ERR system test + """ + super(ERR, self).__init__( + case, + separate_builds=False, + run_two_suffix="rest", + run_one_description="initial", + run_two_description="restart", + multisubmit=True, + **kwargs + ) + + def _case_one_setup(self): + super(ERR, self)._case_one_setup() + self._case.set_value("DOUT_S", True) + + def _case_two_setup(self): + super(ERR, self)._case_two_setup() + self._case.set_value("DOUT_S", False) + + def _case_two_custom_prerun_action(self): + dout_s_root = self._case1.get_value("DOUT_S_ROOT") + rest_root = os.path.abspath(os.path.join(dout_s_root, "rest")) + restart_list = ls_sorted_by_mtime(rest_root) + expect(len(restart_list) >= 1, "No restart files found in {}".format(rest_root)) + self._case.restore_from_archive( + rest_dir=os.path.join(rest_root, restart_list[0]) + ) + + def _case_two_custom_postrun_action(self): + # Link back to original case1 name + # This is needed so that the necessary files are present for + # baseline comparison and generation, + # since some of them may have been moved to the archive directory + for case_file in glob.iglob( + os.path.join( + self._case1.get_value("RUNDIR"), "*.nc.{}".format(self._run_one_suffix) + ) + ): + orig_file = case_file[: -(1 + len(self._run_one_suffix))] + if not os.path.isfile(orig_file): + safe_copy(case_file, orig_file)
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/erri.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/erri.html new file mode 100644 index 00000000000..7d7049e45f4 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/erri.html @@ -0,0 +1,153 @@ + + + + + + CIME.SystemTests.erri — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.erri

+"""
+CIME ERRI test.  This class inherits from ERR.
+ERRI tests short term archiving and restart capabilities with "incomplete" (unzipped) log files
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.err import ERR
+
+import shutil, glob, gzip
+
+logger = logging.getLogger(__name__)
+
+
+
+
+[docs]
+class ERRI(ERR):
+    def __init__(self, case, **kwargs):
+        """
+        initialize an object interface to the ERRI system test
+        """
+        ERR.__init__(self, case, **kwargs)
+
+    def _case_two_custom_postrun_action(self):
+        rundir = self._case.get_value("RUNDIR")
+        for logname_gz in glob.glob(os.path.join(rundir, "*.log*.gz")):
+            # gzipped logfile names are of the form $LOGNAME.gz
+            # Removing the last three characters restores the original name
+            logname = logname_gz[:-3]
+            with gzip.open(logname_gz, "rb") as f_in, open(logname, "w") as f_out:
+                shutil.copyfileobj(f_in, f_out)
+            os.remove(logname_gz)
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/ers.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/ers.html new file mode 100644 index 00000000000..e70ed537fb7 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/ers.html @@ -0,0 +1,192 @@ + + + + + + CIME.SystemTests.ers — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.ers

+"""
+CIME restart test. This class inherits from SystemTestsCommon.
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.system_tests_common import SystemTestsCommon
+import glob
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class ERS(SystemTestsCommon): + def __init__(self, case, **kwargs): + """ + initialize an object interface to the ERS system test + """ + SystemTestsCommon.__init__(self, case, **kwargs) + + def _ers_first_phase(self): + stop_n = self._case.get_value("STOP_N") + stop_option = self._case.get_value("STOP_OPTION") + rest_n = self._case.get_value("REST_N") + expect(stop_n > 0, "Bad STOP_N: {:d}".format(stop_n)) + + expect(stop_n > 2, "ERROR: stop_n value {:d} too short".format(stop_n)) + logger.info( + "doing an {0} {1} initial test with restart file at {2} {1}".format( + str(stop_n), stop_option, str(rest_n) + ) + ) + self.run_indv() + + def _ers_second_phase(self): + stop_n = self._case.get_value("STOP_N") + stop_option = self._case.get_value("STOP_OPTION") + + rest_n = int(stop_n / 2 + 1) + stop_new = stop_n - rest_n + expect( + stop_new > 0, + "ERROR: stop_n value {:d} too short {:d} {:d}".format( + stop_new, stop_n, rest_n + ), + ) + rundir = self._case.get_value("RUNDIR") + for pfile in glob.iglob(os.path.join(rundir, "PET*")): + os.rename( + pfile, + os.path.join(os.path.dirname(pfile), "run1." + os.path.basename(pfile)), + ) + + self._case.set_value("HIST_N", stop_n) + self._case.set_value("STOP_N", stop_new) + self._case.set_value("CONTINUE_RUN", True) + self._case.set_value("REST_OPTION", "never") + self._case.flush() + logger.info("doing an {} {} restart test".format(str(stop_new), stop_option)) + self._skip_pnl = False + self.run_indv(suffix="rest") + + # Compare restart file + self._component_compare_test("base", "rest") + +
+[docs] + def run_phase(self): + self._ers_first_phase() + self._ers_second_phase()
+
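The second phase above splits the original run at roughly the halfway point. A minimal sketch of that STOP_N/REST_N arithmetic (a hypothetical helper, not part of CIME):

```python
def split_stop(stop_n):
    # Mirrors the ERS second-phase arithmetic: the restart is written
    # after rest_n units, and the restarted run covers the remaining
    # stop_new units so both runs together span the full stop_n.
    rest_n = stop_n // 2 + 1
    stop_new = stop_n - rest_n
    return rest_n, stop_new

# For an 11-unit run: restart after 6 units, then run the final 5.
print(split_stop(11))
```

This also shows why ERS requires `STOP_N > 2`: for any shorter run, `stop_new` would not be positive and there would be nothing left to restart into.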
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/ers2.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/ers2.html new file mode 100644 index 00000000000..8cdcbbdb869 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/ers2.html @@ -0,0 +1,192 @@ + + + + + + CIME.SystemTests.ers2 — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.ers2

+"""
+CIME restart test 2. This class inherits from SystemTestsCommon.
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.system_tests_common import SystemTestsCommon
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class ERS2(SystemTestsCommon): + def __init__(self, case, **kwargs): + """ + initialize an object interface to the ERS2 system test + """ + SystemTestsCommon.__init__(self, case, **kwargs) + + def _ers2_first_phase(self): + stop_n = self._case.get_value("STOP_N") + stop_option = self._case.get_value("STOP_OPTION") + rest_n = self._case.get_value("REST_N") + + # Don't need restarts for first run + self._case.set_value("REST_OPTION", "never") + + expect(stop_n > 0, "Bad STOP_N: {:d}".format(stop_n)) + expect(stop_n > 2, "ERROR: stop_n value {:d} too short".format(stop_n)) + + logger.info( + "doing an {0} {1} initial test with restart file at {2} {1}".format( + str(stop_n), stop_option, str(rest_n) + ) + ) + self.run_indv() + + def _ers2_second_phase(self): + stop_n = self._case.get_value("STOP_N") + stop_option = self._case.get_value("STOP_OPTION") + + rest_n = int(stop_n / 2 + 1) + stop_new = rest_n + + self._case.set_value("REST_OPTION", stop_option) + self._case.set_value("STOP_N", stop_new) + self._case.flush() + logger.info( + "doing first part {} {} restart test".format(str(stop_new), stop_option) + ) + self.run_indv(suffix="intermediate") + + stop_new = int(stop_n - rest_n) + self._case.set_value("STOP_N", stop_new) + self._case.set_value("CONTINUE_RUN", True) + self._case.set_value("REST_OPTION", "never") + + logger.info( + "doing second part {} {} restart test".format(str(stop_new), stop_option) + ) + self.run_indv(suffix="rest") + + # Compare restart file + self._component_compare_test("base", "rest") + +
+[docs] + def run_phase(self): + self._ers2_first_phase() + self._ers2_second_phase()
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/ert.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/ert.html new file mode 100644 index 00000000000..bc0842bc34b --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/ert.html @@ -0,0 +1,176 @@ + + + + + + CIME.SystemTests.ert — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.ert

+"""
+CIME production restart test. This class inherits from SystemTestsCommon.
+Exact restart from startup; default 2 months + 1 month.
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.system_tests_common import SystemTestsCommon
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class ERT(SystemTestsCommon): + def __init__(self, case, **kwargs): + """ + initialize an object interface to the ERT system test + """ + SystemTestsCommon.__init__(self, case, **kwargs) + + def _ert_first_phase(self): + + self._case.set_value("STOP_N", 2) + self._case.set_value("STOP_OPTION", "nmonths") + self._case.set_value("REST_N", 1) + self._case.set_value("REST_OPTION", "nmonths") + self._case.set_value("HIST_N", 1) + self._case.set_value("HIST_OPTION", "nmonths") + self._case.set_value("AVG_HIST_N", 1) + self._case.set_value("AVG_HIST_OPTION", "nmonths") + self._case.set_value("CONTINUE_RUN", False) + self._case.flush() + + logger.info("doing a 2 month initial test with restart files at 1 month") + self.run_indv() + + def _ert_second_phase(self): + + self._case.set_value("STOP_N", 1) + self._case.set_value("CONTINUE_RUN", True) + self._case.set_value("REST_OPTION", "never") + self._case.flush() + + logger.info("doing an 1 month restart test with no restart files") + self.run_indv(suffix="rest") + # Compare restart file + self._component_compare_test("base", "rest") + +
+[docs] + def run_phase(self): + self._ert_first_phase() + self._ert_second_phase()
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/funit.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/funit.html new file mode 100644 index 00000000000..a5d985ad36e --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/funit.html @@ -0,0 +1,207 @@ + + + + + + CIME.SystemTests.funit — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.funit

+"""
+CIME FUNIT test. This class inherits from SystemTestsCommon. It runs
+the fortran unit tests; grid and compset are ignored.
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.system_tests_common import SystemTestsCommon
+from CIME.build import post_build
+from CIME.utils import append_testlog, get_cime_root
+from CIME.test_status import *
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class FUNIT(SystemTestsCommon): + def __init__(self, case, **kwargs): + """ + initialize an object interface to the FUNIT system test + """ + SystemTestsCommon.__init__(self, case, **kwargs) + case.load_env() + +
+[docs] + def build_phase(self, sharedlib_only=False, model_only=False): + if not sharedlib_only: + exeroot = self._case.get_value("EXEROOT") + logfile = os.path.join(exeroot, "funit.bldlog") + with open(logfile, "w") as fd: + fd.write("No-op\n") + + post_build(self._case, [logfile], build_complete=True)
+ + +
+[docs] + def get_test_spec_dir(self): + """ + Override this to change what gets tested. + """ + return get_cime_root()
+ + +
+[docs] + def run_phase(self): + + rundir = self._case.get_value("RUNDIR") + exeroot = self._case.get_value("EXEROOT") + mach = self._case.get_value("MACH") + + log = os.path.join(rundir, "funit.log") + if os.path.exists(log): + os.remove(log) + + test_spec_dir = self.get_test_spec_dir() + unit_test_tool = os.path.abspath( + os.path.join( + get_cime_root(), "scripts", "fortran_unit_testing", "run_tests.py" + ) + ) + args = "--build-dir {} --test-spec-dir {} --machine {}".format( + exeroot, test_spec_dir, mach + ) + + stat = run_cmd( + "{} {} >& funit.log".format(unit_test_tool, args), from_dir=rundir + )[0] + + append_testlog(open(os.path.join(rundir, "funit.log"), "r").read()) + + expect(stat == 0, "RUN FAIL for FUNIT")
+ + + # Funit is a bit of an oddball test since it's not really running the E3SM model + # We need to override some methods to make the core infrastructure work. + + def _generate_baseline(self): + with self._test_status: + self._test_status.set_status(GENERATE_PHASE, TEST_PASS_STATUS) + + def _compare_baseline(self): + with self._test_status: + self._test_status.set_status(BASELINE_PHASE, TEST_PASS_STATUS)
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/homme.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/homme.html new file mode 100644 index 00000000000..42d2f44d73f --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/homme.html @@ -0,0 +1,131 @@ + + + + + + CIME.SystemTests.homme — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.homme

+from CIME.SystemTests.hommebaseclass import HommeBase
+
+
+
+[docs] +class HOMME(HommeBase): + def __init__(self, case, **kwargs): + HommeBase.__init__(self, case, **kwargs) + self.cmakesuffix = ""
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/hommebaseclass.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/hommebaseclass.html new file mode 100644 index 00000000000..bf7365e670b --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/hommebaseclass.html @@ -0,0 +1,275 @@ + + + + + + CIME.SystemTests.hommebaseclass — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.SystemTests.hommebaseclass

+"""
+CIME HOMME test. This class inherits from SystemTestsCommon
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.system_tests_common import SystemTestsCommon
+from CIME.build import post_build
+from CIME.utils import append_testlog, SharedArea
+from CIME.test_status import *
+
+import shutil
+from distutils import dir_util
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class HommeBase(SystemTestsCommon): + def __init__(self, case, **kwargs): + """ + initialize an object interface to the HOMME system test + """ + SystemTestsCommon.__init__(self, case, **kwargs) + case.load_env() + self.csnd = "not defined" + self.cmakesuffix = self.csnd +
+[docs] + def build_phase(self, sharedlib_only=False, model_only=False): + if not sharedlib_only: + # Build HOMME + srcroot = self._case.get_value("SRCROOT") + mach = self._case.get_value("MACH") + procs = self._case.get_value("TOTALPES") + exeroot = self._case.get_value("EXEROOT") + baseline = self._case.get_value("BASELINE_ROOT") + basecmp = self._case.get_value("BASECMP_CASE") + compare = self._case.get_value("COMPARE_BASELINE") + gmake = self._case.get_value("GMAKE") + gmake_j = self._case.get_value("GMAKE_J") + cprnc = self._case.get_value("CCSM_CPRNC") + + if compare: + basename = basecmp + baselinedir = baseline + else: + basename = "" + baselinedir = exeroot + + expect( + self.cmakesuffix != self.csnd, + "ERROR in hommebaseclass: Must have cmakesuffix set up", + ) + + cmake_cmd = "cmake -C {0}/components/homme/cmake/machineFiles/{1}{6}.cmake -DUSE_NUM_PROCS={2} {0}/components/homme -DHOMME_BASELINE_DIR={3}/{4} -DCPRNC_DIR={5}/..".format( + srcroot, mach, procs, baselinedir, basename, cprnc, self.cmakesuffix + ) + + run_cmd_no_fail( + cmake_cmd, + arg_stdout="homme.bldlog", + combine_output=True, + from_dir=exeroot, + ) + run_cmd_no_fail( + "{} -j{} VERBOSE=1 test-execs".format(gmake, gmake_j), + arg_stdout="homme.bldlog", + combine_output=True, + from_dir=exeroot, + ) + + post_build( + self._case, [os.path.join(exeroot, "homme.bldlog")], build_complete=True + )
+ + +
+[docs] + def run_phase(self): + + rundir = self._case.get_value("RUNDIR") + exeroot = self._case.get_value("EXEROOT") + baseline = self._case.get_value("BASELINE_ROOT") + compare = self._case.get_value("COMPARE_BASELINE") + generate = self._case.get_value("GENERATE_BASELINE") + basegen = self._case.get_value("BASEGEN_CASE") + gmake = self._case.get_value("GMAKE") + + log = os.path.join(rundir, "homme.log") + if os.path.exists(log): + os.remove(log) + + if generate: + full_baseline_dir = os.path.join(baseline, basegen, "tests", "baseline") + stat = run_cmd( + "{} -j 4 baseline".format(gmake), + arg_stdout=log, + combine_output=True, + from_dir=exeroot, + )[0] + if stat == 0: + if os.path.isdir(full_baseline_dir): + shutil.rmtree(full_baseline_dir) + + with SharedArea(): + dir_util.copy_tree( + os.path.join(exeroot, "tests", "baseline"), + full_baseline_dir, + preserve_mode=False, + ) + + elif compare: + stat = run_cmd( + "{} -j 4 check".format(gmake), + arg_stdout=log, + combine_output=True, + from_dir=exeroot, + )[0] + + else: + stat = run_cmd( + "{} -j 4 baseline".format(gmake), + arg_stdout=log, + combine_output=True, + from_dir=exeroot, + )[0] + if stat == 0: + stat = run_cmd( + "{} -j 4 check".format(gmake), + arg_stdout=log, + combine_output=True, + from_dir=exeroot, + )[0] + + # Add homme.log output to TestStatus.log so that it can + # appear on the dashboard. Otherwise, the TestStatus.log + # is pretty useless for this test. + append_testlog(open(log, "r").read()) + + expect(stat == 0, "RUN FAIL for HOMME")
+ + + # Homme is a bit of an oddball test since it's not really running the E3SM model + # We need to override some methods to make the core infrastructure work. + + def _generate_baseline(self): + with self._test_status: + self._test_status.set_status(GENERATE_PHASE, TEST_PASS_STATUS) + + def _compare_baseline(self): + with self._test_status: + self._test_status.set_status(BASELINE_PHASE, TEST_PASS_STATUS)
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/hommebfb.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/hommebfb.html new file mode 100644 index 00000000000..bedab04561c --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/hommebfb.html @@ -0,0 +1,131 @@ + + + + + + CIME.SystemTests.hommebfb — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.hommebfb

+from CIME.SystemTests.hommebaseclass import HommeBase
+
+
+
+[docs] +class HOMMEBFB(HommeBase): + def __init__(self, case, **kwargs): + HommeBase.__init__(self, case, **kwargs) + self.cmakesuffix = "-bfb"
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/icp.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/icp.html new file mode 100644 index 00000000000..338bcea5668 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/icp.html @@ -0,0 +1,155 @@ + + + + + + CIME.SystemTests.icp — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.icp

+"""
+CIME ICP test. This class inherits from SystemTestsCommon.
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.system_tests_common import SystemTestsCommon
+
+
+
+[docs] +class ICP(SystemTestsCommon): + def __init__(self, case, **kwargs): + """ + initialize an object interface to file env_test.xml in the case directory + """ + SystemTestsCommon.__init__(self, case, **kwargs) + +
+[docs] + def build_phase(self, sharedlib_only=False, model_only=False): + self._case.set_value("CICE_AUTO_DECOMP", "false")
+ + +
+[docs] + def run_phase(self): + self._case.set_value("CONTINUE_RUN", False) + self._case.set_value("REST_OPTION", "none") + self._case.set_value("HIST_OPTION", "$STOP_OPTION") + self._case.set_value("HIST_N", "$STOP_N") + self._case.flush() + + self.run_indv()
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/irt.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/irt.html new file mode 100644 index 00000000000..223770151c8 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/irt.html @@ -0,0 +1,169 @@ + + + + + + CIME.SystemTests.irt — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.irt

+"""
+Implementation of the CIME IRT (Interim Restart Test).
+This tests the model's restart capability as well as the short-term archiver's interim restart capability.
+
+(1) Do a run of length N with a restart at N/2 and DOUT_S_SAVE_INTERIM_RESTART set to TRUE.
+(2) Archive the run using the short-term archive tools.
+(3) Recover the first interim restart to the case2 run directory.
+(4) Start case2 from the restart and run to the end of case1.
+(5) Compare results.
+(6) This test does not save or compare history files in baselines.
+
+"""
+
+from CIME.SystemTests.restart_tests import RestartTest
+from CIME.XML.standard_module_setup import *
+from CIME.utils import ls_sorted_by_mtime
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class IRT(RestartTest): + def __init__(self, case, **kwargs): + RestartTest.__init__( + self, + case, + separate_builds=False, + run_two_suffix="restart", + run_one_description="initial", + run_two_description="restart", + multisubmit=False, + **kwargs + ) + self._skip_pnl = False + + def _case_one_custom_postrun_action(self): + self._case.case_st_archive() + # Since preview namelist is run before _case_two_prerun_action, we need to do this here. + dout_s_root = self._case1.get_value("DOUT_S_ROOT") + restart_list = ls_sorted_by_mtime(os.path.join(dout_s_root, "rest")) + logger.info("Restart directory list is {}".format(restart_list)) + expect(len(restart_list) >= 2, "Expected at least two restart directories") + # Get the older of the two restart directories + self._case2.restore_from_archive( + rest_dir=os.path.abspath(os.path.join(dout_s_root, "rest", restart_list[0])) + )
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/ldsta.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/ldsta.html new file mode 100644 index 00000000000..d9c75c51bb5 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/ldsta.html @@ -0,0 +1,206 @@ + + + + + + CIME.SystemTests.ldsta — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.ldsta

+"""
+CIME last-date short-term archiver test. This class inherits from SystemTestsCommon.
+It does a run without restarting, then runs the archiver with various last-date parameters.
+The test verifies the archive directory contains the expected files.
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.system_tests_common import SystemTestsCommon
+from CIME.utils import expect
+from CIME.date import get_file_date
+
+import datetime
+import glob
+import os
+import random
+import shutil
+
+logger = logging.getLogger(__name__)
+
+# CIME date objects don't support the arithmetic used below; convert to datetime
+def _date_to_datetime(date_obj):
+    return datetime.datetime(
+        year=date_obj.year(),
+        month=date_obj.month(),
+        day=date_obj.day(),
+        hour=date_obj.hour(),
+        minute=date_obj.minute(),
+        second=date_obj.second(),
+    )
+
+
+
+[docs] +class LDSTA(SystemTestsCommon): + def __init__(self, case, **kwargs): + """ + initialize an object interface to the LDSTA system test + """ + SystemTestsCommon.__init__(self, case, **kwargs) +
+[docs] + def run_phase(self): + archive_dir = self._case.get_value("DOUT_S_ROOT") + if os.path.isdir(archive_dir): + shutil.rmtree(archive_dir) + self.run_indv() + # finished running, so all archive files should exist + start_date = _date_to_datetime( + get_file_date(self._case.get_value("RUN_STARTDATE")) + ) + rest_dir = os.path.join(archive_dir, "rest") + delta_day = datetime.timedelta(1) + current_date = start_date + delta_day + next_datecheck = current_date + days_left = self._case.get_value("STOP_N") + final_date = start_date + delta_day * days_left + while current_date < final_date: + logger.info("Testing archiving with last date: {}".format(current_date)) + current_date_str = "{:04}-{:02}-{:02}".format( + current_date.year, current_date.month, current_date.day + ) + self._case.case_st_archive(last_date_str=current_date_str, copy_only=False) + archive_dates = [ + _date_to_datetime(get_file_date(fname)) + for fname in glob.glob(os.path.join(rest_dir, "*")) + ] + while next_datecheck <= current_date: + expect( + next_datecheck in archive_dates, + "Not all dates generated and/or archived: " + + "{} is missing".format(next_datecheck), + ) + next_datecheck += delta_day + for date in archive_dates: + expect( + date <= current_date, + "Archived date greater than specified by last-date: " + + "{}".format(date), + ) + num_days = random.randint(1, min(3, days_left)) + days_left -= num_days + current_date += num_days * delta_day
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/mcc.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/mcc.html new file mode 100644 index 00000000000..42eb66baa37 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/mcc.html @@ -0,0 +1,158 @@ + + + + + + CIME.SystemTests.mcc — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.mcc

+"""
+Implementation of the CIME MCC test: compares ensemble methods.
+
+This does two runs: in the first, we run a three-member ensemble using the
+ MULTI_DRIVER capability; then we run a second single-instance case and compare.
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class MCC(SystemTestsCompareTwo): + def __init__(self, case, **kwargs): + self._comp_classes = [] + self._test_instances = 3 + SystemTestsCompareTwo.__init__( + self, + case, + separate_builds=True, + run_two_suffix="single_instance", + run_two_description="single instance", + run_one_description="multi driver", + **kwargs + ) + + def _case_one_setup(self): + # The multicoupler case will increase the number of tasks by the + # number of requested couplers. + self._case.set_value("MULTI_DRIVER", True) + self._case.set_value("NINST", self._test_instances) + + def _case_two_setup(self): + self._case.set_value("NINST", 1)
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/mvk.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/mvk.html new file mode 100644 index 00000000000..45337fed944 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/mvk.html @@ -0,0 +1,333 @@ + + + + + + CIME.SystemTests.mvk — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.mvk

+"""
+Multivariate test for climate reproducibility using the Kolmogorov-Smirnov (K-S)
+test. The CESM/E3SM model's multi-instance capability is used to
+conduct an ensemble of simulations starting from different initial conditions.
+
+This class inherits from SystemTestsCommon.
+"""
+
+import os
+import json
+import logging
+
+from distutils import dir_util
+
+import CIME.test_status
+import CIME.utils
+from CIME.SystemTests.system_tests_common import SystemTestsCommon
+from CIME.case.case_setup import case_setup
+from CIME.XML.machines import Machines
+
+
+import evv4esm  # pylint: disable=import-error
+from evv4esm.__main__ import main as evv  # pylint: disable=import-error
+
+evv_lib_dir = os.path.abspath(os.path.dirname(evv4esm.__file__))
+logger = logging.getLogger(__name__)
+NINST = 30
+
+
+
+[docs] +class MVK(SystemTestsCommon): + def __init__(self, case, **kwargs): + """ + initialize an object interface to the MVK test + """ + SystemTestsCommon.__init__(self, case, **kwargs) + + if self._case.get_value("MODEL") == "e3sm": + self.component = "eam" + else: + self.component = "cam" + + if ( + self._case.get_value("RESUBMIT") == 0 + and self._case.get_value("GENERATE_BASELINE") is False + ): + self._case.set_value("COMPARE_BASELINE", True) + else: + self._case.set_value("COMPARE_BASELINE", False) + +
+[docs] + def build_phase(self, sharedlib_only=False, model_only=False): + # Only want this to happen once. It will impact the sharedlib build + # so it has to happen there. + if not model_only: + logging.warning("Starting to build multi-instance exe") + for comp in self._case.get_values("COMP_CLASSES"): + self._case.set_value("NTHRDS_{}".format(comp), 1) + + ntasks = self._case.get_value("NTASKS_{}".format(comp)) + + self._case.set_value("NTASKS_{}".format(comp), ntasks * NINST) + if comp != "CPL": + self._case.set_value("NINST_{}".format(comp), NINST) + + self._case.flush() + + case_setup(self._case, test_mode=False, reset=True) + + for iinst in range(1, NINST + 1): + with open( + "user_nl_{}_{:04d}".format(self.component, iinst), "w" + ) as nl_atm_file: + nl_atm_file.write("new_random = .true.\n") + nl_atm_file.write("pertlim = 1.0e-10\n") + nl_atm_file.write("seed_custom = {}\n".format(iinst)) + nl_atm_file.write("seed_clock = .true.\n") + + self.build_indv(sharedlib_only=sharedlib_only, model_only=model_only)
+ + + def _generate_baseline(self): + """ + generate a new baseline case based on the current test + """ + super(MVK, self)._generate_baseline() + + with CIME.utils.SharedArea(): + basegen_dir = os.path.join( + self._case.get_value("BASELINE_ROOT"), + self._case.get_value("BASEGEN_CASE"), + ) + + rundir = self._case.get_value("RUNDIR") + ref_case = self._case.get_value("RUN_REFCASE") + + env_archive = self._case.get_env("archive") + hists = env_archive.get_all_hist_files( + self._case.get_value("CASE"), self.component, rundir, ref_case=ref_case + ) + logger.debug("MVK additional baseline files: {}".format(hists)) + hists = [os.path.join(rundir, hist) for hist in hists] + for hist in hists: + basename = hist[hist.rfind(self.component) :] + baseline = os.path.join(basegen_dir, basename) + if os.path.exists(baseline): + os.remove(baseline) + + CIME.utils.safe_copy(hist, baseline, preserve_meta=False) + + def _compare_baseline(self): + with self._test_status: + if int(self._case.get_value("RESUBMIT")) > 0: + # This is here because the comparison is run for each submission + # and we only want to compare once the whole run is finished. We + # need to return a pass here to continue the submission process. 
+ self._test_status.set_status( + CIME.test_status.BASELINE_PHASE, CIME.test_status.TEST_PASS_STATUS + ) + return + + self._test_status.set_status( + CIME.test_status.BASELINE_PHASE, CIME.test_status.TEST_FAIL_STATUS + ) + + run_dir = self._case.get_value("RUNDIR") + case_name = self._case.get_value("CASE") + base_dir = os.path.join( + self._case.get_value("BASELINE_ROOT"), + self._case.get_value("BASECMP_CASE"), + ) + + test_name = "{}".format(case_name.split(".")[-1]) + evv_config = { + test_name: { + "module": os.path.join(evv_lib_dir, "extensions", "ks.py"), + "test-case": "Test", + "test-dir": run_dir, + "ref-case": "Baseline", + "ref-dir": base_dir, + "var-set": "default", + "ninst": NINST, + "critical": 13, + "component": self.component, + } + } + + json_file = os.path.join(run_dir, ".".join([case_name, "json"])) + with open(json_file, "w") as config_file: + json.dump(evv_config, config_file, indent=4) + + evv_out_dir = os.path.join(run_dir, ".".join([case_name, "evv"])) + evv(["-e", json_file, "-o", evv_out_dir]) + + with open(os.path.join(evv_out_dir, "index.json")) as evv_f: + evv_status = json.load(evv_f) + + comments = "" + for evv_ele in evv_status["Page"]["elements"]: + if "Table" in evv_ele: + comments = "; ".join( + "{}: {}".format(key, val[0]) + for key, val in evv_ele["Table"]["data"].items() + ) + if evv_ele["Table"]["data"]["Test status"][0].lower() == "pass": + self._test_status.set_status( + CIME.test_status.BASELINE_PHASE, + CIME.test_status.TEST_PASS_STATUS, + ) + break + + status = self._test_status.get_status(CIME.test_status.BASELINE_PHASE) + mach_name = self._case.get_value("MACH") + mach_obj = Machines(machine=mach_name) + htmlroot = CIME.utils.get_htmlroot(mach_obj) + urlroot = CIME.utils.get_urlroot(mach_obj) + if htmlroot is not None: + with CIME.utils.SharedArea(): + dir_util.copy_tree( + evv_out_dir, + os.path.join(htmlroot, "evv", case_name), + preserve_mode=False, + ) + if urlroot is None: + urlroot = 
"[{}_URL]".format(mach_name.capitalize()) + viewing = "{}/evv/{}/index.html".format(urlroot, case_name) + else: + viewing = ( + "{}\n" + " EVV viewing instructions can be found at: " + " https://github.com/E3SM-Project/E3SM/blob/master/cime/scripts/" + "climate_reproducibility/README.md#test-passfail-and-extended-output" + "".format(evv_out_dir) + ) + + comments = ( + "{} {} for test '{}'.\n" + " {}\n" + " EVV results can be viewed at:\n" + " {}".format( + CIME.test_status.BASELINE_PHASE, + status, + test_name, + comments, + viewing, + ) + ) + + CIME.utils.append_testlog(comments, self._orig_caseroot)
+ +
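MVK's `build_phase` above writes one `user_nl` file per ensemble member, differing only in the random seed. A small sketch of that namelist text (`mvk_namelist` is a hypothetical helper, not part of CIME; it mirrors the write loop in `build_phase`):

```python
def mvk_namelist(iinst):
    # Per-instance user_nl content: every member enables the random
    # perturbation with the same tiny pertlim but a distinct seed,
    # so the NINST ensemble members start from different states.
    return (
        "new_random = .true.\n"
        "pertlim = 1.0e-10\n"
        "seed_custom = {}\n"
        "seed_clock = .true.\n".format(iinst)
    )
```

For instance 3 this yields a four-line namelist fragment with `seed_custom = 3`; distinct instances always get distinct fragments.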
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/nck.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/nck.html new file mode 100644 index 00000000000..1c4bb9ebd90 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/nck.html @@ -0,0 +1,188 @@ + + + + + + CIME.SystemTests.nck — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.nck

+"""
+Implementation of the CIME NCK test: Tests multi-instance
+
+This does two runs: In the first, we use one instance per component; in the
+second, we use two instances per component. NTASKS are changed in each run so
+that the number of tasks per instance is the same for both runs.
+
+Lay all of the components out sequentially
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class NCK(SystemTestsCompareTwo): + def __init__(self, case, **kwargs): + self._comp_classes = [] + SystemTestsCompareTwo.__init__( + self, + case, + separate_builds=True, + run_two_suffix="multiinst", + run_one_description="one instance", + run_two_description="two instances", + **kwargs, + ) + + def _common_setup(self): + # We start by halving the number of tasks for both cases. This ensures + # that we use the same number of tasks per instance in both cases: For + # the two-instance case, we'll double this halved number, so you may + # think that the halving was unnecessary; but it's needed in case the + # original NTASKS was odd. (e.g., for NTASKS originally 15, we want to + # use NTASKS = int(15/2) * 2 = 14 tasks for case two.) + self._comp_classes = self._case.get_values("COMP_CLASSES") + self._comp_classes.remove("CPL") + for comp in self._comp_classes: + ntasks = self._case.get_value("NTASKS_{}".format(comp)) + if ntasks > 1: + self._case.set_value("NTASKS_{}".format(comp), int(ntasks / 2)) + # the following assures that both cases use the same number of total tasks + rootpe = self._case.get_value("ROOTPE_{}".format(comp)) + if rootpe > 1: + self._case.set_value("ROOTPE_{}".format(comp), int(rootpe + ntasks / 2)) + + def _case_one_setup(self): + for comp in self._comp_classes: + self._case.set_value("NINST_{}".format(comp), 1) + + def _case_two_setup(self): + for comp in self._comp_classes: + if comp == "ESP": + self._case.set_value("NINST_{}".format(comp), 1) + else: + self._case.set_value("NINST_{}".format(comp), 2) + + ntasks = self._case.get_value("NTASKS_{}".format(comp)) + rootpe = self._case.get_value("ROOTPE_{}".format(comp)) + if rootpe > 1: + self._case.set_value("ROOTPE_{}".format(comp), int(rootpe - ntasks)) + self._case.set_value("NTASKS_{}".format(comp), ntasks * 2) + self._case.case_setup(test_mode=True, reset=True)
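The halving logic in `_common_setup` above can be illustrated in isolation. This is a standalone sketch, not CIME code; `per_instance_tasks` is a hypothetical helper:

```python
# Standalone sketch of the task arithmetic used by the NCK test: halve
# NTASKS once up front so that the one-instance case and the two-instance
# case (which doubles the halved count) use the same number of tasks per
# instance, even when the original NTASKS is odd.
def per_instance_tasks(ntasks):
    return ntasks // 2 if ntasks > 1 else ntasks

# Original NTASKS of 15: both cases run 7 tasks per instance, and the
# two-instance case uses 14 tasks in total.
print(per_instance_tasks(15))      # per-instance count
print(per_instance_tasks(15) * 2)  # total tasks in the two-instance case
```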
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/ncr.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/ncr.html new file mode 100644 index 00000000000..5107250dbec --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/ncr.html @@ -0,0 +1,193 @@ + + + + + + CIME.SystemTests.ncr — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.ncr

+"""
+Implementation of the CIME NCR test. This class inherits from SystemTestsCompareTwo.
+
+Build two executables for this test:
+The first runs two instances of each component with the same total number of tasks,
+and runs them concurrently.
+The second is a default build.
+
+NOTE: This is currently untested, and may not be working properly
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class NCR(SystemTestsCompareTwo): + def __init__(self, case, **kwargs): + """ + initialize an NCR test + """ + SystemTestsCompareTwo.__init__( + self, + case, + separate_builds=True, + run_two_suffix="singleinst", + run_one_description="two instances, each with the same number of tasks", + run_two_description="default build", + **kwargs + ) + + def _comp_classes(self): + # Return the components which we need to set things for + # ESP cannot have more than one instance, so don't set anything for it + comp_classes = self._case.get_values("COMP_CLASSES") + if "CPL" in comp_classes: + comp_classes.remove("CPL") + if "ESP" in comp_classes: + comp_classes.remove("ESP") + return comp_classes + + def _common_setup(self): + # Set the default number of tasks + for comp in self._comp_classes(): + ntasks = self._case.get_value("NTASKS_{}".format(comp)) + if ntasks > 1: + self._case.set_value("NTASKS_{}".format(comp), ntasks // 2) + + def _case_one_setup(self): + # Set the number of instances, the ROOTPEs, and the number of tasks + # This case should have twice the number of instances and half the number of tasks + # All tasks should be running concurrently + # Note that this case must be the multiinstance one + # to correctly set the required number of nodes and avoid crashing + ntasks_sum = 0 + + for comp in self._comp_classes(): + self._case.set_value("NINST_{}".format(comp), str(2)) + self._case.set_value("ROOTPE_{}".format(comp), ntasks_sum) + ntasks = self._case.get_value("NTASKS_{}".format(comp)) * 2 + ntasks_sum += ntasks + self._case.set_value("NTASKS_{}".format(comp), ntasks) + # test_mode must be False here so the case.test file is updated + # This ensures that the correct number of nodes are used in case it's larger than in case 2 + + def _case_two_setup(self): + for comp in self._comp_classes(): + self._case.set_value("NINST_{}".format(comp), str(1)) + self._case.set_value("ROOTPE_{}".format(comp), 0)
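The sequential `ROOTPE` layout computed in `_case_one_setup` can be sketched on its own. This is an illustrative standalone helper (`layout_rootpes` and the task counts are hypothetical, not part of CIME): each component's root PE starts where the previous component's tasks end, so all instances run concurrently.

```python
# Sketch of a sequential PE layout: ROOTPE_<comp> is the running total of
# the task counts of all components laid out before it.
def layout_rootpes(ntasks_by_comp):
    rootpes = {}
    total = 0
    for comp, ntasks in ntasks_by_comp.items():
        rootpes[comp] = total
        total += ntasks
    return rootpes, total

# Illustrative counts only.
rootpes, total = layout_rootpes({"ATM": 8, "LND": 4, "ICE": 4})
print(rootpes)  # ATM starts at 0, LND at 8, ICE at 12
print(total)    # 16 tasks in total
```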
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/nodefail.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/nodefail.html new file mode 100644 index 00000000000..5f38922db24 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/nodefail.html @@ -0,0 +1,209 @@ + + + + + + CIME.SystemTests.nodefail — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.nodefail

+"""
+CIME test of restart upon a failed node.
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.ers import ERS
+from CIME.utils import get_model
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class NODEFAIL(ERS): + def __init__(self, case, **kwargs): + """ + initialize an object interface to the ERS system test + """ + ERS.__init__(self, case, **kwargs) + + self._fail_sentinel = os.path.join(case.get_value("RUNDIR"), "FAIL_SENTINEL") + self._fail_str = case.get_value("NODE_FAIL_REGEX") + + def _restart_fake_phase(self): + # Swap out model.exe for one that emits node failures + rundir = self._case.get_value("RUNDIR") + exeroot = self._case.get_value("EXEROOT") + driver = self._case.get_value("COMP_INTERFACE") + if driver == "nuopc": + logname = "drv" + else: + logname = "cpl" + fake_exe = """#!/bin/bash + +fail_sentinel={0} +cpl_log={1}/{4}.log.$LID +model_log={1}/{2}.log.$LID +touch $cpl_log +touch $fail_sentinel +declare -i num_fails=$(cat $fail_sentinel | wc -l) +declare -i times_to_fail=${{NODEFAIL_NUM_FAILS:-3}} + +if ((num_fails < times_to_fail)); then + echo FAKE FAIL >> $cpl_log + echo FAIL >> $fail_sentinel + echo '{3}' >> $model_log + sleep 1 + exit -1 +else + echo Insta pass + echo SUCCESSFUL TERMINATION > $cpl_log +fi +""".format( + self._fail_sentinel, rundir, get_model(), self._fail_str, logname + ) + + fake_exe_file = os.path.join(exeroot, "fake.sh") + with open(fake_exe_file, "w") as fd: + fd.write(fake_exe) + + os.chmod(fake_exe_file, 0o755) + + prev_run_exe = self._case.get_value("run_exe") + env_mach_specific = self._case.get_env("mach_specific") + env_mach_specific.set_value("run_exe", fake_exe_file) + self._case.flush(flushall=True) + + # This flag is needed by mpt to run a script under mpiexec + mpilib = self._case.get_value("MPILIB") + if mpilib == "mpt": + os.environ["MPI_SHEPHERD"] = "true" + + self.run_indv(suffix=None) + + if mpilib == "mpt": + del os.environ["MPI_SHEPHERD"] + + env_mach_specific = self._case.get_env("mach_specific") + env_mach_specific.set_value("run_exe", prev_run_exe) + self._case.flush(flushall=True) + +
+[docs] + def run_phase(self): + self._ers_first_phase() + self._restart_fake_phase() + self._ers_second_phase()
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/pea.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/pea.html new file mode 100644 index 00000000000..e25fc01c7d8 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/pea.html @@ -0,0 +1,175 @@ + + + + + + CIME.SystemTests.pea — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.pea

+"""
+Implementation of the CIME PEA test.
+
+Builds, runs, and compares a single-processor MPI model to a model built using mpi-serial:
+(1) do a run with default mpi library (suffix base)
+(2) do a run with mpi-serial (suffix mpi-serial)
+"""
+
+from CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo
+from CIME.XML.standard_module_setup import *
+from CIME.XML.machines import Machines
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class PEA(SystemTestsCompareTwo): + def __init__(self, case, **kwargs): + SystemTestsCompareTwo.__init__( + self, + case, + separate_builds=True, + run_two_suffix="mpi-serial", + run_one_description="default mpi library", + run_two_description="mpi-serial", + **kwargs, + ) + + def _common_setup(self): + for comp in self._case.get_values("COMP_CLASSES"): + self._case.set_value("NTASKS_{}".format(comp), 1) + self._case.set_value("NTHRDS_{}".format(comp), 1) + self._case.set_value("ROOTPE_{}".format(comp), 0) + + def _case_one_setup(self): + pass + + def _case_two_setup(self): + mach_name = self._case.get_value("MACH") + mach_obj = Machines(machine=mach_name) + if mach_obj.is_valid_MPIlib("mpi-serial"): + self._case.set_value("MPILIB", "mpi-serial") + else: + logger.warning( + "mpi-serial is not supported on machine '{}', " + "so we have to fall back to default MPI and " + "therefore very little is being tested".format(mach_name) + ) + + if os.path.isfile("Macros"): + os.remove("Macros") + self._case.case_setup(test_mode=True, reset=True)
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/pem.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/pem.html new file mode 100644 index 00000000000..9cf57659606 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/pem.html @@ -0,0 +1,169 @@ + + + + + + CIME.SystemTests.pem — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.pem

+"""
+Implementation of the CIME PEM test: Tests that results are bit-for-bit (bfb)
+identical across different MPI processor counts.
+
+This is just like running a smoke test twice - but the pe-counts
+are modified the second time.
+(1) Run with pes set up out of the box (suffix base)
+(2) Run with half the number of tasks (suffix modpes)
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class PEM(SystemTestsCompareTwo): + def __init__(self, case, **kwargs): + build_separately = False + # cice, pop require separate builds + comps = case.get_compset_components() + if "cice" in comps or "pop" in comps: + build_separately = True + + SystemTestsCompareTwo.__init__( + self, + case, + separate_builds=build_separately, + run_two_suffix="modpes", + run_one_description="default pe counts", + run_two_description="halved pe counts", + **kwargs + ) + + def _case_one_setup(self): + pass + + def _case_two_setup(self): + for comp in self._case.get_values("COMP_CLASSES"): + ntasks = self._case.get_value("NTASKS_{}".format(comp)) + rootpe = self._case1.get_value("ROOTPE_{}".format(comp)) + if ntasks > 1: + self._case.set_value("NTASKS_{}".format(comp), int(ntasks / 2)) + self._case.set_value("ROOTPE_{}".format(comp), int(rootpe / 2)) + self._case.case_setup(test_mode=True, reset=True)
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/pet.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/pet.html new file mode 100644 index 00000000000..13b395793ad --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/pet.html @@ -0,0 +1,169 @@ + + + + + + CIME.SystemTests.pet — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.pet

+"""
+Implementation of the CIME PET test. This class inherits from SystemTestsCompareTwo.
+
+This is an OpenMP test to verify that changing thread counts does not change answers.
+(1) do an initial run where all components are threaded by default (suffix: base)
+(2) do another initial run with nthrds=1 for all components (suffix: single_thread)
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class PET(SystemTestsCompareTwo): + def __init__(self, case, **kwargs): + """ + initialize a test object + """ + SystemTestsCompareTwo.__init__( + self, + case, + separate_builds=False, + multisubmit=True, + run_two_suffix="single_thread", + run_one_description="default threading", + run_two_description="threads set to 1", + **kwargs + ) + + def _case_one_setup(self): + # first make sure that all components have threaded settings + for comp in self._case.get_values("COMP_CLASSES"): + if self._case.get_value("NTHRDS_{}".format(comp)) <= 1: + self._case.set_value("NTHRDS_{}".format(comp), 2) + + # Need to redo case_setup because we may have changed the number of threads + + def _case_two_setup(self): + # Do a run with all threads set to 1 + for comp in self._case.get_values("COMP_CLASSES"): + self._case.set_value("NTHRDS_{}".format(comp), 1) + + # Need to redo case_setup because we may have changed the number of threads + self._case.case_setup(reset=True, test_mode=True)
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/pfs.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/pfs.html new file mode 100644 index 00000000000..102d9205db0 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/pfs.html @@ -0,0 +1,149 @@ + + + + + + CIME.SystemTests.pfs — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.pfs

+"""
+CIME performance test. This class inherits from SystemTestsCommon.
+
+A 20-day performance test; no restart files are written.
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.system_tests_common import SystemTestsCommon
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class PFS(SystemTestsCommon): + def __init__(self, case, **kwargs): + """ + initialize an object interface to the PFS system test + """ + SystemTestsCommon.__init__(self, case, **kwargs) + +
+[docs] + def run_phase(self): + logger.info("doing a 20 day initial test, no restarts written") + self.run_indv(suffix=None)
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/pgn.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/pgn.html new file mode 100644 index 00000000000..19c65656c53 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/pgn.html @@ -0,0 +1,485 @@ + + + + + + CIME.SystemTests.pgn — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.pgn

+"""
+Perturbation Growth New (PGN) - The CESM/ACME model's
+multi-instance capability is used to conduct an ensemble
+of simulations starting from different initial conditions.
+
+This class inherits from SystemTestsCommon.
+
+"""
+
+from __future__ import division
+
+import os
+import re
+import json
+import shutil
+import logging
+
+from collections import OrderedDict
+from distutils import dir_util
+
+import pandas as pd
+import numpy as np
+
+
+import CIME.test_status
+import CIME.utils
+from CIME.SystemTests.system_tests_common import SystemTestsCommon
+from CIME.case.case_setup import case_setup
+from CIME.XML.machines import Machines
+
+import evv4esm  # pylint: disable=import-error
+from evv4esm.extensions import pg  # pylint: disable=import-error
+from evv4esm.__main__ import main as evv  # pylint: disable=import-error
+
+evv_lib_dir = os.path.abspath(os.path.dirname(evv4esm.__file__))
+
+logger = logging.getLogger(__name__)
+
+NUMBER_INITIAL_CONDITIONS = 6
+PERTURBATIONS = OrderedDict(
+    [
+        ("woprt", 0.0),
+        ("posprt", 1.0e-14),
+        ("negprt", -1.0e-14),
+    ]
+)
+FCLD_NC = "cam.h0.cloud.nc"
+INIT_COND_FILE_TEMPLATE = "20210915.v2.ne4_oQU240.F2010.{}.{}.0002-{:02d}-01-00000.nc"
+INSTANCE_FILE_TEMPLATE = "{}{}_{:04d}.h0.0001-01-01-00000{}.nc"
+
+
+
+[docs] +class PGN(SystemTestsCommon): + def __init__(self, case, **kwargs): + """ + initialize an object interface to the PGN test + """ + super(PGN, self).__init__(case, **kwargs) + if self._case.get_value("MODEL") == "e3sm": + self.atmmod = "eam" + self.lndmod = "elm" + self.atmmodIC = "eam" + self.lndmodIC = "elm" + else: + self.atmmod = "cam" + self.lndmod = "clm" + self.atmmodIC = "cam" + self.lndmodIC = "clm2" + +
+[docs] + def build_phase(self, sharedlib_only=False, model_only=False): + ninst = NUMBER_INITIAL_CONDITIONS * len(PERTURBATIONS) + logger.debug("PGN_INFO: number of instance: " + str(ninst)) + + default_ninst = self._case.get_value("NINST_ATM") + + if default_ninst == 1: # if multi-instance is not already set + # Only want this to happen once. It will impact the sharedlib build + # so it has to happen here. + if not model_only: + # Lay all of the components out concurrently + logger.debug( + "PGN_INFO: Updating NINST for multi-instance in " "env_mach_pes.xml" + ) + for comp in ["ATM", "OCN", "WAV", "GLC", "ICE", "ROF", "LND"]: + ntasks = self._case.get_value("NTASKS_{}".format(comp)) + self._case.set_value("ROOTPE_{}".format(comp), 0) + self._case.set_value("NINST_{}".format(comp), ninst) + self._case.set_value("NTASKS_{}".format(comp), ntasks * ninst) + + self._case.set_value("ROOTPE_CPL", 0) + self._case.set_value("NTASKS_CPL", ntasks * ninst) + self._case.flush() + + case_setup(self._case, test_mode=False, reset=True) + + logger.debug("PGN_INFO: Updating user_nl_* files") + + csmdata_root = self._case.get_value("DIN_LOC_ROOT") + csmdata_atm = os.path.join(csmdata_root, "atm/cam/inic/homme/ne4_v2_init") + csmdata_lnd = os.path.join(csmdata_root, "lnd/clm2/initdata/ne4_oQU240_v2_init") + + iinst = 1 + for icond in range(1, NUMBER_INITIAL_CONDITIONS + 1): + fatm_in = os.path.join( + csmdata_atm, INIT_COND_FILE_TEMPLATE.format(self.atmmodIC, "i", icond) + ) + flnd_in = os.path.join( + csmdata_lnd, INIT_COND_FILE_TEMPLATE.format(self.lndmodIC, "r", icond) + ) + for iprt in PERTURBATIONS.values(): + with open( + "user_nl_{}_{:04d}".format(self.atmmod, iinst), "w" + ) as atmnlfile, open( + "user_nl_{}_{:04d}".format(self.lndmod, iinst), "w" + ) as lndnlfile: + + atmnlfile.write("ncdata = '{}' \n".format(fatm_in)) + lndnlfile.write("finidat = '{}' \n".format(flnd_in)) + + atmnlfile.write("avgflag_pertape = 'I' \n") + atmnlfile.write("nhtfrq = 1 \n") + 
atmnlfile.write("mfilt = 2 \n") + atmnlfile.write("ndens = 1 \n") + atmnlfile.write("pergro_mods = .true. \n") + atmnlfile.write("pergro_test_active = .true. \n") + + if iprt != 0.0: + atmnlfile.write("pertlim = {} \n".format(iprt)) + + iinst += 1 + + self._case.set_value("STOP_N", "1") + self._case.set_value("STOP_OPTION", "nsteps") + self.build_indv(sharedlib_only=sharedlib_only, model_only=model_only)
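A quick standalone check of the instance count that `build_phase` configures, assuming the module-level constants shown above (one model instance per initial-condition/perturbation pair):

```python
from collections import OrderedDict

# Mirrors the module-level constants of the PGN test shown above.
PERTURBATIONS = OrderedDict(
    [("woprt", 0.0), ("posprt", 1.0e-14), ("negprt", -1.0e-14)]
)
NUMBER_INITIAL_CONDITIONS = 6

# One instance per (initial condition, perturbation) pair; NTASKS of each
# component is then multiplied by this count.
ninst = NUMBER_INITIAL_CONDITIONS * len(PERTURBATIONS)
print(ninst)
```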
+ + +
+[docs] + def get_var_list(self): + """ + Get variable list for pergro specific output vars + """ + rundir = self._case.get_value("RUNDIR") + prg_fname = "pergro_ptend_names.txt" + var_file = os.path.join(rundir, prg_fname) + CIME.utils.expect( + os.path.isfile(var_file), + "File {} does not exist in: {}".format(prg_fname, rundir), + ) + + with open(var_file, "r") as fvar: + var_list = fvar.readlines() + + return list(map(str.strip, var_list))
+ + + def _compare_baseline(self): + """ + Compare baselines in the pergro test sense. That is, + compare PGE from the test simulation with the baseline + cloud + """ + with self._test_status: + self._test_status.set_status( + CIME.test_status.BASELINE_PHASE, CIME.test_status.TEST_FAIL_STATUS + ) + + logger.debug("PGN_INFO:BASELINE COMPARISON STARTS") + + run_dir = self._case.get_value("RUNDIR") + case_name = self._case.get_value("CASE") + base_dir = os.path.join( + self._case.get_value("BASELINE_ROOT"), + self._case.get_value("BASECMP_CASE"), + ) + + var_list = self.get_var_list() + + test_name = "{}".format(case_name.split(".")[-1]) + evv_config = { + test_name: { + "module": os.path.join(evv_lib_dir, "extensions", "pg.py"), + "test-case": case_name, + "test-name": "Test", + "test-dir": run_dir, + "ref-name": "Baseline", + "ref-dir": base_dir, + "variables": var_list, + "perturbations": PERTURBATIONS, + "pge-cld": FCLD_NC, + "ninit": NUMBER_INITIAL_CONDITIONS, + "init-file-template": INIT_COND_FILE_TEMPLATE, + "instance-file-template": INSTANCE_FILE_TEMPLATE, + "init-model": "cam", + "component": self.atmmod, + } + } + + json_file = os.path.join(run_dir, ".".join([case_name, "json"])) + with open(json_file, "w") as config_file: + json.dump(evv_config, config_file, indent=4) + + evv_out_dir = os.path.join(run_dir, ".".join([case_name, "evv"])) + evv(["-e", json_file, "-o", evv_out_dir]) + + with open(os.path.join(evv_out_dir, "index.json"), "r") as evv_f: + evv_status = json.load(evv_f) + + comments = "" + for evv_ele in evv_status["Page"]["elements"]: + if "Table" in evv_ele: + comments = "; ".join( + "{}: {}".format(key, val[0]) + for key, val in evv_ele["Table"]["data"].items() + ) + if evv_ele["Table"]["data"]["Test status"][0].lower() == "pass": + self._test_status.set_status( + CIME.test_status.BASELINE_PHASE, + CIME.test_status.TEST_PASS_STATUS, + ) + break + + status = self._test_status.get_status(CIME.test_status.BASELINE_PHASE) + mach_name = 
self._case.get_value("MACH") + mach_obj = Machines(machine=mach_name) + htmlroot = CIME.utils.get_htmlroot(mach_obj) + urlroot = CIME.utils.get_urlroot(mach_obj) + if htmlroot is not None: + with CIME.utils.SharedArea(): + dir_util.copy_tree( + evv_out_dir, + os.path.join(htmlroot, "evv", case_name), + preserve_mode=False, + ) + if urlroot is None: + urlroot = "[{}_URL]".format(mach_name.capitalize()) + viewing = "{}/evv/{}/index.html".format(urlroot, case_name) + else: + viewing = ( + "{}\n" + " EVV viewing instructions can be found at: " + " https://github.com/E3SM-Project/E3SM/blob/master/cime/scripts/" + "climate_reproducibility/README.md#test-passfail-and-extended-output" + "".format(evv_out_dir) + ) + + comments = ( + "{} {} for test '{}'.\n" + " {}\n" + " EVV results can be viewed at:\n" + " {}".format( + CIME.test_status.BASELINE_PHASE, + status, + test_name, + comments, + viewing, + ) + ) + + CIME.utils.append_testlog(comments, self._orig_caseroot) + +
+[docs] + def run_phase(self): + logger.debug("PGN_INFO: RUN PHASE") + + self.run_indv() + + # Here were are in case directory, we need to go to the run directory + # and rename files + rundir = self._case.get_value("RUNDIR") + casename = self._case.get_value("CASE") + logger.debug("PGN_INFO: Case name is:{}".format(casename)) + + for icond in range(NUMBER_INITIAL_CONDITIONS): + for iprt, ( + prt_name, + prt_value, # pylint: disable=unused-variable + ) in enumerate(PERTURBATIONS.items()): + iinst = pg._sub2instance(icond, iprt, len(PERTURBATIONS)) + fname = os.path.join( + rundir, + INSTANCE_FILE_TEMPLATE.format( + casename + ".", self.atmmod, iinst, "" + ), + ) + renamed_fname = re.sub(r"\.nc$", "_{}.nc".format(prt_name), fname) + + logger.debug("PGN_INFO: fname to rename:{}".format(fname)) + logger.debug("PGN_INFO: Renamed file:{}".format(renamed_fname)) + try: + shutil.move(fname, renamed_fname) + except IOError: + CIME.utils.expect( + os.path.isfile(renamed_fname), + "ERROR: File {} does not exist".format(renamed_fname), + ) + logger.debug( + "PGN_INFO: Renamed file already exists:" + "{}".format(renamed_fname) + ) + + logger.debug("PGN_INFO: RUN PHASE ENDS")
+ + + def _generate_baseline(self): + super(PGN, self)._generate_baseline() + + basegen_dir = os.path.join( + self._case.get_value("BASELINE_ROOT"), self._case.get_value("BASEGEN_CASE") + ) + + rundir = self._case.get_value("RUNDIR") + casename = self._case.get_value("CASE") + + var_list = self.get_var_list() + nvar = len(var_list) + nprt = len(PERTURBATIONS) + rmse_prototype = {} + for icond in range(NUMBER_INITIAL_CONDITIONS): + prt_rmse = {} + for iprt, prt_name in enumerate(PERTURBATIONS): + if prt_name == "woprt": + continue + iinst_ctrl = pg._sub2instance(icond, 0, nprt) + ifile_ctrl = os.path.join( + rundir, + INSTANCE_FILE_TEMPLATE.format( + casename + ".", self.atmmod, iinst_ctrl, "_woprt" + ), + ) + + iinst_test = pg._sub2instance(icond, iprt, nprt) + ifile_test = os.path.join( + rundir, + INSTANCE_FILE_TEMPLATE.format( + casename + ".", self.atmmod, iinst_test, "_" + prt_name + ), + ) + + prt_rmse[prt_name] = pg.variables_rmse( + ifile_test, ifile_ctrl, var_list, "t_" + ) + rmse_prototype[icond] = pd.concat(prt_rmse) + rmse = pd.concat(rmse_prototype) + cld_rmse = np.reshape( + rmse.RMSE.values, (NUMBER_INITIAL_CONDITIONS, nprt - 1, nvar) + ) + + pg.rmse_writer( + os.path.join(rundir, FCLD_NC), + cld_rmse, + list(PERTURBATIONS.keys()), + var_list, + INIT_COND_FILE_TEMPLATE, + "cam", + ) + + logger.debug("PGN_INFO:copy:{} to {}".format(FCLD_NC, basegen_dir)) + shutil.copy(os.path.join(rundir, FCLD_NC), basegen_dir)
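The renaming step in `run_phase` above tags each instance's history file with its perturbation name via a regex substitution anchored at the `.nc` extension. Isolated as a sketch (the file name below is illustrative, not real model output):

```python
import re

# Append the perturbation name before the ".nc" extension, as run_phase does.
def perturbed_name(fname, prt_name):
    return re.sub(r"\.nc$", "_{}.nc".format(prt_name), fname)

print(perturbed_name("case.eam_0004.h0.0001-01-01-00000.nc", "posprt"))
```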
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/pre.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/pre.html new file mode 100644 index 00000000000..4af9a30dd74 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/pre.html @@ -0,0 +1,273 @@ + + + + + + CIME.SystemTests.pre — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.pre

+"""
+Implementation of the CIME pause/resume test: Tests having driver
+'pause' (write cpl restart file) and 'resume' (read cpl restart file)
+possibly changing the restart file. Results are compared to a non-pause/resume run.
+Test can also be run with other component combinations.
+Test requires DESP component to function correctly.
+"""
+
+import os.path
+import logging
+import glob
+
+from CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo
+from CIME.utils import expect
+from CIME.hist_utils import cprnc
+
+###############################################################################
+
+[docs] +class PRE(SystemTestsCompareTwo): + ############################################################################### + """ + Implementation of the CIME pause/resume test: Tests having driver + 'pause' (write cpl and/or other restart file(s)) and 'resume' + (read cpl and/or other restart file(s)) possibly changing restart + file. Compare to non-pause/resume run. + """ + + ########################################################################### + def __init__(self, case, **kwargs): + ########################################################################### + SystemTestsCompareTwo.__init__( + self, + case, + separate_builds=False, + run_two_suffix="pr", + run_one_description="no pause/resume", + run_two_description="pause/resume", + **kwargs + ) + self._stopopt = "" + self._stopn = 0 + self._cprnc_exe = None + + ########################################################################### + def _case_one_setup(self): + ########################################################################### + pass + + ########################################################################### + def _case_two_setup(self): + ########################################################################### + # Set up a pause/resume run + stopopt = self._case1.get_value("STOP_OPTION") + stopn = self._case1.get_value("STOP_N") + self._case.set_value("STOP_OPTION", stopopt) + self._case.set_value("STOP_N", stopn) + self._case.set_value("ESP_RUN_ON_PAUSE", "TRUE") + if stopn > 3: + pausen = 2 + else: + pausen = 1 + # End if + + self._case.set_value("PAUSE_OPTION", stopopt) + self._case.set_value("PAUSE_N", pausen) + comps = self._case.get_values("COMP_CLASSES") + pause_active = [] + for comp in comps: + pause_active.append(self._case.get_value("PAUSE_ACTIVE_{}".format(comp))) + + expect(any(pause_active), "No pause_active flag is set") + + self._case.flush() + + ########################################################################### +
+[docs] + def run_phase(self): # pylint: disable=arguments-differ + ########################################################################### + self._activate_case2() + should_match = self._case.get_value("DESP_MODE") == "NOCHANGE" + SystemTestsCompareTwo.run_phase(self, success_change=not should_match) + # Look for expected coupler restart files + logger = logging.getLogger(__name__) + self._activate_case1() + rundir1 = self._case.get_value("RUNDIR") + self._cprnc_exe = self._case.get_value("CCSM_CPRNC") + self._activate_case2() + rundir2 = self._case.get_value("RUNDIR") + compare_ok = True + multi_driver = self._case.get_value("MULTI_DRIVER") + comps = self._case.get_values("COMP_CLASSES") + for comp in comps: + if not self._case.get_value("PAUSE_ACTIVE_{}".format(comp)): + continue + if comp == "CPL": + if multi_driver: + ninst = self._case.get_value("NINST_MAX") + else: + ninst = 1 + else: + ninst = self._case.get_value("NINST_{}".format(comp)) + + comp_name = self._case.get_value("COMP_{}".format(comp)) + for index in range(1, ninst + 1): + if ninst == 1: + rname = "*.{}.r.*".format(comp_name) + else: + rname = "*.{}_{:04d}.r.*".format(comp_name, index) + + restart_files_1 = glob.glob(os.path.join(rundir1, rname)) + expect( + (len(restart_files_1) > 0), + "No case1 restart files for {}".format(comp), + ) + restart_files_2 = glob.glob(os.path.join(rundir2, rname)) + expect( + (len(restart_files_2) > len(restart_files_1)), + "No pause (restart) files found in case2 for {}".format(comp), + ) + # Do cprnc of restart files. 
+ rfile1 = restart_files_1[len(restart_files_1) - 1] + # rfile2 has to match rfile1 (same time string) + parts = os.path.basename(rfile1).split(".") + glob_str = "*.{}".format(".".join(parts[len(parts) - 4 :])) + restart_files_2 = glob.glob(os.path.join(rundir2, glob_str)) + expect( + (len(restart_files_2) == 1), + "Missing case2 restart file, {}", + glob_str, + ) + rfile2 = restart_files_2[0] + ok = cprnc( + comp, rfile1, rfile2, self._case, rundir2, cprnc_exe=self._cprnc_exe + )[0] + logger.warning( + "CPRNC result for {}: {}".format( + os.path.basename(rfile1), + "PASS" if (ok == should_match) else "FAIL", + ) + ) + compare_ok = compare_ok and (should_match == ok) + + expect( + compare_ok, + "Not all restart files {}".format( + "matched" if should_match else "failed to match" + ), + )
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/rep.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/rep.html new file mode 100644 index 00000000000..91511c21487 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/rep.html @@ -0,0 +1,144 @@ + + + + + + CIME.SystemTests.rep — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.rep

+"""
+Implementation of the CIME REP test
+
+This test verifies that two identical runs give bit-for-bit results
+"""
+
+from CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo
+
+
+
+[docs] +class REP(SystemTestsCompareTwo): + def __init__(self, case, **kwargs): + SystemTestsCompareTwo.__init__( + self, case, separate_builds=False, run_two_suffix="rep2", **kwargs + ) + + def _case_one_setup(self): + pass + + def _case_two_setup(self): + pass
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/restart_tests.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/restart_tests.html new file mode 100644 index 00000000000..fcaae2cdddc --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/restart_tests.html @@ -0,0 +1,176 @@ + + + + + + CIME.SystemTests.restart_tests — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.SystemTests.restart_tests

+"""
+Abstract class for restart tests
+
+"""
+
+from CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo
+from CIME.XML.standard_module_setup import *
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class RestartTest(SystemTestsCompareTwo): + def __init__( + self, + case, + separate_builds, + run_two_suffix="restart", + run_one_description="initial", + run_two_description="restart", + multisubmit=False, + **kwargs + ): + SystemTestsCompareTwo.__init__( + self, + case, + separate_builds, + run_two_suffix=run_two_suffix, + run_one_description=run_one_description, + run_two_description=run_two_description, + multisubmit=multisubmit, + **kwargs + ) + + def _case_one_setup(self): + stop_n = self._case1.get_value("STOP_N") + expect(stop_n >= 3, "STOP_N must be at least 3, STOP_N = {}".format(stop_n)) + + def _case_two_setup(self): + rest_n = self._case1.get_value("REST_N") + stop_n = self._case1.get_value("STOP_N") + stop_new = stop_n - rest_n + expect( + stop_new > 0, + "ERROR: stop_n value {:d} too short {:d} {:d}".format( + stop_new, stop_n, rest_n + ), + ) + # hist_n is set to the stop_n value of case1 + self._case.set_value("HIST_N", stop_n) + self._case.set_value("STOP_N", stop_new) + self._case.set_value("CONTINUE_RUN", True) + self._case.set_value("REST_OPTION", "never")
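The `_case_two_setup` arithmetic above can be summarized as: the restart run continues for the remaining `STOP_N - REST_N` units, while `HIST_N` is pinned to case1's `STOP_N` so both runs write history at the same model time. A toy sketch of that computation, using a plain dict to stand in for a CIME case object (an assumption for illustration):

```python
def restart_case_two_values(case1):
    # Derive case-two settings from case-one's STOP_N and REST_N.
    stop_n = case1["STOP_N"]
    rest_n = case1["REST_N"]
    stop_new = stop_n - rest_n
    assert stop_new > 0, "stop_n value {} too short".format(stop_new)
    return {
        "HIST_N": stop_n,        # history written at case1's stop time
        "STOP_N": stop_new,      # run only the remaining segment
        "CONTINUE_RUN": True,    # continue from case1's restart files
        "REST_OPTION": "never",  # no further restarts needed
    }

print(restart_case_two_values({"STOP_N": 11, "REST_N": 5}))
```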
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/reuseinitfiles.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/reuseinitfiles.html new file mode 100644 index 00000000000..cf7af1f7880 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/reuseinitfiles.html @@ -0,0 +1,185 @@ + + + + + + CIME.SystemTests.reuseinitfiles — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.SystemTests.reuseinitfiles

+"""
+Implementation of the CIME REUSEINITFILES test
+
+This test does two runs:
+
+(1) A standard initial run
+
+(2) A run that reuses the init-generated files from run (1).
+
+This verifies that it works to reuse these init-generated files, and that you can get
+bit-for-bit results by doing so. This is important because these files are typically
+reused whenever a user reruns an initial case.
+"""
+
+import os
+import shutil
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo
+from CIME.SystemTests.system_tests_common import INIT_GENERATED_FILES_DIRNAME
+
+
+
+[docs] +class REUSEINITFILES(SystemTestsCompareTwo): + def __init__(self, case, **kwargs): + SystemTestsCompareTwo.__init__( + self, + case, + separate_builds=False, + run_two_suffix="reuseinit", + run_one_description="standard initial run", + run_two_description="reuse init-generated files from run 1", + # The following line is a key part of this test: we will copy the + # init_generated_files from case1 and then need to make sure they are NOT + # deleted like is normally done for tests: + case_two_keep_init_generated_files=True, + **kwargs + ) + + def _case_one_setup(self): + pass + + def _case_two_setup(self): + pass + + def _case_two_custom_prerun_action(self): + case1_igf_dir = os.path.join( + self._case1.get_value("RUNDIR"), INIT_GENERATED_FILES_DIRNAME + ) + case2_igf_dir = os.path.join( + self._case2.get_value("RUNDIR"), INIT_GENERATED_FILES_DIRNAME + ) + + expect( + os.path.isdir(case1_igf_dir), + "ERROR: Expected a directory named {} in case1's rundir".format( + INIT_GENERATED_FILES_DIRNAME + ), + ) + if os.path.isdir(case2_igf_dir): + shutil.rmtree(case2_igf_dir) + + shutil.copytree(case1_igf_dir, case2_igf_dir)
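The custom prerun action above replaces case2's `init_generated_files` directory with a fresh copy of case1's. The same remove-then-copy pattern, reduced to a standalone sketch using temporary directories created purely for the demonstration:

```python
import os
import shutil
import tempfile

def refresh_copy(src_dir, dst_dir):
    # Replace the destination directory with a fresh copy of the source,
    # dropping any stale contents first (mirrors rmtree + copytree above).
    if os.path.isdir(dst_dir):
        shutil.rmtree(dst_dir)
    shutil.copytree(src_dir, dst_dir)

base = tempfile.mkdtemp()
src = os.path.join(base, "case1_init")
os.makedirs(src)
open(os.path.join(src, "a.nc"), "w").close()
dst = os.path.join(base, "case2_init")
os.makedirs(dst)
open(os.path.join(dst, "stale.nc"), "w").close()

refresh_copy(src, dst)
print(sorted(os.listdir(dst)))  # → ['a.nc']
```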
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/seq.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/seq.html new file mode 100644 index 00000000000..5491aa7fc93 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/seq.html @@ -0,0 +1,176 @@ + + + + + + CIME.SystemTests.seq — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.seq

+"""
+Sequencing bit-for-bit test (10-day sequential vs. concurrent runs)
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class SEQ(SystemTestsCompareTwo): + def __init__(self, case, **kwargs): + """ + initialize an object interface to file env_test.xml in the case directory + """ + SystemTestsCompareTwo.__init__( + self, + case, + separate_builds=True, + run_two_suffix="seq", + run_one_description="base", + run_two_description="sequence", + **kwargs + ) + + def _case_one_setup(self): + pass + + def _case_two_setup(self): + comp_classes = self._case.get_values("COMP_CLASSES") + any_changes = False + for comp in comp_classes: + any_changes |= self._case.get_value("ROOTPE_{}".format(comp)) != 0 + if any_changes: + for comp in comp_classes: + self._case.set_value("ROOTPE_{}".format(comp), 0) + else: + totalpes = self._case.get_value("TOTALPES") + newntasks = max(1, totalpes // len(comp_classes)) + rootpe = newntasks + + for comp in comp_classes: + # here we set the cpl to have the first 2 tasks + # and each component to have a different ROOTPE + if comp == "CPL": + self._case.set_value("NTASKS_CPL", newntasks) + else: + self._case.set_value("NTASKS_{}".format(comp), newntasks) + self._case.set_value("ROOTPE_{}".format(comp), rootpe) + rootpe += newntasks + + self._case.flush() + self._case.case_setup(test_mode=True, reset=True)
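One plausible reading of the flattened `_case_two_setup` above (indentation is lost in this rendering): the total PE count is split evenly across the component classes, CPL keeps ROOTPE 0, and every other component gets its own disjoint ROOTPE range so that components execute sequentially rather than concurrently. A hedged sketch of that layout computation, with the returned dict invented for illustration:

```python
def sequential_layout(totalpes, comp_classes):
    # Split PEs evenly; CPL stays at ROOTPE 0, each other component class
    # gets the next block of PEs so no two components overlap.
    newntasks = max(1, totalpes // len(comp_classes))
    layout, rootpe = {}, newntasks
    for comp in comp_classes:
        if comp == "CPL":
            layout[comp] = {"NTASKS": newntasks}
        else:
            layout[comp] = {"NTASKS": newntasks, "ROOTPE": rootpe}
            rootpe += newntasks
    return layout

print(sequential_layout(12, ["CPL", "ATM", "OCN"]))
```

With 12 PEs and three classes, each gets 4 tasks and ATM/OCN start at PEs 4 and 8 respectively.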
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/sms.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/sms.html new file mode 100644 index 00000000000..3c3f7e437f0 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/sms.html @@ -0,0 +1,141 @@ + + + + + + CIME.SystemTests.sms — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.sms

+"""
+CIME smoke test. This class inherits from SystemTestsCommon.
+It does a startup run with restarts off and optionally compares to or generates baselines
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.system_tests_common import SystemTestsCommon
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class SMS(SystemTestsCommon): + def __init__(self, case, **kwargs): + """ + initialize an object interface to the SMS system test + """ + SystemTestsCommon.__init__(self, case, **kwargs)
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/system_tests_common.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/system_tests_common.html new file mode 100644 index 00000000000..19edc5c1173 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/system_tests_common.html @@ -0,0 +1,1393 @@ + + + + + + CIME.SystemTests.system_tests_common — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.SystemTests.system_tests_common

+"""
+Base class for CIME system tests
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.XML.env_run import EnvRun
+from CIME.XML.env_test import EnvTest
+from CIME.utils import (
+    append_testlog,
+    get_model,
+    safe_copy,
+    get_timestamp,
+    CIMEError,
+    expect,
+    get_current_commit,
+    SharedArea,
+)
+from CIME.test_status import *
+from CIME.hist_utils import (
+    copy_histfiles,
+    compare_test,
+    generate_teststatus,
+    compare_baseline,
+    get_ts_synopsis,
+    generate_baseline,
+)
+from CIME.config import Config
+from CIME.provenance import save_test_time, get_test_success
+from CIME.locked_files import LOCKED_DIR, lock_file, is_locked
+from CIME.baselines.performance import (
+    get_latest_cpl_logs,
+    perf_get_memory_list,
+    perf_compare_memory_baseline,
+    perf_compare_throughput_baseline,
+    perf_write_baseline,
+    load_coupler_customization,
+)
+import CIME.build as build
+
+import glob, gzip, time, traceback, os
+from contextlib import ExitStack
+
+logger = logging.getLogger(__name__)
+
+# Name of directory under the run directory in which init-generated files are placed
+INIT_GENERATED_FILES_DIRNAME = "init_generated_files"
+
+
+
+[docs] +def fix_single_exe_case(case): + """Fixes cases created with --single-exe. + + When tests are created using --single-exe, the test_scheduler will set + `BUILD_COMPLETE` to True, but some tests require calls to `case.case_setup` + which can reset `BUILD_COMPLETE` to False. This function will check if a + case was created with `--single-exe` and ensure `BUILD_COMPLETE` is True. + + Returns: + True when the case required modification, otherwise False. + """ + if is_single_exe_case(case): + with ExitStack() as stack: + # enter context if case is still read-only; entering the context + # multiple times can cause side effects for later calls to + # `set_value` when it's assumed the case is writeable. + if case._read_only_mode: + stack.enter_context(case) + + case.set_value("BUILD_COMPLETE", True) + + return True + + return False
+ + + +
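The `ExitStack` trick in `fix_single_exe_case` enters the case's context manager only when the case is still read-only, so the same code path works whether or not the caller already opened it for writing. A self-contained sketch of that conditional-context pattern; the `ToyCase` class is invented for illustration:

```python
from contextlib import ExitStack

class ToyCase:
    """Stand-in for a CIME case: writeable only inside its context."""
    def __init__(self):
        self.read_only = True
        self.values = {}
    def __enter__(self):
        self.read_only = False
        return self
    def __exit__(self, *exc):
        self.read_only = True
    def set_value(self, key, val):
        assert not self.read_only, "case is read-only"
        self.values[key] = val

case = ToyCase()
with ExitStack() as stack:
    if case.read_only:
        # Enter the case's context only when needed; ExitStack guarantees
        # the matching __exit__ runs when the with-block ends.
        stack.enter_context(case)
    case.set_value("BUILD_COMPLETE", True)

print(case.values)  # → {'BUILD_COMPLETE': True}
```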
+[docs] +def is_single_exe_case(case): + """Determines if the case was created with the --single-exe option. + + If `CASEROOT` is not part of `EXEROOT` and the `TEST` variable is True, + then it's safe to assume the case was created with `./create_test` + and the `--single-exe` option. + + Returns: + True when the case was created with `--single-exe`, otherwise False. + """ + caseroot = case.get_value("CASEROOT") + + exeroot = case.get_value("EXEROOT") + + test = case.get_value("TEST") + + return caseroot not in exeroot and test
+ + + +
+[docs] +class SystemTestsCommon(object): + def __init__( + self, case, expected=None, **kwargs + ): # pylint: disable=unused-argument + """ + initialize a CIME system test object, if the locked env_run.orig.xml + does not exist copy the current env_run.xml file. If it does exist restore values + changed in a previous run of the test. + """ + self._case = case + caseroot = case.get_value("CASEROOT") + self._caseroot = caseroot + self._orig_caseroot = caseroot + self._runstatus = None + self._casebaseid = self._case.get_value("CASEBASEID") + self._test_status = TestStatus(test_dir=caseroot, test_name=self._casebaseid) + self._init_environment(caseroot) + self._init_locked_files(caseroot, expected) + self._skip_pnl = False + self._cpllog = ( + "drv" if self._case.get_value("COMP_INTERFACE") == "nuopc" else "cpl" + ) + self._ninja = False + self._dry_run = False + self._user_separate_builds = False + self._expected_num_cmp = None + + def _init_environment(self, caseroot): + """ + Do initializations of environment variables that are needed in __init__ + """ + # Needed for sh scripts + os.environ["CASEROOT"] = caseroot + + def _init_locked_files(self, caseroot, expected): + """ + If the locked env_run.orig.xml does not exist, copy the current + env_run.xml file. If it does exist, restore values changed in a previous + run of the test. + """ + if is_locked("env_run.orig.xml"): + self.compare_env_run(expected=expected) + elif os.path.isfile(os.path.join(caseroot, "env_run.xml")): + lock_file("env_run.xml", caseroot=caseroot, newname="env_run.orig.xml") + + def _resetup_case(self, phase, reset=False): + """ + Re-setup this case. This is necessary if user is re-running an already-run + phase. 
+ """ + # We never want to re-setup if we're doing the resubmitted run + phase_status = self._test_status.get_status(phase) + phase_comment = self._test_status.get_comment(phase) + rerunning = ( + phase_status != TEST_PEND_STATUS or phase_comment == TEST_RERUN_COMMENT + ) + if reset or (self._case.get_value("IS_FIRST_RUN") and rerunning): + + logging.warning( + "Resetting case due to detected re-run of phase {}".format(phase) + ) + self._case.set_initial_test_values() + self._case.case_setup(reset=True, test_mode=True) + fix_single_exe_case(self._case) + +
+[docs] + def build( + self, + sharedlib_only=False, + model_only=False, + ninja=False, + dry_run=False, + separate_builds=False, + skip_submit=False, + ): + """ + Do NOT override this method, this method is the framework that + controls the build phase. build_phase is the extension point + that subclasses should use. + """ + success = True + self._ninja = ninja + self._dry_run = dry_run + self._user_separate_builds = separate_builds + + was_run_pend = self._test_status.current_is(RUN_PHASE, TEST_PEND_STATUS) + + for phase_name, phase_bool in [ + (SHAREDLIB_BUILD_PHASE, not model_only), + (MODEL_BUILD_PHASE, not sharedlib_only), + ]: + if phase_bool: + self._resetup_case(phase_name) + with self._test_status: + self._test_status.set_status(phase_name, TEST_PEND_STATUS) + + start_time = time.time() + try: + self.build_phase( + sharedlib_only=(phase_name == SHAREDLIB_BUILD_PHASE), + model_only=(phase_name == MODEL_BUILD_PHASE), + ) + except BaseException as e: # We want KeyboardInterrupts to generate FAIL status + success = False + if isinstance(e, CIMEError): + # Don't want to print stacktrace for a build failure since that + # is not a CIME/infrastructure problem. + excmsg = str(e) + else: + excmsg = "Exception during build:\n{}\n{}".format( + str(e), traceback.format_exc() + ) + + append_testlog(excmsg, self._orig_caseroot) + raise + + finally: + time_taken = time.time() - start_time + with self._test_status: + self._test_status.set_status( + phase_name, + TEST_PASS_STATUS if success else TEST_FAIL_STATUS, + comments=("time={:d}".format(int(time_taken))), + ) + + # Building model while job is queued and awaiting run + if ( + skip_submit + and was_run_pend + and self._test_status.current_is(SUBMIT_PHASE, TEST_PEND_STATUS) + ): + with self._test_status: + self._test_status.set_status(SUBMIT_PHASE, TEST_PASS_STATUS) + + return success
+ + +
+[docs] + def build_phase(self, sharedlib_only=False, model_only=False): + """ + This is the default build phase implementation, it just does an individual build. + This is the subclass' extension point if they need to define a custom build + phase. + + PLEASE THROW EXCEPTION ON FAIL + """ + self.build_indv(sharedlib_only=sharedlib_only, model_only=model_only)
+ + +
+[docs] + def build_indv(self, sharedlib_only=False, model_only=False): + """ + Perform an individual build + """ + model = self._case.get_value("MODEL") + build.case_build( + self._caseroot, + case=self._case, + sharedlib_only=sharedlib_only, + model_only=model_only, + save_build_provenance=not model == "cesm", + ninja=self._ninja, + dry_run=self._dry_run, + separate_builds=self._user_separate_builds, + ) + logger.info("build_indv complete")
+ + +
+[docs] + def clean_build(self, comps=None): + if comps is None: + comps = [x.lower() for x in self._case.get_values("COMP_CLASSES")] + build.clean(self._case, cleanlist=comps)
+ + +
+[docs] + def run(self, skip_pnl=False): + """ + Do NOT override this method, this method is the framework that controls + the run phase. run_phase is the extension point that subclasses should use. + """ + success = True + start_time = time.time() + self._skip_pnl = skip_pnl + try: + self._resetup_case(RUN_PHASE) + do_baseline_ops = True + with self._test_status: + self._test_status.set_status(RUN_PHASE, TEST_PEND_STATUS) + + # We do not want to do multiple repetitions of baseline operations for + # multi-submit tests. We just want to do them upon the final submission. + # Other submissions will need to mark those phases as PEND to ensure wait_for_tests + # waits for them. + if self._case.get_value("BATCH_SYSTEM") != "none": + do_baseline_ops = self._case.get_value("RESUBMIT") == 0 + + self.run_phase() + if self._case.get_value("GENERATE_BASELINE"): + if do_baseline_ops: + self._phase_modifying_call(GENERATE_PHASE, self._generate_baseline) + else: + with self._test_status: + self._test_status.set_status(GENERATE_PHASE, TEST_PEND_STATUS) + + if self._case.get_value("COMPARE_BASELINE"): + if do_baseline_ops: + self._phase_modifying_call(BASELINE_PHASE, self._compare_baseline) + self._phase_modifying_call(MEMCOMP_PHASE, self._compare_memory) + self._phase_modifying_call( + THROUGHPUT_PHASE, self._compare_throughput + ) + else: + with self._test_status: + self._test_status.set_status(BASELINE_PHASE, TEST_PEND_STATUS) + self._test_status.set_status(MEMCOMP_PHASE, TEST_PEND_STATUS) + self._test_status.set_status(THROUGHPUT_PHASE, TEST_PEND_STATUS) + + self._phase_modifying_call(MEMLEAK_PHASE, self._check_for_memleak) + self._phase_modifying_call(STARCHIVE_PHASE, self._st_archive_case_test) + + except BaseException as e: # We want KeyboardInterrupts to generate FAIL status + success = False + if isinstance(e, CIMEError): + # Don't want to print stacktrace for a model failure since that + # is not a CIME/infrastructure problem. 
+ excmsg = str(e) + else: + excmsg = "Exception during run:\n{}\n{}".format( + str(e), traceback.format_exc() + ) + + append_testlog(excmsg, self._orig_caseroot) + raise + + finally: + # Writing the run status should be the very last thing due to wait_for_tests + time_taken = time.time() - start_time + status = TEST_PASS_STATUS if success else TEST_FAIL_STATUS + with self._test_status: + self._test_status.set_status( + RUN_PHASE, status, comments=("time={:d}".format(int(time_taken))) + ) + + config = Config.instance() + + if config.verbose_run_phase: + # If run phase worked, remember the time it took in order to improve later walltime ests + baseline_root = self._case.get_value("BASELINE_ROOT") + if success: + srcroot = self._case.get_value("SRCROOT") + save_test_time( + baseline_root, + self._casebaseid, + time_taken, + get_current_commit(repo=srcroot), + ) + + # If overall things did not pass, offer the user some insight into what might have broken things + overall_status = self._test_status.get_overall_test_status( + ignore_namelists=True + )[0] + if overall_status != TEST_PASS_STATUS: + srcroot = self._case.get_value("SRCROOT") + worked_before, last_pass, last_fail_transition = get_test_success( + baseline_root, srcroot, self._casebaseid + ) + + if worked_before: + if last_pass is not None: + # commits between last_pass and now broke things + stat, out, err = run_cmd( + "git rev-list --first-parent {}..{}".format( + last_pass, "HEAD" + ), + from_dir=srcroot, + ) + if stat == 0: + append_testlog( + "NEW FAIL: Potentially broken merges:\n{}".format( + out + ), + self._orig_caseroot, + ) + else: + logger.warning( + "Unable to list potentially broken merges: {}\n{}".format( + out, err + ) + ) + else: + if last_pass is not None and last_fail_transition is not None: + # commits between last_pass and last_fail_transition broke things + stat, out, err = run_cmd( + "git rev-list --first-parent {}..{}".format( + last_pass, last_fail_transition + ), + from_dir=srcroot, + 
) + if stat == 0: + append_testlog( + "OLD FAIL: Potentially broken merges:\n{}".format( + out + ), + self._orig_caseroot, + ) + else: + logger.warning( + "Unable to list potentially broken merges: {}\n{}".format( + out, err + ) + ) + + if config.baseline_store_teststatus and self._case.get_value( + "GENERATE_BASELINE" + ): + baseline_dir = os.path.join( + self._case.get_value("BASELINE_ROOT"), + self._case.get_value("BASEGEN_CASE"), + ) + generate_teststatus(self._caseroot, baseline_dir) + + # We return success if the run phase worked; memleaks, diffs will NOT be taken into account + # with this return value. + return success
+ + +
+[docs] + def run_phase(self): + """ + This is the default run phase implementation, it just does an individual run. + This is the subclass' extension point if they need to define a custom run phase. + + PLEASE THROW AN EXCEPTION ON FAIL + """ + self.run_indv()
+ + + def _get_caseroot(self): + """ + Returns the current CASEROOT value + """ + return self._caseroot + + def _set_active_case(self, case): + """ + Use for tests that have multiple cases + """ + self._case = case + self._case.load_env(reset=True) + self._caseroot = case.get_value("CASEROOT") + +
+[docs] + def run_indv( + self, + suffix="base", + st_archive=False, + submit_resubmits=None, + keep_init_generated_files=False, + ): + """ + Perform an individual run. Raises an EXCEPTION on fail. + + keep_init_generated_files: If False (the default), we remove the + init_generated_files subdirectory of the run directory before running the case. + This is usually what we want for tests, but some specific tests may want to leave + this directory in place, so they can set this variable to True to do so. + """ + stop_n = self._case.get_value("STOP_N") + stop_option = self._case.get_value("STOP_OPTION") + run_type = self._case.get_value("RUN_TYPE") + rundir = self._case.get_value("RUNDIR") + try: + self._case.check_all_input_data() + except CIMEError: + caseroot = self._case.get_value("CASEROOT") + raise CIMEError( + "Could not find all inputdata on any server, try " + "manually running `./check_input_data --download " + f"--verbose` from {caseroot!r}." + ) from None + if submit_resubmits is None: + do_resub = self._case.get_value("BATCH_SYSTEM") != "none" + else: + do_resub = submit_resubmits + + # remove any cprnc output leftover from previous runs + for compout in glob.iglob(os.path.join(rundir, "*.cprnc.out")): + os.remove(compout) + + if not keep_init_generated_files: + # remove all files in init_generated_files directory if it exists + init_generated_files_dir = os.path.join( + rundir, INIT_GENERATED_FILES_DIRNAME + ) + if os.path.isdir(init_generated_files_dir): + for init_file in glob.iglob( + os.path.join(init_generated_files_dir, "*") + ): + os.remove(init_file) + + infostr = "doing an {:d} {} {} test".format(stop_n, stop_option, run_type) + + rest_option = self._case.get_value("REST_OPTION") + if rest_option == "none" or rest_option == "never": + infostr += ", no restarts written" + else: + rest_n = self._case.get_value("REST_N") + infostr += ", with restarts every {:d} {}".format(rest_n, rest_option) + + logger.info(infostr) + +
self._case.case_run(skip_pnl=self._skip_pnl, submit_resubmits=do_resub) + + if not self._coupler_log_indicates_run_complete(): + expect(False, "Coupler did not indicate run passed") + + if suffix is not None: + self._component_compare_copy(suffix) + + if st_archive: + self._case.case_st_archive(resubmit=True)
+ + + def _coupler_log_indicates_run_complete(self): + newestcpllogfiles = get_latest_cpl_logs(self._case) + logger.debug("Latest Coupler log file(s) {}".format(newestcpllogfiles)) + # Exception is raised if the file is not compressed + allgood = len(newestcpllogfiles) + for cpllog in newestcpllogfiles: + try: + if b"SUCCESSFUL TERMINATION" in gzip.open(cpllog, "rb").read(): + allgood = allgood - 1 + except Exception as e: # Probably want to be more specific here + msg = e.__str__() + + logger.info( + "{} is not compressed, assuming run failed {}".format(cpllog, msg) + ) + + return allgood == 0 + + def _component_compare_copy(self, suffix): + # Only match .nc files + comments, num_copied = copy_histfiles(self._case, suffix, match_suffix="nc") + self._expected_num_cmp = num_copied + + append_testlog(comments, self._orig_caseroot) + + def _log_cprnc_output_tail(self, filename_pattern, prepend=None): + rundir = self._case.get_value("RUNDIR") + + glob_pattern = "{}/{}".format(rundir, filename_pattern) + + cprnc_logs = glob.glob(glob_pattern) + + for output in cprnc_logs: + with open(output) as fin: + cprnc_log_tail = fin.readlines()[-20:] + + cprnc_log_tail.insert(0, "tail -n20 {}\n\n".format(output)) + + if prepend is not None: + cprnc_log_tail.insert(0, "{}\n\n".format(prepend)) + + append_testlog("".join(cprnc_log_tail), self._orig_caseroot) + + def _component_compare_test( + self, suffix1, suffix2, success_change=False, ignore_fieldlist_diffs=False + ): + """ + Return value is not generally checked, but is provided in case a custom + run case needs indirection based on success. + If success_change is True, success requires some files to be different. + If ignore_fieldlist_diffs is True, then: If the two cases differ only in their + field lists (i.e., all shared fields are bit-for-bit, but one case has some + diagnostic fields that are missing from the other case), treat the two cases + as identical. 
+ """ + success, comments, num_compared = self._do_compare_test( + suffix1, suffix2, ignore_fieldlist_diffs=ignore_fieldlist_diffs + ) + if success_change: + success = not success + + if ( + self._expected_num_cmp is not None + and num_compared is not None + and self._expected_num_cmp != num_compared + ): + comments = comments.replace("PASS", "") + comments += """\nWARNING +Expected to compare {} hist files, but only compared {}. It's possible +that the hist_file_extension entry in config_archive.xml is not correct +for some of your components. +""".format( + self._expected_num_cmp, num_compared + ) + + append_testlog(comments, self._orig_caseroot) + + pattern = "*.nc.{}.cprnc.out".format(suffix1) + message = "compared suffixes suffix1 {!r} suffix2 {!r}".format(suffix1, suffix2) + + self._log_cprnc_output_tail(pattern, message) + + status = TEST_PASS_STATUS if success else TEST_FAIL_STATUS + with self._test_status: + self._test_status.set_status( + "{}_{}_{}".format(COMPARE_PHASE, suffix1, suffix2), status + ) + return success + + def _do_compare_test(self, suffix1, suffix2, ignore_fieldlist_diffs=False): + """ + Wraps the call to compare_test to facilitate replacement in unit + tests + """ + return compare_test( + self._case, suffix1, suffix2, ignore_fieldlist_diffs=ignore_fieldlist_diffs + ) + + def _st_archive_case_test(self): + result = self._case.test_env_archive() + with self._test_status: + if result: + self._test_status.set_status(STARCHIVE_PHASE, TEST_PASS_STATUS) + else: + self._test_status.set_status(STARCHIVE_PHASE, TEST_FAIL_STATUS) + + def _phase_modifying_call(self, phase, function): + """ + Ensures that unexpected exceptions from phases will result in a FAIL result + in the TestStatus file for that phase. 
+ """ + try: + function() + except Exception as e: # Do NOT want to catch KeyboardInterrupt + msg = e.__str__() + excmsg = "Exception during {}:\n{}\n{}".format( + phase, msg, traceback.format_exc() + ) + + logger.warning(excmsg) + append_testlog(excmsg, self._orig_caseroot) + + with self._test_status: + self._test_status.set_status( + phase, TEST_FAIL_STATUS, comments="exception" + ) + + def _check_for_memleak(self): + """ + Examine memory usage as recorded in the cpl log file and look for unexpected + increases. + """ + config = load_coupler_customization(self._case) + + # default to 0.1 + tolerance = self._case.get_value("TEST_MEMLEAK_TOLERANCE") or 0.1 + + expect(tolerance > 0.0, "Bad value for memleak tolerance in test") + + with self._test_status: + try: + memleak, comment = config.perf_check_for_memory_leak( + self._case, tolerance + ) + except AttributeError: + memleak, comment = perf_check_for_memory_leak(self._case, tolerance) + + if memleak: + append_testlog(comment, self._orig_caseroot) + + status = TEST_FAIL_STATUS + else: + status = TEST_PASS_STATUS + + self._test_status.set_status(MEMLEAK_PHASE, status, comments=comment) + +
+[docs] + def compare_env_run(self, expected=None): + """ + Compare env_run file to original and warn about differences + """ + components = self._case.get_values("COMP_CLASSES") + f1obj = self._case.get_env("run") + f2obj = EnvRun( + self._caseroot, + os.path.join(LOCKED_DIR, "env_run.orig.xml"), + components=components, + ) + diffs = f1obj.compare_xml(f2obj) + for key in diffs.keys(): + if expected is not None and key in expected: + logging.warning(" Resetting {} for test".format(key)) + f1obj.set_value(key, f2obj.get_value(key, resolved=False)) + else: + print( + "WARNING: Found difference in test {}: case: {} original value {}".format( + key, diffs[key][0], diffs[key][1] + ) + ) + return False + return True
+ + + def _compare_memory(self): + """ + Compares current test memory usage to baseline. + """ + with self._test_status: + try: + below_tolerance, comment = perf_compare_memory_baseline(self._case) + except Exception as e: + logger.info("Failed to compare memory usage baseline: {!s}".format(e)) + + self._test_status.set_status( + MEMCOMP_PHASE, TEST_FAIL_STATUS, comments=str(e) + ) + else: + if below_tolerance is not None: + append_testlog(comment, self._orig_caseroot) + + if ( + below_tolerance + and self._test_status.get_status(MEMCOMP_PHASE) is None + ): + self._test_status.set_status(MEMCOMP_PHASE, TEST_PASS_STATUS) + elif ( + self._test_status.get_status(MEMCOMP_PHASE) != TEST_FAIL_STATUS + ): + self._test_status.set_status( + MEMCOMP_PHASE, TEST_FAIL_STATUS, comments=comment + ) + + def _compare_throughput(self): + """ + Compares current test throughput to baseline. + """ + with self._test_status: + try: + below_tolerance, comment = perf_compare_throughput_baseline(self._case) + except Exception as e: + logger.info("Failed to compare throughput baseline: {!s}".format(e)) + + self._test_status.set_status( + THROUGHPUT_PHASE, TEST_FAIL_STATUS, comments=str(e) + ) + else: + if below_tolerance is not None: + append_testlog(comment, self._orig_caseroot) + + if ( + below_tolerance + and self._test_status.get_status(THROUGHPUT_PHASE) is None + ): + self._test_status.set_status(THROUGHPUT_PHASE, TEST_PASS_STATUS) + elif ( + self._test_status.get_status(THROUGHPUT_PHASE) + != TEST_FAIL_STATUS + ): + self._test_status.set_status( + THROUGHPUT_PHASE, TEST_FAIL_STATUS, comments=comment + ) + + def _compare_baseline(self): + """ + compare the current test output to a baseline result + """ + with self._test_status: + # compare baseline + success, comments = compare_baseline(self._case) + + append_testlog(comments, self._orig_caseroot) + + pattern = "*.nc.cprnc.out" + + self._log_cprnc_output_tail(pattern) + + status = TEST_PASS_STATUS if success else TEST_FAIL_STATUS + 
baseline_name = self._case.get_value("BASECMP_CASE") + ts_comments = ( + os.path.dirname(baseline_name) + ": " + get_ts_synopsis(comments) + ) + self._test_status.set_status(BASELINE_PHASE, status, comments=ts_comments) + + def _generate_baseline(self): + """ + generate a new baseline case based on the current test + """ + with self._test_status: + # generate baseline + success, comments = generate_baseline(self._case) + append_testlog(comments, self._orig_caseroot) + status = TEST_PASS_STATUS if success else TEST_FAIL_STATUS + baseline_name = self._case.get_value("BASEGEN_CASE") + self._test_status.set_status( + GENERATE_PHASE, status, comments=os.path.dirname(baseline_name) + ) + basegen_dir = os.path.join( + self._case.get_value("BASELINE_ROOT"), + self._case.get_value("BASEGEN_CASE"), + ) + # copy latest cpl log to baseline + # drop the date so that the name is generic + newestcpllogfiles = get_latest_cpl_logs(self._case) + with SharedArea(): + # TODO ever actually more than one cpl log? + for cpllog in newestcpllogfiles: + m = re.search(r"/({}.*.log).*.gz".format(self._cpllog), cpllog) + + if m is not None: + baselog = os.path.join(basegen_dir, m.group(1)) + ".gz" + + safe_copy( + cpllog, + os.path.join(basegen_dir, baselog), + preserve_meta=False, + ) + + perf_write_baseline(self._case, basegen_dir, cpllog)
+ + + +
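The regular expression in `_generate_baseline` strips the date stamp from the gzipped coupler log so the copy in the baseline directory gets a generic name. A standalone sketch of that extraction (hypothetical helper name and paths, not the CIME API; the dots are escaped here, which the original pattern does not do):

```python
import re

def generic_log_name(cpllog_path, cpllog="cpl"):
    """Extract '<cpllog>...log' from a dated, gzipped log path and re-append '.gz'."""
    m = re.search(r"/({}.*\.log)\..*\.gz".format(cpllog), cpllog_path)
    return None if m is None else m.group(1) + ".gz"

# A dated run-directory log maps to a date-free baseline name
name = generic_log_name("/scratch/case/run/cpl.log.231109-120000.gz")
```

With this rule, `cpl.log.231109-120000.gz` is stored in the baseline directory as `cpl.log.gz`, so later comparisons don't depend on the run date.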
+[docs] +def perf_check_for_memory_leak(case, tolerance): + leak = False + comment = "" + + latestcpllogs = get_latest_cpl_logs(case) + + for cpllog in latestcpllogs: + try: + memlist = perf_get_memory_list(case, cpllog) + except RuntimeError: + return False, "insufficient data for memleak test" + + # last day - second day, skip first day, can be too low while initializing + elapsed_days = int(memlist[-1][0]) - int(memlist[1][0]) + + finalmem, originalmem = float(memlist[-1][1]), float(memlist[1][1]) + + memdiff = -1 if originalmem <= 0 else (finalmem - originalmem) / originalmem + + if memdiff < 0: + leak = False + comment = "data for memleak test is insufficient" + elif memdiff < tolerance: + leak = False + comment = "" + else: + leak = True + comment = ( + "memleak detected, memory went from {:f} to {:f} in {:d} days".format( + originalmem, finalmem, elapsed_days + ) + ) + + return leak, comment
+ + + +
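The tolerance check in `perf_check_for_memory_leak` reduces to a relative-growth computation over (day, high-water) samples. A minimal standalone sketch of that logic (hypothetical function name and data, not the CIME API):

```python
def check_memory_growth(memlist, tolerance):
    """Return (leak, comment) given [(day, highwater_mb), ...] samples.

    Mirrors the logic above: the first sample is skipped because memory
    can be artificially low while the model is still initializing.
    """
    if len(memlist) < 3:
        return False, "insufficient data for memleak test"

    elapsed_days = int(memlist[-1][0]) - int(memlist[1][0])
    finalmem, originalmem = float(memlist[-1][1]), float(memlist[1][1])

    # Negative sentinel when the baseline sample is unusable
    memdiff = -1 if originalmem <= 0 else (finalmem - originalmem) / originalmem

    if memdiff < 0:
        return False, "data for memleak test is insufficient"
    if memdiff < tolerance:
        return False, ""
    return True, "memleak detected, memory went from {:f} to {:f} in {:d} days".format(
        originalmem, finalmem, elapsed_days
    )
```

With a 10% tolerance, growth from 1000 MB to 1050 MB passes, while growth from 1000 MB to 1200 MB is flagged as a leak.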
+
+[docs]
+class FakeTest(SystemTestsCommon):
+    """
+    Inheritors of the FakeTest class are intended to test the testing infrastructure itself.
+
+    All subclasses of FakeTest must have names beginning with "TEST" so that
+    find_system_test in utils.py can locate them.
+    """
+
+    def __init__(self, case, expected=None, **kwargs):
+        super(FakeTest, self).__init__(case, expected=expected, **kwargs)
+        self._script = None
+        self._requires_exe = False
+        self._case._non_local = True
+        self._original_exe = self._case.get_value("run_exe")
+
+    def _set_script(self, script, requires_exe=False):
+        self._script = script
+        self._requires_exe = requires_exe
+
+    def _resetup_case(self, phase, reset=False):
+        run_exe = self._case.get_value("run_exe")
+        super(FakeTest, self)._resetup_case(phase, reset=reset)
+        self._case.set_value("run_exe", run_exe)
+
+[docs] + def build_phase(self, sharedlib_only=False, model_only=False): + if self._requires_exe: + super(FakeTest, self).build_phase( + sharedlib_only=sharedlib_only, model_only=model_only + ) + + if not sharedlib_only: + exeroot = self._case.get_value("EXEROOT") + modelexe = os.path.join(exeroot, "fake.exe") + self._case.set_value("run_exe", modelexe) + + with open(modelexe, "w") as f: + f.write("#!/bin/bash\n") + f.write(self._script) + + os.chmod(modelexe, 0o755) + + if not self._requires_exe: + build.post_build(self._case, [], build_complete=True) + else: + expect( + os.path.exists(modelexe), + "Could not find expected file {}".format(modelexe), + ) + logger.info( + "FakeTest build_phase complete {} {}".format( + modelexe, self._requires_exe + ) + )
+ + +
+[docs] + def run_indv( + self, + suffix="base", + st_archive=False, + submit_resubmits=None, + keep_init_generated_files=False, + ): + mpilib = self._case.get_value("MPILIB") + # This flag is needed by mpt to run a script under mpiexec + if mpilib == "mpt": + os.environ["MPI_SHEPHERD"] = "true" + super(FakeTest, self).run_indv( + suffix, st_archive=st_archive, submit_resubmits=submit_resubmits + )
+
+ + + +
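The pattern used by `FakeTest.build_phase` — write a small bash script in place of the model executable, mark it executable, and point `run_exe` at it — can be sketched outside CIME as follows (hypothetical helper name and temp paths, not the CIME API; assumes a POSIX system with bash):

```python
import os
import subprocess
import tempfile

def write_fake_exe(exeroot, script_body):
    """Write an executable bash wrapper, as FakeTest.build_phase does."""
    modelexe = os.path.join(exeroot, "fake.exe")
    with open(modelexe, "w") as f:
        f.write("#!/bin/bash\n")
        f.write(script_body)
    os.chmod(modelexe, 0o755)  # rwxr-xr-x, matching the code above
    return modelexe

# Usage sketch: the fake "model" just emits the success marker the tests look for
exeroot = tempfile.mkdtemp()
exe = write_fake_exe(exeroot, "echo SUCCESSFUL TERMINATION\n")
out = subprocess.run([exe], capture_output=True, text=True).stdout
```

This is what lets the TESTRUN* classes below exercise pass/fail/slow paths without building a real model.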
+[docs] +class TESTRUNPASS(FakeTest): +
+[docs] + def build_phase(self, sharedlib_only=False, model_only=False): + rundir = self._case.get_value("RUNDIR") + cimeroot = self._case.get_value("CIMEROOT") + case = self._case.get_value("CASE") + script = """ +echo Insta pass +echo SUCCESSFUL TERMINATION > {rundir}/{log}.log.$LID +cp {root}/scripts/tests/cpl.hi1.nc.test {rundir}/{case}.cpl.hi.0.nc +""".format( + rundir=rundir, log=self._cpllog, root=cimeroot, case=case + ) + self._set_script(script) + FakeTest.build_phase(self, sharedlib_only=sharedlib_only, model_only=model_only)
+
+ + + +
+[docs] +class TESTRUNDIFF(FakeTest): + """ + You can generate a diff with this test as follows: + 1) Run the test and generate a baseline + 2) set TESTRUNDIFF_ALTERNATE environment variable to TRUE + 3) Re-run the same test from step 1 but do a baseline comparison instead of generation + 3.a) This should give you a DIFF + """ + +
+[docs] + def build_phase(self, sharedlib_only=False, model_only=False): + rundir = self._case.get_value("RUNDIR") + cimeroot = self._case.get_value("CIMEROOT") + case = self._case.get_value("CASE") + script = """ +echo Insta pass +echo SUCCESSFUL TERMINATION > {rundir}/{log}.log.$LID +if [ -z "$TESTRUNDIFF_ALTERNATE" ]; then + cp {root}/scripts/tests/cpl.hi1.nc.test {rundir}/{case}.cpl.hi.0.nc +else + cp {root}/scripts/tests/cpl.hi2.nc.test {rundir}/{case}.cpl.hi.0.nc +fi +""".format( + rundir=rundir, log=self._cpllog, root=cimeroot, case=case + ) + self._set_script(script) + FakeTest.build_phase(self, sharedlib_only=sharedlib_only, model_only=model_only)
+
+ + + +
+[docs] +class TESTRUNDIFFRESUBMIT(TESTRUNDIFF): + pass
+ + + +
+[docs] +class TESTTESTDIFF(FakeTest): +
+[docs] + def build_phase(self, sharedlib_only=False, model_only=False): + rundir = self._case.get_value("RUNDIR") + cimeroot = self._case.get_value("CIMEROOT") + case = self._case.get_value("CASE") + script = """ +echo Insta pass +echo SUCCESSFUL TERMINATION > {rundir}/{log}.log.$LID +cp {root}/scripts/tests/cpl.hi1.nc.test {rundir}/{case}.cpl.hi.0.nc +cp {root}/scripts/tests/cpl.hi2.nc.test {rundir}/{case}.cpl.hi.0.nc.rest +""".format( + rundir=rundir, log=self._cpllog, root=cimeroot, case=case + ) + self._set_script(script) + super(TESTTESTDIFF, self).build_phase( + sharedlib_only=sharedlib_only, model_only=model_only + )
+ + +
+[docs] + def run_phase(self): + super(TESTTESTDIFF, self).run_phase() + self._component_compare_test("base", "rest")
+
+ + + +
+[docs] +class TESTRUNFAIL(FakeTest): +
+[docs] + def build_phase(self, sharedlib_only=False, model_only=False): + rundir = self._case.get_value("RUNDIR") + cimeroot = self._case.get_value("CIMEROOT") + case = self._case.get_value("CASE") + script = """ +if [ -z "$TESTRUNFAIL_PASS" ]; then + echo Insta fail + echo model failed > {rundir}/{log}.log.$LID + exit -1 +else + echo Insta pass + echo SUCCESSFUL TERMINATION > {rundir}/{log}.log.$LID + cp {root}/scripts/tests/cpl.hi1.nc.test {rundir}/{case}.cpl.hi.0.nc +fi +""".format( + rundir=rundir, log=self._cpllog, root=cimeroot, case=case + ) + self._set_script(script) + FakeTest.build_phase(self, sharedlib_only=sharedlib_only, model_only=model_only)
+
+ + + +
+
+[docs]
+class TESTRUNFAILRESET(TESTRUNFAIL):
+    """This fake test can fail for two reasons:
+    1. As in the TESTRUNFAIL test: If the environment variable TESTRUNFAIL_PASS is *not* set
+    2. Even if that environment variable *is* set, it will fail if STOP_N differs from the
+       original value
+
+    The purpose of (2) is to ensure that the test's values get properly reset if the test
+    is rerun after an initial failure.
+    """
+
+[docs] + def run_indv( + self, + suffix="base", + st_archive=False, + submit_resubmits=None, + keep_init_generated_files=False, + ): + # Make sure STOP_N matches the original value for the case. This tests that STOP_N + # has been reset properly if we are rerunning the test after a failure. + env_test = EnvTest(self._get_caseroot()) + stop_n = self._case.get_value("STOP_N") + stop_n_test = int(env_test.get_test_parameter("STOP_N")) + expect( + stop_n == stop_n_test, + "Expect STOP_N to match original ({} != {})".format(stop_n, stop_n_test), + ) + + # Now modify STOP_N so that an error will be generated if it isn't reset properly + # upon a rerun + self._case.set_value("STOP_N", stop_n + 1) + + super(TESTRUNFAILRESET, self).run_indv( + suffix=suffix, st_archive=st_archive, submit_resubmits=submit_resubmits + )
+
+ + + +
+[docs] +class TESTRUNFAILEXC(TESTRUNPASS): +
+[docs] + def run_phase(self): + raise RuntimeError("Exception from run_phase")
+
+ + + +
+[docs] +class TESTRUNSTARCFAIL(TESTRUNPASS): + def _st_archive_case_test(self): + raise RuntimeError("Exception from st archive")
+ + + +
+[docs] +class TESTBUILDFAIL(TESTRUNPASS): +
+[docs] + def build_phase(self, sharedlib_only=False, model_only=False): + if "TESTBUILDFAIL_PASS" in os.environ: + TESTRUNPASS.build_phase(self, sharedlib_only, model_only) + else: + if not sharedlib_only: + blddir = self._case.get_value("EXEROOT") + bldlog = os.path.join( + blddir, + "{}.bldlog.{}".format(get_model(), get_timestamp("%y%m%d-%H%M%S")), + ) + with open(bldlog, "w") as fd: + fd.write("BUILD FAIL: Intentional fail for testing infrastructure") + + expect(False, "BUILD FAIL: Intentional fail for testing infrastructure")
+
+ + + +
+[docs] +class TESTBUILDFAILEXC(FakeTest): + def __init__(self, case, **kwargs): + FakeTest.__init__(self, case, **kwargs) + raise RuntimeError("Exception from init")
+ + + +
+[docs] +class TESTRUNUSERXMLCHANGE(FakeTest): +
+[docs] + def build_phase(self, sharedlib_only=False, model_only=False): + caseroot = self._case.get_value("CASEROOT") + modelexe = self._case.get_value("run_exe") + new_stop_n = self._case.get_value("STOP_N") * 2 + + script = """ +cd {caseroot} +./xmlchange --file env_test.xml STOP_N={stopn} +./xmlchange RESUBMIT=1,STOP_N={stopn},CONTINUE_RUN=FALSE,RESUBMIT_SETS_CONTINUE_RUN=FALSE +cd - +{originalexe} "$@" +cd {caseroot} +./xmlchange run_exe={modelexe} +sleep 5 +""".format( + originalexe=self._original_exe, + caseroot=caseroot, + modelexe=modelexe, + stopn=str(new_stop_n), + ) + self._set_script(script, requires_exe=True) + FakeTest.build_phase(self, sharedlib_only=sharedlib_only, model_only=model_only)
+ + +
+[docs] + def run_phase(self): + self.run_indv(submit_resubmits=True)
+
+ + + +
+[docs] +class TESTRUNSLOWPASS(FakeTest): +
+[docs] + def build_phase(self, sharedlib_only=False, model_only=False): + rundir = self._case.get_value("RUNDIR") + cimeroot = self._case.get_value("CIMEROOT") + case = self._case.get_value("CASE") + script = """ +sleep 300 +echo Slow pass +echo SUCCESSFUL TERMINATION > {rundir}/{log}.log.$LID +cp {root}/scripts/tests/cpl.hi1.nc.test {rundir}/{case}.cpl.hi.0.nc +""".format( + rundir=rundir, log=self._cpllog, root=cimeroot, case=case + ) + self._set_script(script) + FakeTest.build_phase(self, sharedlib_only=sharedlib_only, model_only=model_only)
+
+ + + +
+[docs] +class TESTMEMLEAKFAIL(FakeTest): +
+[docs] + def build_phase(self, sharedlib_only=False, model_only=False): + rundir = self._case.get_value("RUNDIR") + cimeroot = self._case.get_value("CIMEROOT") + case = self._case.get_value("CASE") + testfile = os.path.join(cimeroot, "scripts", "tests", "cpl.log.failmemleak.gz") + script = """ +echo Insta pass +gunzip -c {testfile} > {rundir}/{log}.log.$LID +cp {root}/scripts/tests/cpl.hi1.nc.test {rundir}/{case}.cpl.hi.0.nc +""".format( + testfile=testfile, rundir=rundir, log=self._cpllog, root=cimeroot, case=case + ) + self._set_script(script) + FakeTest.build_phase(self, sharedlib_only=sharedlib_only, model_only=model_only)
+
+ + + +
+[docs] +class TESTMEMLEAKPASS(FakeTest): +
+[docs] + def build_phase(self, sharedlib_only=False, model_only=False): + rundir = self._case.get_value("RUNDIR") + cimeroot = self._case.get_value("CIMEROOT") + case = self._case.get_value("CASE") + testfile = os.path.join(cimeroot, "scripts", "tests", "cpl.log.passmemleak.gz") + script = """ +echo Insta pass +gunzip -c {testfile} > {rundir}/{log}.log.$LID +cp {root}/scripts/tests/cpl.hi1.nc.test {rundir}/{case}.cpl.hi.0.nc +""".format( + testfile=testfile, rundir=rundir, log=self._cpllog, root=cimeroot, case=case + ) + self._set_script(script) + FakeTest.build_phase(self, sharedlib_only=sharedlib_only, model_only=model_only)
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/system_tests_compare_n.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/system_tests_compare_n.html new file mode 100644 index 00000000000..0ab4a6547ba --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/system_tests_compare_n.html @@ -0,0 +1,698 @@ + + + + + + CIME.SystemTests.system_tests_compare_n — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.SystemTests.system_tests_compare_n

+"""
+Base class for CIME system tests that involve doing multiple runs and comparing the base run (index=0)
+with the subsequent runs (indices=1..N-1).
+
+NOTE: Below is the flow of a multisubmit test.
+Non-batch:
+case_submit -> case_run     # PHASE 1
+            -> case_run     # PHASE 2
+            ...
+            -> case_run     # PHASE N
+
+batch:
+case_submit -> case_run     # PHASE 1
+case_run    -> case_submit
+case_submit -> case_run     # PHASE 2
+...
+case_submit -> case_run     # PHASE N
+
+In the __init__ method for your test, you MUST call
+    SystemTestsCompareN.__init__
+See the documentation of that method for details.
+
+Classes that inherit from this are REQUIRED to implement the following method:
+
+(1) _case_setup
+    This method will be called to set up case i, where i==0 corresponds to the base case
+    and i=={1,..N-1} corresponds to subsequent runs to be compared with the base.
+
+In addition, they MAY require the following methods:
+
+(1) _common_setup
+    This method will be called to set up all cases. It should contain any setup
+    that's needed in all cases. This is called before _case_setup.
+
+(2) _case_custom_prerun_action(self, i):
+    Use this to do arbitrary actions immediately before running case i
+
+(3) _case_custom_postrun_action(self, i):
+    Use this to do arbitrary actions immediately after running case i
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.system_tests_common import SystemTestsCommon, fix_single_exe_case
+from CIME.case import Case
+from CIME.config import Config
+from CIME.test_status import *
+
+import shutil, os, glob
+
+logger = logging.getLogger(__name__)
+
+
+
+
+[docs]
+class SystemTestsCompareN(SystemTestsCommon):
+    def __init__(
+        self,
+        case,
+        N=2,
+        separate_builds=False,
+        run_suffixes=None,
+        run_descriptions=None,
+        multisubmit=False,
+        ignore_fieldlist_diffs=False,
+        dry_run=False,
+        **kwargs
+    ):
+        """
+        Initialize a SystemTestsCompareN object. Individual test cases that
+        inherit from SystemTestsCompareN MUST call this __init__ method.
+
+        Args:
+            case: case object passed to the __init__ method of the individual
+                test. This is the main case associated with the test.
+            N (int): number of test cases including the base case.
+            separate_builds (bool): Whether separate builds are needed for the
+                cases. If False, cases 1..N-1 use the case[0] executable.
+            run_suffixes (list of str, optional): List of suffixes appended to the case names.
+                Defaults to ["base", "subsq_1", "subsq_2", .. "subsq_N-1"]. Each
+                suffix must be unique.
+            run_descriptions (list of str, optional): Descriptions printed to the log file
+                for each case when starting the runs. Defaults to ['']*N.
+            multisubmit (bool): Do base and subsequent runs as different submissions.
+                Designed for tests with RESUBMIT=1
+            ignore_fieldlist_diffs (bool): If True, then: If the cases differ only in
+                their field lists (i.e., all shared fields are bit-for-bit, but one case
+                has some diagnostic fields that are missing from the base case), treat
+                the cases as identical. (This is needed for tests where one case
+                exercises an option that produces extra diagnostic fields.) 
+ """ + SystemTestsCommon.__init__(self, case, **kwargs) + + self._separate_builds = separate_builds + self._ignore_fieldlist_diffs = ignore_fieldlist_diffs + + expect(N > 1, "Number of cases must be greater than 1.") + self._cases = [None] * N + self.N = N + + if run_suffixes: + expect( + isinstance(run_suffixes, list) + and all([isinstance(sfx, str) for sfx in run_suffixes]), + "run_suffixes must be a list of strings", + ) + expect( + len(run_suffixes) == self.N, + "run_suffixes list must include {} strings".format(self.N), + ) + expect( + len(set(run_suffixes)) == len(run_suffixes), + "each suffix in run_suffixes must be unique", + ) + self._run_suffixes = [sfx.rstrip() for sfx in run_suffixes] + else: + self._run_suffixes = ["base"] + ["subsq_{}".format(i) for i in range(1, N)] + + if run_descriptions: + expect( + isinstance(run_descriptions, list) + and all([isinstance(dsc, str) for dsc in run_descriptions]), + "run_descriptions must be a list of strings", + ) + expect( + len(run_descriptions) == self.N, + "run_descriptions list must include {} strings".format(self.N), + ) + self._run_descriptions = run_descriptions + else: + self._run_descriptions = [""] * self.N + + # Set the base case for referencing purposes + self._cases[0] = self._case + self._caseroots = self._get_caseroots() + + if not dry_run: + self._setup_cases_if_not_yet_done() + + self._multisubmit = ( + multisubmit and self._cases[0].get_value("BATCH_SYSTEM") != "none" + ) + + # ======================================================================== + # Methods that MUST be implemented by specific tests that inherit from this + # base class + # ======================================================================== + + def _case_setup(self, i): + """ + This method will be called to set up case[i], where case[0] is the base case. + + This should be written to refer to self._case: this object will point to + case[i] at the point that this is called. 
+        """
+        raise NotImplementedError
+
+    # ========================================================================
+    # Methods that MAY be implemented by specific tests that inherit from this
+    # base class, if they have any work to do in these methods
+    # ========================================================================
+
+    def _common_setup(self):
+        """
+        This method will be called to set up all cases. It should contain any setup
+        that's needed in all cases.
+
+        This should be written to refer to self._case: It will be called once for
+        each case, with self._case pointing to that case.
+        """
+
+    def _case_custom_prerun_action(self, i):
+        """
+        Use to do arbitrary actions immediately before running case i, i=0..N-1
+        """
+
+    def _case_custom_postrun_action(self, i):
+        """
+        Use to do arbitrary actions immediately after running case i, i=0..N-1
+        """
+
+    # ========================================================================
+    # Main public methods
+    # ========================================================================
+
+[docs] + def build_phase(self, sharedlib_only=False, model_only=False): + # Subtle issue: base case is already in a writeable state since it tends to be opened + # with a with statement in all the API entrances in CIME. subsequent cases were + # created via clone, not a with statement, so it's not in a writeable state, + # so we need to use a with statement here to put it in a writeable state. + config = Config.instance() + + for i in range(1, self.N): + with self._cases[i]: + if self._separate_builds: + self._activate_case(0) + self.build_indv( + sharedlib_only=sharedlib_only, model_only=model_only + ) + self._activate_case(i) + # Although we're doing separate builds, it still makes sense + # to share the sharedlibroot area with case1 so we can reuse + # pieces of the build from there. + if config.common_sharedlibroot: + # We need to turn off this change for E3SM because it breaks + # the MPAS build system + ## TODO: ^this logic mimics what's done in SystemTestsCompareTwo + # Confirm this is needed in SystemTestsCompareN as well. + self._cases[i].set_value( + "SHAREDLIBROOT", self._cases[0].get_value("SHAREDLIBROOT") + ) + + self.build_indv( + sharedlib_only=sharedlib_only, model_only=model_only + ) + else: + self._activate_case(0) + self.build_indv( + sharedlib_only=sharedlib_only, model_only=model_only + ) + # pio_typename may be changed during the build if the default is not a + # valid value for this build, update case i to reflect this change + for comp in self._cases[i].get_values("COMP_CLASSES"): + comp_pio_typename = "{}_PIO_TYPENAME".format(comp) + self._cases[i].set_value( + comp_pio_typename, + self._cases[0].get_value(comp_pio_typename), + ) + + # The following is needed when _case_two_setup has a case_setup call + # despite sharing the build (e.g., to change NTHRDS) + self._cases[i].set_value("BUILD_COMPLETE", True)
+ + +
+
+[docs]
+    def run_phase(self, success_change=False):  # pylint: disable=arguments-differ
+        """
+        Runs all phases of the N-phase test and compares base results with subsequent ones.
+        If success_change is True, success requires some files to be different.
+        """
+        is_first_run = self._cases[0].get_value("IS_FIRST_RUN")
+
+        # On a batch system with a multisubmit test, "RESUBMIT" is used to track
+        # which phase is being run. By the end of the test it equals 0. If the
+        # test fails in a way where the RUN_PHASE is PEND, then "RESUBMIT"
+        # does not get reset to 1 on a rerun and the first phase is skipped,
+        # causing the COMPARE_PHASE to fail. This ensures that "RESUBMIT" will
+        # get reset if the test state is not correct for a rerun.
+        # NOTE: "IS_FIRST_RUN" is reset in "case_submit.py"
+        ### todo: confirm below code block
+        if (
+            is_first_run
+            and self._multisubmit
+            and self._cases[0].get_value("RESUBMIT") == 0
+        ):
+            self._resetup_case(RUN_PHASE, reset=True)
+
+        base_phase = (
+            self._cases[0].get_value("RESUBMIT") == 1
+        )  # Only relevant for multi-submit tests
+        run_type = self._cases[0].get_value("RUN_TYPE")
+
+        logger.info(
+            "_multisubmit {} first phase {}".format(self._multisubmit, base_phase)
+        )
+
+        # First run
+        if not self._multisubmit or base_phase:
+            logger.info("Doing first run: " + self._run_descriptions[0])
+
+            # Add a PENDing compare phase so that we'll notice if the comparison
+            # phase doesn't run. 
+ compare_phase_name = "{}_{}_{}".format( + COMPARE_PHASE, self._run_suffixes[1], self._run_suffixes[0] + ) + with self._test_status: + self._test_status.set_status(compare_phase_name, TEST_PEND_STATUS) + + self._activate_case(0) + self._case_custom_prerun_action(0) + self.run_indv(suffix=self._run_suffixes[0]) + self._case_custom_postrun_action(0) + + # Subsequent runs + if not self._multisubmit or not base_phase: + # Subtle issue: case1 is already in a writeable state since it tends to be opened + # with a with statement in all the API entrances in CIME. subsq cases were created + # via clone, not a with statement, so it's not in a writeable state, so we need to + # use a with statement here to put it in a writeable state. + for i in range(1, self.N): + with self._cases[i]: + logger.info("Doing run {}: ".format(i) + self._run_descriptions[i]) + self._activate_case(i) + # This assures that case i namelists are populated + self._skip_pnl = False + # we need to make sure run i is properly staged. + if run_type != "startup": + self._cases[i].check_case() + + self._case_custom_prerun_action(i) + self.run_indv(suffix=self._run_suffixes[i]) + self._case_custom_postrun_action(i) + # Compare results + self._activate_case(0) + self._link_to_subsq_case_output(i) + self._component_compare_test( + self._run_suffixes[i], + self._run_suffixes[0], + success_change=success_change, + ignore_fieldlist_diffs=self._ignore_fieldlist_diffs, + )
+ + + # ======================================================================== + # Private methods + # ======================================================================== + + def _get_caseroots(self): + """ + Determines and returns caseroot for each cases and returns a list + """ + casename_base = self._cases[0].get_value("CASE") + caseroot_base = self._get_caseroot() + + return [caseroot_base] + [ + os.path.join(caseroot_base, "case{}".format(i), casename_base) + for i in range(1, self.N) + ] + + def _get_subsq_output_root(self, i): + """ + Determines and returns cime_output_root for case i where i!=0 + + Assumes that self._case1 is already set to point to the case1 object + """ + # Since subsequent cases have the same name as base, their CIME_OUTPUT_ROOT + # must also be different, so that anything put in + # $CIME_OUTPUT_ROOT/$CASE/ is not accidentally shared between + # cases. (Currently nothing is placed here, but this + # helps prevent future problems.) + + expect(i != 0, "ERROR: cannot call _get_subsq_output_root for the base class") + + output_root_i = os.path.join( + self._cases[0].get_value("CIME_OUTPUT_ROOT"), + self._cases[0].get_value("CASE"), + "case{}_output_root".format(i), + ) + return output_root_i + + def _get_subsq_case_exeroot(self, i): + """ + Gets exeroot for case i. + + Returns None if we should use the default value of exeroot. + """ + + expect(i != 0, "ERROR: cannot call _get_subsq_case_exeroot for the base class") + + if self._separate_builds: + # subsequent case's EXEROOT needs to be somewhere that (1) is unique + # to this case (considering that all cases have the + # same case name), and (2) does not have too long of a path + # name (because too-long paths can make some compilers + # fail). 
+ base_exeroot = self._cases[0].get_value("EXEROOT") + case_i_exeroot = os.path.join(base_exeroot, "case{}bld".format(i)) + else: + # Use default exeroot + case_i_exeroot = None + return case_i_exeroot + + def _get_subsq_case_rundir(self, i): + """ + Gets rundir for case i. + """ + + expect(i != 0, "ERROR: cannot call _get_subsq_case_rundir for the base class") + + # subsequent case's RUNDIR needs to be somewhere that is unique to this + # case (considering that all cases have the same case + # name). Note that the location below is symmetrical to the + # location of case's EXEROOT set in _get_subsq_case_exeroot. + base_rundir = self._cases[0].get_value("RUNDIR") + case_i_rundir = os.path.join(base_rundir, "case{}run".format(i)) + return case_i_rundir + + def _setup_cases_if_not_yet_done(self): + """ + Determines if subsequent cases already exist on disk. If they do, this method + creates the self.cases entries pointing to the case directories. If they + don't exist, then this method creates cases[i:1..N-1] as a clone of cases[0], and + sets the self.cases objects appropriately. + + This also does the setup for all cases including the base case. + + Assumes that the following variables are already set in self: + _caseroots + _cases[0] + + Sets self.cases[i:1..N-1] + """ + + # Use the existence of the cases[N-1] directory to signal whether we have + # done the necessary test setup for all cases: When we initially create + # the last case directory, we set up all cases; then, if we find that + # the last case directory already exists, we assume that the setup has + # already been done for all cases. (In some cases it could be problematic + # to redo the test setup when it's not needed - e.g., by appending things + # to user_nl files multiple times. This is why we want to make sure to just + # do the test setup once.) 
+ if os.path.exists(self._caseroots[-1]): + for i in range(1, self.N): + caseroot_i = self._caseroots[i] + self._cases[i] = self._case_from_existing_caseroot(caseroot_i) + else: + # Create the subsequent cases by cloning the base case. + for i in range(1, self.N): + self._cases[i] = self._cases[0].create_clone( + self._caseroots[i], + keepexe=not self._separate_builds, + cime_output_root=self._get_subsq_output_root(i), + exeroot=self._get_subsq_case_exeroot(i), + rundir=self._get_subsq_case_rundir(i), + ) + self._write_info_to_subsq_case_output_root(i) + + # Set up all cases, including the base case. + for i in range(0, self.N): + caseroot_i = self._caseroots[i] + try: + self._setup_case(i) + except BaseException: + # If a problem occurred in setting up the test case i, it's + # important to remove the case i directory: If it's kept around, + # that would signal that test setup was done successfully, and + # thus doesn't need to be redone - which is not the case. Of + # course, we'll likely be left in an inconsistent state in this + # case, but if we didn't remove the case i directory, the next + # re-build of the test would think, "okay, setup is done, I can + # move on to the build", which would be wrong. + if os.path.isdir(caseroot_i): + shutil.rmtree(caseroot_i) + self._activate_case(0) + logger.warning( + "WARNING: Test case setup failed. Case {} has been removed, " + "but the main case may be in an inconsistent state. 
" + "If you want to rerun this test, you should create " + "a new test rather than trying to rerun this one.".format(i) + ) + raise + + def _case_from_existing_caseroot(self, caseroot): + """ + Returns a Case object from an existing caseroot directory + + Args: + caseroot (str): path to existing caseroot + """ + return Case(case_root=caseroot, read_only=False) + + def _activate_case(self, i): + """ + Make case i active for upcoming calls + """ + os.chdir(self._caseroots[i]) + self._set_active_case(self._cases[i]) + + def _write_info_to_subsq_case_output_root(self, i): + """ + Writes a file with some helpful information to case[i]'s + output_root. + + The motivation here is two-fold: + + (1) Currently, case i's output_root directory is empty. + This could be confusing. + + (2) For users who don't know where to look, it could be hard to + find case i's bld and run directories. It is somewhat easier + to stumble upon case i output_root, so we put a file there + pointing them to the right place. + """ + + readme_path = os.path.join(self._get_subsq_output_root(i), "README") + try: + with open(readme_path, "w") as fd: + fd.write("This directory is typically empty.\n\n") + fd.write( + "case's run dir is here: {}\n\n".format( + self._cases[i].get_value("RUNDIR") + ) + ) + fd.write( + "case's bld dir is here: {}\n".format( + self._cases[i].get_value("EXEROOT") + ) + ) + except IOError: + # It's not a big deal if we can't write the README file + # (e.g., because the directory doesn't exist or isn't + # writeable; note that the former may be the case in unit + # tests). So just continue merrily on our way if there was a + # problem. + pass + + def _setup_case(self, i): + """ + Does all test-specific set up for the test case i. 
+        """
+
+        # Set up case i
+        self._activate_case(i)
+        self._common_setup()
+        self._case_setup(i)
+        fix_single_exe_case(self._cases[i])
+        if i == 0:
+            # Flush the case so that, if errors occur later, then at least the base case is
+            # in a correct, post-setup state. This is important because the mere
+            # existence of a cases[-1] directory signals that setup is done. So if the
+            # build fails and the user rebuilds, setup won't be redone - so it's
+            # important to ensure that the results of setup are flushed to disk.
+            #
+            # Note that the base case will be in its post-setup state even if setup of a
+            # subsequent case fails.
+            self._case.flush()
+            # This ensures that the base case namelists are populated
+            # and creates the case.test script
+            self._case.case_setup(test_mode=False, reset=True)
+            fix_single_exe_case(self._case)
+        else:
+            # Go back to the base case to ensure that's where we are for any following code
+            self._activate_case(0)
+
+    def _link_to_subsq_case_output(self, i):
+        """
+        Looks for all files in case i's rundir matching the pattern
+        casename_i*.nc.suffix_i
+
+        For each file found, makes a link in the base rundir pointing to this file; the
+        link is renamed so that the original occurrence of casename_i is replaced
+        with the base casename.
+
+        For example:
+
+        /glade/scratch/sacks/somecase/run/somecase.clm2.h0.nc.run2 ->
+        /glade/scratch/sacks/somecase.run2/run/somecase.run2.clm2.h0.nc.run2
+
+        If the destination link already exists and points to the correct
+        location, it is maintained as is. However, an exception will be raised
+        if the destination link is not exactly as it should be: we avoid
+        overwriting some existing file or link. 
+ """ + + expect( + i != 0, "ERROR: cannot call _link_to_subsq_case_output for the base class" + ) + + base_casename = self._cases[0].get_value("CASE") + subsq_casename = self._cases[i].get_value("CASE") + base_rundir = self._cases[0].get_value("RUNDIR") + subsq_rundir = self._cases[i].get_value("RUNDIR") + + pattern = "{}*.nc.{}".format(subsq_casename, self._run_suffixes[i]) + subsq_case_files = glob.glob(os.path.join(subsq_rundir, pattern)) + for one_file in subsq_case_files: + file_basename = os.path.basename(one_file) + modified_basename = file_basename.replace(subsq_casename, base_casename, 1) + one_link = os.path.join(base_rundir, modified_basename) + if os.path.islink(one_link) and os.readlink(one_link) == one_file: + # Link is already set up correctly: do nothing + # (os.symlink raises an exception if you try to replace an + # existing file) + pass + else: + os.symlink(one_file, one_link)
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/system_tests_compare_two.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/system_tests_compare_two.html new file mode 100644 index 00000000000..dc9d2faa783 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/system_tests_compare_two.html @@ -0,0 +1,738 @@ + + + + + + CIME.SystemTests.system_tests_compare_two — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.SystemTests.system_tests_compare_two

+"""
+Base class for CIME system tests that involve doing two runs and comparing their
+output.
+
+NOTE: Below is the flow of a multisubmit test.
+Non-batch:
+case_submit -> case_run     # PHASE 1
+            -> case_run     # PHASE 2
+
+batch:
+case_submit -> case_run     # PHASE 1
+case_run    -> case_submit
+case_submit -> case_run     # PHASE 2
+
+In the __init__ method for your test, you MUST call
+    SystemTestsCompareTwo.__init__
+See the documentation of that method for details.
+
+Classes that inherit from this are REQUIRED to implement the following methods:
+
+(1) _case_one_setup
+    This method will be called to set up case 1, the "base" case
+
+(2) _case_two_setup
+    This method will be called to set up case 2, the "test" case
+
+In addition, they MAY implement the following methods:
+
+(1) _common_setup
+    This method will be called to set up both cases. It should contain any setup
+    that's needed in both cases. This is called before _case_one_setup or
+    _case_two_setup.
+
+(2) _case_one_custom_prerun_action(self):
+    Use this to do arbitrary actions immediately before running case one
+
+(3) _case_two_custom_prerun_action(self):
+    Use this to do arbitrary actions immediately before running case two
+
+(4) _case_one_custom_postrun_action(self):
+    Use this to do arbitrary actions immediately after running case one
+
+(5) _case_two_custom_postrun_action(self):
+    Use this to do arbitrary actions immediately after running case two
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.SystemTests.system_tests_common import SystemTestsCommon, fix_single_exe_case
+from CIME.case import Case
+from CIME.config import Config
+from CIME.test_status import *
+
+import shutil, os, glob
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class SystemTestsCompareTwo(SystemTestsCommon): + def __init__( + self, + case, + separate_builds=False, + run_two_suffix="test", + run_one_description="", + run_two_description="", + multisubmit=False, + ignore_fieldlist_diffs=False, + case_two_keep_init_generated_files=False, + dry_run=False, + **kwargs + ): + """ + Initialize a SystemTestsCompareTwo object. Individual test cases that + inherit from SystemTestsCompareTwo MUST call this __init__ method. + + Args: + case: case object passed to __init__ method of individual + test. This is the main case associated with the test. + separate_builds (bool): Whether separate builds are needed for the + two cases. If False, case2 uses the case1 executable. + run_two_suffix (str, optional): Suffix appended to the case name for + the second run. Defaults to 'test'. This can be anything other + than 'base'. + run_one_description (str, optional): Description printed to log file + when starting the first run. Defaults to ''. + run_two_description (str, optional): Description printed to log file + when starting the second run. Defaults to ''. + multisubmit (bool): Do first and second runs as different submissions. + Designed for tests with RESUBMIT=1 + ignore_fieldlist_diffs (bool): If True, then: If the two cases differ only in + their field lists (i.e., all shared fields are bit-for-bit, but one case + has some diagnostic fields that are missing from the other case), treat + the two cases as identical. (This is needed for tests where one case + exercises an option that produces extra diagnostic fields.) + case_two_keep_init_generated_files (bool): If True, then do NOT remove the + init_generated_files subdirectory of the case2 run directory before + running case2. This should typically be kept at its default (False) so + that rerunning a test gives the same behavior as in the initial run rather + than reusing init_generated_files in the second run.
However, this option + is provided for the sake of specific tests, e.g., a test of the behavior + of running with init_generated_files in place. + """ + SystemTestsCommon.__init__(self, case, **kwargs) + + self._separate_builds = separate_builds + self._ignore_fieldlist_diffs = ignore_fieldlist_diffs + self._case_two_keep_init_generated_files = case_two_keep_init_generated_files + + # run_one_suffix is just used as the suffix for the netcdf files + # produced by the first case; we may eventually remove this, but for now + # it is needed by the various component_*.sh scripts. run_two_suffix is + # also used as the suffix for netcdf files, but more importantly is used + # to create the case name for the clone case. + # + # NOTE(wjs, 2016-08-03) It is currently CRITICAL for run_one_suffix to + # be 'base', because this is assumed for baseline comparison and + # generation. Once that assumption is relaxed, then run_one_suffix can + # be set in the call to the constructor just like run_two_suffix + # currently is. Or, if these tools are rewritten to work without any + # suffix, then run_one_suffix can be removed entirely. 
+ self._run_one_suffix = "base" + self._run_two_suffix = run_two_suffix.rstrip() + expect( + self._run_two_suffix != self._run_one_suffix, + "ERROR: Must have different suffixes for run one and run two", + ) + + self._run_one_description = run_one_description + self._run_two_description = run_two_description + + # Save case for first run so we can return to it if we switch self._case + # to point to self._case2 + self._case1 = self._case + self._caseroot1 = self._get_caseroot() + + self._caseroot2 = self._get_caseroot2() + # Initialize self._case2; it will get set to its true value in + # _setup_cases_if_not_yet_done + self._case2 = None + + # Prevent additional setup_case calls when detecting support for `--single-exe` + if not dry_run: + self._setup_cases_if_not_yet_done() + + self._multisubmit = ( + multisubmit and self._case1.get_value("BATCH_SYSTEM") != "none" + ) + + # ======================================================================== + # Methods that MUST be implemented by specific tests that inherit from this + # base class + # ======================================================================== + + def _case_one_setup(self): + """ + This method will be called to set up case 1, the "base" case. + + This should be written to refer to self._case: this object will point to + case1 at the point that this is called. + """ + raise NotImplementedError + + def _case_two_setup(self): + """ + This method will be called to set up case 2, the "test" case + + This should be written to refer to self._case: this object will point to + case2 at the point that this is called. 
+ """ + raise NotImplementedError + + # ======================================================================== + # Methods that MAY be implemented by specific tests that inherit from this + # base class, if they have any work to do in these methods + # ======================================================================== + + def _common_setup(self): + """ + This method will be called to set up both cases. It should contain any setup + that's needed in both cases. This is called before _case_one_setup or + _case_two_setup. + + This should be written to refer to self._case: It will be called once with + self._case pointing to case1, and once with self._case pointing to case2. + """ + + def _case_one_custom_prerun_action(self): + """ + Use to do arbitrary actions immediately before running case one + """ + + def _case_two_custom_prerun_action(self): + """ + Use to do arbitrary actions immediately before running case two + """ + + def _case_one_custom_postrun_action(self): + """ + Use to do arbitrary actions immediately after running case one + """ + + def _case_two_custom_postrun_action(self): + """ + Use to do arbitrary actions immediately after running case two + """ + + # ======================================================================== + # Main public methods + # ======================================================================== + +
+[docs] + def build_phase(self, sharedlib_only=False, model_only=False): + # Subtle issue: case1 is already in a writeable state since it tends to be opened + # with a with statement in all the API entrances in CIME. case2 was created via clone, + # not a with statement, so it's not in a writeable state, so we need to use a with + # statement here to put it in a writeable state. + with self._case2: + if self._separate_builds: + self._activate_case1() + self.build_indv(sharedlib_only=sharedlib_only, model_only=model_only) + self._activate_case2() + # Although we're doing separate builds, it still makes sense + # to share the sharedlibroot area with case1 so we can reuse + # pieces of the build from there. + if Config.instance().common_sharedlibroot: + # We need to turn off this change for E3SM because it breaks + # the MPAS build system + self._case2.set_value( + "SHAREDLIBROOT", self._case1.get_value("SHAREDLIBROOT") + ) + + self.build_indv(sharedlib_only=sharedlib_only, model_only=model_only) + else: + self._activate_case1() + self.build_indv(sharedlib_only=sharedlib_only, model_only=model_only) + # pio_typename may be changed during the build if the default is not a + # valid value for this build, update case2 to reflect this change + for comp in self._case1.get_values("COMP_CLASSES"): + comp_pio_typename = "{}_PIO_TYPENAME".format(comp) + self._case2.set_value( + comp_pio_typename, self._case1.get_value(comp_pio_typename) + ) + + # The following is needed when _case_two_setup has a case_setup call + # despite sharing the build (e.g., to change NTHRDS) + self._case2.set_value("BUILD_COMPLETE", True)
+ + +
+[docs] + def run_phase(self, success_change=False): # pylint: disable=arguments-differ + """ + Runs both phases of the two-phase test and compares their results + If success_change is True, success requires some files to be different + """ + is_first_run = self._case1.get_value("IS_FIRST_RUN") + + compare_phase_name = "{}_{}_{}".format( + COMPARE_PHASE, self._run_one_suffix, self._run_two_suffix + ) + + # On a batch system with a multisubmit test "RESUBMIT" is used to track + # which phase is being run. By the end of the test it equals 0. If the + # test fails in a way where the RUN_PHASE is PEND then "RESUBMIT" + # does not get reset to 1 on a rerun and the first phase is skipped, + # causing the COMPARE_PHASE to fail. This ensures that "RESUBMIT" will + # get reset if the test state is not correct for a rerun. + # NOTE: "IS_FIRST_RUN" is reset in "case_submit.py" + if ( + is_first_run + and self._multisubmit + and self._case1.get_value("RESUBMIT") == 0 + ): + self._resetup_case(RUN_PHASE, reset=True) + + first_phase = ( + self._case1.get_value("RESUBMIT") == 1 + ) # Only relevant for multi-submit tests + run_type = self._case1.get_value("RUN_TYPE") + + logger.info( + "_multisubmit {} first phase {}".format(self._multisubmit, first_phase) + ) + + # First run + if not self._multisubmit or first_phase: + logger.info("Doing first run: " + self._run_one_description) + + # Add a PENDing compare phase so that we'll notice if the second part of compare two + # doesn't run. + with self._test_status: + self._test_status.set_status(compare_phase_name, TEST_PEND_STATUS) + + self._activate_case1() + self._case_one_custom_prerun_action() + self.run_indv(suffix=self._run_one_suffix) + self._case_one_custom_postrun_action() + + # Second run + if not self._multisubmit or not first_phase: + # Subtle issue: case1 is already in a writeable state since it tends to be opened + # with a with statement in all the API entrances in CIME.
case2 was created via clone, + # not a with statement, so it's not in a writeable state, so we need to use a with + # statement here to put it in a writeable state. + with self._case2: + logger.info("Doing second run: " + self._run_two_description) + self._activate_case2() + # This assures that case two namelists are populated + self._skip_pnl = False + # we need to make sure run2 is properly staged. + if run_type != "startup": + self._case2.check_case() + + self._case_two_custom_prerun_action() + self.run_indv( + suffix=self._run_two_suffix, + keep_init_generated_files=self._case_two_keep_init_generated_files, + ) + self._case_two_custom_postrun_action() + # Compare results + # Case1 is the "main" case, and we need to do the comparisons from there + self._activate_case1() + self._link_to_case2_output() + self._component_compare_test( + self._run_one_suffix, + self._run_two_suffix, + success_change=success_change, + ignore_fieldlist_diffs=self._ignore_fieldlist_diffs, + )
+ + +
+[docs] + def copy_case1_restarts_to_case2(self): + """ + Makes a copy (or symlink) of restart files and related files + (necessary history files, rpointer files) from case1 to case2. + + This is not done automatically, but can be called by individual + tests where case2 does a continue_run using case1's restart + files. + """ + rundir2 = self._case2.get_value("RUNDIR") + self._case1.archive_last_restarts( + archive_restdir=rundir2, + rundir=self._case1.get_value("RUNDIR"), + link_to_restart_files=True, + )
+ + + # ======================================================================== + # Private methods + # ======================================================================== + + def _get_caseroot2(self): + """ + Determines and returns caseroot for case2 + + Assumes that self._case1 is already set to point to the case1 object + """ + casename2 = self._case1.get_value("CASE") + caseroot1 = self._case1.get_value("CASEROOT") + + # Nest the case directory for case2 inside the case directory for case1 + caseroot2 = os.path.join(caseroot1, "case2", casename2) + + return caseroot2 + + def _get_output_root2(self): + """ + Determines and returns cime_output_root for case2 + + Assumes that self._case1 is already set to point to the case1 object + """ + # Since case2 has the same name as case1, its CIME_OUTPUT_ROOT + # must also be different, so that anything put in + # $CIME_OUTPUT_ROOT/$CASE/ is not accidentally shared between + # case1 and case2. (Currently nothing is placed here, but this + # helps prevent future problems.) + output_root2 = os.path.join( + self._case1.get_value("CIME_OUTPUT_ROOT"), + self._case1.get_value("CASE"), + "case2_output_root", + ) + return output_root2 + + def _get_case2_exeroot(self): + """ + Gets exeroot for case2. + + Returns None if we should use the default value of exeroot. + """ + if self._separate_builds: + # case2's EXEROOT needs to be somewhere that (1) is unique + # to this case (considering that case1 and case2 have the + # same case name), and (2) does not have too long of a path + # name (because too-long paths can make some compilers + # fail). + case1_exeroot = self._case1.get_value("EXEROOT") + case2_exeroot = os.path.join(case1_exeroot, "case2bld") + else: + # Use default exeroot + case2_exeroot = None + return case2_exeroot + + def _get_case2_rundir(self): + """ + Gets rundir for case2. 
+ """ + # case2's RUNDIR needs to be somewhere that is unique to this + # case (considering that case1 and case2 have the same case + # name). Note that the location below is symmetrical to the + # location of case2's EXEROOT set in _get_case2_exeroot. + case1_rundir = self._case1.get_value("RUNDIR") + case2_rundir = os.path.join(case1_rundir, "case2run") + return case2_rundir + + def _setup_cases_if_not_yet_done(self): + """ + Determines if case2 already exists on disk. If it does, this method + creates the self._case2 object pointing to the case directory. If it + doesn't exist, then this method creates case2 as a clone of case1, and + sets the self._case2 object appropriately. + + This also does the setup for both case1 and case2. + + Assumes that the following variables are already set in self: + _caseroot1 + _caseroot2 + _case1 + + Sets self._case2 + """ + + # Use the existence of the case2 directory to signal whether we have + # done the necessary test setup for this test: When we initially create + # the case2 directory, we set up both test cases; then, if we find that + # the case2 directory already exists, we assume that the setup has + # already been done. (In some cases it could be problematic to redo the + # test setup when it's not needed - e.g., by appending things to user_nl + # files multiple times. This is why we want to make sure to just do the + # test setup once.) 
+ if os.path.exists(self._caseroot2): + self._case2 = self._case_from_existing_caseroot(self._caseroot2) + else: + try: + self._case2 = self._case1.create_clone( + self._caseroot2, + keepexe=not self._separate_builds, + cime_output_root=self._get_output_root2(), + exeroot=self._get_case2_exeroot(), + rundir=self._get_case2_rundir(), + ) + self._write_info_to_case2_output_root() + self._setup_cases() + except BaseException: + # If a problem occurred in setting up the test cases, it's + # important to remove the case2 directory: If it's kept around, + # that would signal that test setup was done successfully, and + # thus doesn't need to be redone - which is not the case. Of + # course, we'll likely be left in an inconsistent state in this + # case, but if we didn't remove the case2 directory, the next + # re-build of the test would think, "okay, setup is done, I can + # move on to the build", which would be wrong. + if os.path.isdir(self._caseroot2): + shutil.rmtree(self._caseroot2) + self._activate_case1() + logger.warning( + "WARNING: Test case setup failed. Case2 has been removed, " + "but the main case may be in an inconsistent state. " + "If you want to rerun this test, you should create " + "a new test rather than trying to rerun this one." + ) + raise + + def _case_from_existing_caseroot(self, caseroot): + """ + Returns a Case object from an existing caseroot directory + + Args: + caseroot (str): path to existing caseroot + """ + return Case(case_root=caseroot, read_only=False) + + def _activate_case1(self): + """ + Make case 1 active for upcoming calls + """ + os.chdir(self._caseroot1) + self._set_active_case(self._case1) + + def _activate_case2(self): + """ + Make case 2 active for upcoming calls + """ + os.chdir(self._caseroot2) + self._set_active_case(self._case2) + + def _write_info_to_case2_output_root(self): + """ + Writes a file with some helpful information to case2's + output_root. 
+ + The motivation here is two-fold: + + (1) Currently, case2's output_root directory is empty. This + could be confusing. + + (2) For users who don't know where to look, it could be hard to + find case2's bld and run directories. It is somewhat easier + to stumble upon case2's output_root, so we put a file there + pointing them to the right place. + """ + + readme_path = os.path.join(self._get_output_root2(), "README") + try: + with open(readme_path, "w") as fd: + fd.write("This directory is typically empty.\n\n") + fd.write( + "case2's run dir is here: {}\n\n".format( + self._case2.get_value("RUNDIR") + ) + ) + fd.write( + "case2's bld dir is here: {}\n".format( + self._case2.get_value("EXEROOT") + ) + ) + except IOError: + # It's not a big deal if we can't write the README file + # (e.g., because the directory doesn't exist or isn't + # writeable; note that the former may be the case in unit + # tests). So just continue merrily on our way if there was a + # problem. + pass + + def _setup_cases(self): + """ + Does all test-specific set up for the two test cases. + """ + + # Set up case 1 + self._activate_case1() + self._common_setup() + self._case_one_setup() + # Flush the case so that, if errors occur later, then at least case 1 is + # in a correct, post-setup state. This is important because the mere + # existence of a case 2 directory signals that setup is done. So if the + # build fails and the user rebuilds, setup won't be redone - so it's + # important to ensure that the results of setup are flushed to disk. + # + # Note that case 1 will be in its post-setup state even if case 2 setup + # fails. Putting the case1 flush after case 2 setup doesn't seem to help + # with that (presumably some flush is called automatically), and anyway + # wouldn't help with things like appending to user_nl files (which don't + # rely on flush). 
So we just have to live with that possibility (but + # note that we print a warning to the log file if that happens, in the + # caller of this method). + self._case.flush() + # This assures that case one namelists are populated + # and creates the case.test script + self._case.case_setup(test_mode=False, reset=True) + fix_single_exe_case(self._case) + + # Set up case 2 + with self._case2: + self._activate_case2() + self._common_setup() + self._case_two_setup() + + fix_single_exe_case(self._case2) + + # Go back to case 1 to ensure that's where we are for any following code + self._activate_case1() + + def _link_to_case2_output(self): + """ + Looks for all files in rundir2 matching the pattern casename2*.nc.run2suffix + + For each file found, makes a link in rundir1 pointing to this file; the + link is renamed so that the original occurrence of casename2 is replaced + with casename1. + + For example: + + /glade/scratch/sacks/somecase/run/somecase.clm2.h0.nc.run2 -> + /glade/scratch/sacks/somecase.run2/run/somecase.run2.clm2.h0.nc.run2 + + If the destination link already exists and points to the correct + location, it is maintained as is. However, an exception will be raised + if the destination link is not exactly as it should be: we avoid + overwriting some existing file or link. 
+ """ + + casename1 = self._case1.get_value("CASE") + casename2 = self._case2.get_value("CASE") + rundir1 = self._case1.get_value("RUNDIR") + rundir2 = self._case2.get_value("RUNDIR") + run2suffix = self._run_two_suffix + + pattern = "{}*.nc.{}".format(casename2, run2suffix) + case2_files = glob.glob(os.path.join(rundir2, pattern)) + for one_file in case2_files: + file_basename = os.path.basename(one_file) + modified_basename = file_basename.replace(casename2, casename1, 1) + one_link = os.path.join(rundir1, modified_basename) + if os.path.islink(one_link) and os.readlink(one_link) == one_file: + # Link is already set up correctly: do nothing + # (os.symlink raises an exception if you try to replace an + # existing file) + pass + else: + os.symlink(one_file, one_link)
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/test_utils/user_nl_utils.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/test_utils/user_nl_utils.html new file mode 100644 index 00000000000..024f8c63047 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/test_utils/user_nl_utils.html @@ -0,0 +1,183 @@ + + + + + + CIME.SystemTests.test_utils.user_nl_utils — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.SystemTests.test_utils.user_nl_utils

+"""
+This module contains functions for working with user_nl files in system tests.
+"""
+
+import os
+import glob
+
+
+
+[docs] +def append_to_user_nl_files(caseroot, component, contents): + """ + Append the string(s) given by 'contents' to the end of each user_nl file for + the given component (there may be multiple such user_nl files in the case of + a multi-instance test). + + Also puts new lines before and after the appended text - so 'contents' + does not need to contain a trailing new line (but it's also okay if it + does). + + Args: + caseroot (str): Full path to the case directory + + component (str): Name of component (e.g., 'clm'). This is used to + determine which user_nl files are appended to. For example, for + component='clm', this function will operate on all user_nl files + matching the pattern 'user_nl_clm*'. (We do a wildcard match to + handle multi-instance tests.) + + contents (str or list-like): Contents to append to the end of each user_nl + file. If list-like, each item will be appended on its own line. + """ + + if isinstance(contents, str): + contents = [contents] + + files = _get_list_of_user_nl_files(caseroot, component) + + if len(files) == 0: + raise RuntimeError("No user_nl files found for component " + component) + + for one_file in files: + with open(one_file, "a") as user_nl_file: + user_nl_file.write("\n") + for c in contents: + user_nl_file.write(c + "\n")
+ + + +def _get_list_of_user_nl_files(path, component): + """Get a list of all user_nl files in the current path for the component + of interest. For a component 'foo', we match all files of the form + user_nl_foo* - with a wildcard match at the end in order to match files + in a multi-instance case. + + The list of returned files gives their full path. + """ + + file_pattern = "user_nl_" + component + "*" + file_list = glob.glob(os.path.join(path, file_pattern)) + + return file_list +
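As a standalone illustration of the behavior documented above, the sketch below re-implements the append logic against a throwaway case directory. The simplified `append_to_user_nl_files` here mirrors, but is not, the CIME function, and the two-instance `user_nl_clm_*` files and namelist settings are invented for the example.

```python
import glob
import os
import tempfile


def append_to_user_nl_files(caseroot, component, contents):
    # Simplified sketch of the documented behavior: append 'contents'
    # (a string or list of strings) to every user_nl_<component>* file.
    if isinstance(contents, str):
        contents = [contents]
    files = glob.glob(os.path.join(caseroot, "user_nl_" + component + "*"))
    if len(files) == 0:
        raise RuntimeError("No user_nl files found for component " + component)
    for one_file in files:
        with open(one_file, "a") as user_nl_file:
            user_nl_file.write("\n")
            for line in contents:
                user_nl_file.write(line + "\n")


# Exercise it against a throwaway "case" with two instances (multi-instance style).
caseroot = tempfile.mkdtemp()
for inst in ("0001", "0002"):
    open(os.path.join(caseroot, "user_nl_clm_" + inst), "w").close()

append_to_user_nl_files(caseroot, "clm", ["hist_nhtfrq = -24", "hist_mfilt = 1"])

for inst in ("0001", "0002"):
    with open(os.path.join(caseroot, "user_nl_clm_" + inst)) as nl:
        print(nl.read().strip())
```

Note that the wildcard in `user_nl_clm*` is what makes the same call work for both single-instance cases (one file) and multi-instance cases (one file per instance).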
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/tsc.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/tsc.html new file mode 100644 index 00000000000..86f76d31f76 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/SystemTests/tsc.html @@ -0,0 +1,404 @@ + + + + + + CIME.SystemTests.tsc — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.SystemTests.tsc

+"""
+Solution reproducibility test based on time-step convergence
+The CESM/ACME model's
+multi-instance capability is used to conduct an ensemble
+of simulations starting from different initial conditions.
+
+This class inherits from SystemTestsCommon.
+"""
+
+import os
+import json
+import logging
+
+from distutils import dir_util
+
+import CIME.test_status
+import CIME.utils
+from CIME.SystemTests.system_tests_common import SystemTestsCommon
+from CIME.case.case_setup import case_setup
+from CIME.hist_utils import rename_all_hist_files
+from CIME.XML.machines import Machines
+
+import evv4esm  # pylint: disable=import-error
+from evv4esm.__main__ import main as evv  # pylint: disable=import-error
+
+evv_lib_dir = os.path.abspath(os.path.dirname(evv4esm.__file__))
+
+logger = logging.getLogger(__name__)
+
+
+NINST = 12
+SIM_LENGTH = 600  # seconds
+OUT_FREQ = 10  # seconds
+INSPECT_AT = [300, 450, 600]  # seconds
+INIT_COND_FILE_TEMPLATE = "20210915.v2.ne4_oQU240.F2010.{}.{}.0002-{:02d}-01-00000.nc"
+VAR_LIST = [
+    "T",
+    "Q",
+    "V",
+    "CLDLIQ",
+    "CLDICE",
+    "NUMLIQ",
+    "NUMICE",
+    "num_a1",
+    "num_a2",
+    "num_a3",
+]
+P_THRESHOLD = 0.005
+
+
+
+[docs] +class TSC(SystemTestsCommon): + def __init__(self, case, **kwargs): + """ + initialize an object interface to the TSC test + """ + super(TSC, self).__init__(case, **kwargs) + if self._case.get_value("MODEL") == "e3sm": + self.atmmod = "eam" + self.lndmod = "elm" + self.atmmodIC = "eam" + self.lndmodIC = "elm" + else: + self.atmmod = "cam" + self.lndmod = "clm" + self.atmmodIC = "cam" + self.lndmodIC = "clm2" + +
+[docs] + def build_phase(self, sharedlib_only=False, model_only=False): + # Only want this to happen once. It will impact the sharedlib build + # so it has to happen there. + if not model_only: + logging.warning("Starting to build multi-instance exe") + for comp in ["ATM", "OCN", "WAV", "GLC", "ICE", "ROF", "LND"]: + ntasks = self._case.get_value("NTASKS_{}".format(comp)) + self._case.set_value("ROOTPE_{}".format(comp), 0) + self._case.set_value("NINST_{}".format(comp), NINST) + self._case.set_value("NTASKS_{}".format(comp), ntasks * NINST) + + self._case.set_value("ROOTPE_CPL", 0) + self._case.set_value("NTASKS_CPL", ntasks * NINST) + self._case.flush() + + case_setup(self._case, test_mode=False, reset=True) + + self.build_indv(sharedlib_only=sharedlib_only, model_only=model_only)
+ + + def _run_with_specified_dtime(self, dtime=2): + """ + Conduct one multi-instance run with a specified time step size. + + :param dtime (int): Specified time step size in seconds + """ + coupling_frequency = 86400 // dtime + + self._case.set_value("ATM_NCPL", str(coupling_frequency)) + se_tstep = dtime / 12 + + nsteps = SIM_LENGTH // dtime + self._case.set_value("STOP_N", str(nsteps)) + self._case.set_value("STOP_OPTION", "nsteps") + + csmdata_root = self._case.get_value("DIN_LOC_ROOT") + csmdata_atm = os.path.join(csmdata_root, "atm/cam/inic/homme/ne4_v2_init") + csmdata_lnd = os.path.join(csmdata_root, "lnd/clm2/initdata/ne4_oQU240_v2_init") + + nstep_output = OUT_FREQ // dtime + for iinst in range(1, NINST + 1): + fatm_in = os.path.join( + csmdata_atm, + INIT_COND_FILE_TEMPLATE.format(self.atmmodIC, "i", iinst), + ) + flnd_in = os.path.join( + csmdata_lnd, + INIT_COND_FILE_TEMPLATE.format(self.lndmodIC, "r", iinst), + ) + + with open(f"user_nl_{self.atmmod}_{iinst:04d}", "w+") as atmnlfile: + + atmnlfile.write("ncdata = '{}' \n".format(fatm_in)) + + atmnlfile.write("dtime = {} \n".format(dtime)) + atmnlfile.write("se_tstep = {} \n".format(se_tstep)) + atmnlfile.write("iradsw = 2 \n") + atmnlfile.write("iradlw = 2 \n") + + atmnlfile.write("avgflag_pertape = 'I' \n") + atmnlfile.write("nhtfrq = {} \n".format(nstep_output)) + atmnlfile.write("mfilt = 1 \n") + atmnlfile.write("ndens = 1 \n") + atmnlfile.write("empty_htapes = .true. \n") + atmnlfile.write( + "fincl1 = 'PS','U','LANDFRAC',{} \n".format( + "".join(["'{}',".format(s) for s in VAR_LIST])[:-1] + ) + ) + + with open(f"user_nl_{self.lndmod}_{iinst:04d}", "w+") as lndnlfile: + lndnlfile.write("finidat = '{}' \n".format(flnd_in)) + lndnlfile.write("dtime = {} \n".format(dtime)) + + # Force rebuild namelists + self._skip_pnl = False + + self.run_indv() + + rename_all_hist_files(self._case, suffix="DT{:04d}".format(dtime)) + +
+[docs] + def run_phase(self): + self._run_with_specified_dtime(dtime=2) + + if self._case.get_value("GENERATE_BASELINE"): + self._run_with_specified_dtime(dtime=1)
+ + + def _compare_baseline(self): + with self._test_status as ts: + ts.set_status( + CIME.test_status.BASELINE_PHASE, CIME.test_status.TEST_FAIL_STATUS + ) + + run_dir = self._case.get_value("RUNDIR") + case_name = self._case.get_value("CASE") + base_dir = os.path.join( + self._case.get_value("BASELINE_ROOT"), + self._case.get_value("BASECMP_CASE"), + ) + + test_name = "{}".format(case_name.split(".")[-1]) + evv_config = { + test_name: { + "module": os.path.join(evv_lib_dir, "extensions", "tsc.py"), + "test-case": case_name, + "test-dir": run_dir, + "ref-case": "Baseline", + "ref-dir": base_dir, + "time-slice": [OUT_FREQ, SIM_LENGTH], + "inspect-times": INSPECT_AT, + "variables": VAR_LIST, + "p-threshold": P_THRESHOLD, + "component": self.atmmod, + } + } + + json_file = os.path.join(run_dir, ".".join([case_name, "json"])) + with open(json_file, "w") as config_file: + json.dump(evv_config, config_file, indent=4) + + evv_out_dir = os.path.join(run_dir, ".".join([case_name, "evv"])) + evv(["-e", json_file, "-o", evv_out_dir]) + + with open(os.path.join(evv_out_dir, "index.json"), "r") as evv_f: + evv_status = json.load(evv_f) + + comments = "" + for evv_ele in evv_status["Page"]["elements"]: + if "Table" in evv_ele: + comments = "; ".join( + "{}: {}".format(key, val[0]) + for key, val in evv_ele["Table"]["data"].items() + ) + if evv_ele["Table"]["data"]["Test status"][0].lower() == "pass": + self._test_status.set_status( + CIME.test_status.BASELINE_PHASE, + CIME.test_status.TEST_PASS_STATUS, + ) + break + + status = self._test_status.get_status(CIME.test_status.BASELINE_PHASE) + mach_name = self._case.get_value("MACH") + mach_obj = Machines(machine=mach_name) + htmlroot = CIME.utils.get_htmlroot(mach_obj) + urlroot = CIME.utils.get_urlroot(mach_obj) + if htmlroot is not None: + with CIME.utils.SharedArea(): + dir_util.copy_tree( + evv_out_dir, + os.path.join(htmlroot, "evv", case_name), + preserve_mode=False, + ) + if urlroot is None: + urlroot = 
"[{}_URL]".format(mach_name.capitalize()) + viewing = "{}/evv/{}/index.html".format(urlroot, case_name) + else: + viewing = ( + "{}\n" + " EVV viewing instructions can be found at: " + " https://github.com/E3SM-Project/E3SM/blob/master/cime/scripts/" + "climate_reproducibility/README.md#test-passfail-and-extended-output" + "".format(evv_out_dir) + ) + + comments = ( + "{} {} for test '{}'.\n" + " {}\n" + " EVV results can be viewed at:\n" + " {}".format( + CIME.test_status.BASELINE_PHASE, + status, + test_name, + comments, + viewing, + ) + ) + + CIME.utils.append_testlog(comments, self._orig_caseroot) + + def _generate_baseline(self): + super(TSC, self)._generate_baseline() + + with CIME.utils.SharedArea(): + basegen_dir = os.path.join( + self._case.get_value("BASELINE_ROOT"), + self._case.get_value("BASEGEN_CASE"), + ) + + rundir = self._case.get_value("RUNDIR") + ref_case = self._case.get_value("RUN_REFCASE") + + env_archive = self._case.get_env("archive") + hists = env_archive.get_all_hist_files( + self._case.get_value("CASE"), + self.atmmod, + rundir, + r"DT\d*", + ref_case=ref_case, + ) + hists = [os.path.join(rundir, hist) for hist in hists] + logger.debug("TSC additional baseline files: {}".format(hists)) + for hist in hists: + basename = hist[hist.rfind(self.atmmod) :] + baseline = os.path.join(basegen_dir, basename) + if os.path.exists(baseline): + os.remove(baseline) + + CIME.utils.safe_copy(hist, baseline, preserve_meta=False)
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Tools/generate_cylc_workflow.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Tools/generate_cylc_workflow.html new file mode 100644 index 00000000000..e546d89c618 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Tools/generate_cylc_workflow.html @@ -0,0 +1,349 @@ + + + + + + CIME.Tools.generate_cylc_workflow — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.Tools.generate_cylc_workflow

+#!/usr/bin/env python3
+
+"""
+Generates a cylc workflow file for the case.  See https://cylc.github.io for details about cylc
+"""
+import os
+import sys
+
+sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..")))
+
+from CIME.Tools.standard_script_setup import *
+
+from CIME.case import Case
+from CIME.utils import expect, transform_vars
+
+import argparse, re
+
+logger = logging.getLogger(__name__)
+
+###############################################################################
+
+[docs] +def parse_command_line(args, description): + ############################################################################### + parser = argparse.ArgumentParser( + description=description, formatter_class=argparse.RawTextHelpFormatter + ) + + CIME.utils.setup_standard_logging_options(parser) + + parser.add_argument( + "caseroot", + nargs="?", + default=os.getcwd(), + help="Case directory for which the cylc workflow is generated.\n" + "Default is current directory.", + ) + + parser.add_argument( + "--cycles", default=1, help="The number of cycles to run, default is RESUBMIT" + ) + + parser.add_argument( + "--ensemble", + default=1, + help="Generate suite.rc for an ensemble of cases; the case name argument must end in an integer.\n" + "For example: ./generate_cylc_workflow.py --ensemble 4 \n" + "will generate a workflow file in the current case; if that case is named case.01, " + "the workflow will include case.01, case.02, case.03 and case.04", + ) + + args = CIME.utils.parse_args_and_handle_standard_logging_options(args, parser) + + return args.caseroot, args.cycles, int(args.ensemble)
+ + + +
+[docs] +def cylc_get_ensemble_first_and_last(case, ensemble): + if ensemble == 1: + return 1, None + casename = case.get_value("CASE") + m = re.search(r"(.*[^\d])(\d+)$", casename) + minval = int(m.group(2)) + maxval = minval + ensemble - 1 + return minval, maxval
+ + + +
+[docs] +def cylc_get_case_path_string(case, ensemble): + caseroot = case.get_value("CASEROOT") + casename = case.get_value("CASE") + if ensemble == 1: + return "{};".format(caseroot) + basepath = os.path.abspath(caseroot + "/..") + m = re.search(r"(.*[^\d])(\d+)$", casename) + + expect(m, "casename {} must end in an integer for ensemble method".format(casename)) + + return ( + '{basepath}/{basename}$(printf "%0{intlen}d"'.format( + basepath=basepath, basename=m.group(1), intlen=len(m.group(2)) + ) + + " ${CYLC_TASK_PARAM_member});" + )
+ + + +
+[docs] +def cylc_batch_job_template(job, jobname, case, ensemble): + + env_batch = case.get_env("batch") + batch_system_type = env_batch.get_batch_system_type() + batchsubmit = env_batch.get_value("batch_submit") + submit_args = env_batch.get_submit_args(case, job) + case_path_string = cylc_get_case_path_string(case, ensemble) + + return ( + """ + [[{jobname}<member>]] + script = cd {case_path_string} ./case.submit --job {job} + [[[job]]] + batch system = {batch_system_type} + batch submit command template = {batchsubmit} {submit_args} '%(job)s' + [[[directives]]] +""".format( + jobname=jobname, + job=job, + case_path_string=case_path_string, + batch_system_type=batch_system_type, + batchsubmit=batchsubmit, + submit_args=submit_args, + ) + + "{{ batchdirectives }}\n" + )
+ + + +
+[docs] +def cylc_script_job_template(job, case, ensemble): + case_path_string = cylc_get_case_path_string(case, ensemble) + return """ + [[{job}<member>]] + script = cd {case_path_string} ./case.submit --job {job} +""".format( + job=job, case_path_string=case_path_string + )
+ + + +############################################################################### +def _main_func(description): + ############################################################################### + caseroot, cycles, ensemble = parse_command_line(sys.argv, description) + + expect( + os.path.isfile(os.path.join(caseroot, "CaseStatus")), + "case.setup must be run prior to running {}".format(__file__), + ) + with Case(caseroot, read_only=True) as case: + if cycles == 1: + cycles = max(1, case.get_value("RESUBMIT")) + env_batch = case.get_env("batch") + env_workflow = case.get_env("workflow") + jobs = env_workflow.get_jobs() + casename = case.get_value("CASE") + input_template = os.path.join( + case.get_value("MACHDIR"), "cylc_suite.rc.template" + ) + + overrides = {"cycles": cycles, "casename": casename} + input_text = open(input_template).read() + + first, last = cylc_get_ensemble_first_and_last(case, ensemble) + if ensemble == 1: + overrides.update({"members": "{}".format(first)}) + overrides.update( + {"workflow_description": "case {}".format(case.get_value("CASE"))} + ) + else: + overrides.update({"members": "{}..{}".format(first, last)}) + firstcase = case.get_value("CASE") + intlen = len(str(last)) + lastcase = firstcase[:-intlen] + str(last) + overrides.update( + { + "workflow_description": "ensemble from {} to {}".format( + firstcase, lastcase + ) + } + ) + overrides.update( + {"case_path_string": cylc_get_case_path_string(case, ensemble)} + ) + + for job in jobs: + jobname = job + if job == "case.st_archive": + continue + if job == "case.run": + jobname = "run" + overrides.update(env_batch.get_job_overrides(job, case)) + overrides.update({"job_id": "run." 
+ casename}) + input_text = input_text + cylc_batch_job_template( + job, jobname, case, ensemble + ) + else: + depends_on = env_workflow.get_value("dependency", subgroup=job) + if depends_on.startswith("case."): + depends_on = depends_on[5:] + input_text = input_text.replace( + " => " + depends_on, " => " + depends_on + "<member> => " + job + ) + + overrides.update(env_batch.get_job_overrides(job, case)) + overrides.update({"job_id": job + "." + casename}) + if "total_tasks" in overrides and overrides["total_tasks"] > 1: + input_text = input_text + cylc_batch_job_template( + job, jobname, case, ensemble + ) + else: + input_text = input_text + cylc_script_job_template( + jobname, case, ensemble + ) + + overrides.update( + { + "batchdirectives": env_batch.get_batch_directives( + case, job, overrides=overrides, output_format="cylc" + ) + } + ) + # we need to re-transform for each job to get job size correctly + input_text = transform_vars( + input_text, case=case, subgroup=job, overrides=overrides + ) + + with open("suite.rc", "w") as f: + f.write(case.get_resolved_value(input_text)) + + +if __name__ == "__main__": + _main_func(__doc__) +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Tools/standard_script_setup.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Tools/standard_script_setup.html new file mode 100644 index 00000000000..32f4548d16c --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Tools/standard_script_setup.html @@ -0,0 +1,169 @@ + + + + + + CIME.Tools.standard_script_setup — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.Tools.standard_script_setup

+"""
+Encapsulate the importing of python utils and logging setup, things
+that every script should do.
+"""
+# pylint: disable=unused-import
+
+import sys, os
+import __main__ as main
+
+
+
+[docs] +def check_minimum_python_version(major, minor): + """ + Check your python version. + + >>> check_minimum_python_version(sys.version_info[0], sys.version_info[1]) + >>> + """ + msg = ( + "Python " + + str(major) + + ", minor version " + + str(minor) + + " is required, you have " + + str(sys.version_info[0]) + + "." + + str(sys.version_info[1]) + ) + assert sys.version_info[0] > major or ( + sys.version_info[0] == major and sys.version_info[1] >= minor + ), msg
+ + + +check_minimum_python_version(3, 6) + +real_file_dir = os.path.dirname(os.path.realpath(__file__)) +cimeroot = os.path.abspath(os.path.join(real_file_dir, "..", "..")) +sys.path.insert(0, cimeroot) + +# Important: Allows external tools to link up with CIME +os.environ["CIMEROOT"] = cimeroot + +import CIME.utils + +CIME.utils.stop_buffering_output() +import logging, argparse +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Tools/testreporter.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Tools/testreporter.html new file mode 100644 index 00000000000..75acad22e91 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/Tools/testreporter.html @@ -0,0 +1,386 @@ + + + + + + CIME.Tools.testreporter — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.Tools.testreporter

+#!/usr/bin/env python3
+
+"""
+Simple script to populate CESM test database with test results.
+"""
+import os
+import sys
+
+sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..")))
+
+from CIME.Tools.standard_script_setup import *
+
+from CIME.XML.env_build import EnvBuild
+from CIME.XML.env_case import EnvCase
+from CIME.XML.env_test import EnvTest
+from CIME.XML.test_reporter import TestReporter
+from CIME.utils import expect
+from CIME.XML.generic_xml import GenericXML
+
+import glob
+
+###############################################################################
+
+[docs] +def parse_command_line(args): + ############################################################################### + parser = argparse.ArgumentParser() + + CIME.utils.setup_standard_logging_options(parser) + + # Parse command line options + + # parser = argparse.ArgumentParser(description='Arguments for testreporter') + parser.add_argument("--tagname", help="Name of the tag being tested.") + parser.add_argument("--testid", help="Test id, e.g. c2_0_a6g_ing,c2_0_b6g_gnu.") + parser.add_argument( + "--testroot", help="Root directory for tests to populate the database." + ) + parser.add_argument("--testtype", help="Type of test, prealpha or prebeta.") + parser.add_argument( + "--dryrun", + action="store_true", + help="Do a dry run, database will not be populated.", + ) + parser.add_argument( + "--dumpxml", action="store_true", help="Dump XML test results to screen." + ) + args = parser.parse_args() + CIME.utils.parse_args_and_handle_standard_logging_options(args) + + return ( + args.testroot, + args.testid, + args.tagname, + args.testtype, + args.dryrun, + args.dumpxml, + )
+ + + +############################################################################### +
+[docs] +def get_testreporter_xml(testroot, testid, tagname, testtype): + ############################################################################### + os.chdir(testroot) + + # + # Retrieve compiler name and mpi library + # + xml_file = glob.glob("*" + testid + "/env_build.xml") + expect( + len(xml_file) > 0, + "Tests not found. It's possible your testid, {}, is wrong.".format(testid), + ) + envxml = EnvBuild(".", infile=xml_file[0]) + compiler = envxml.get_value("COMPILER") + mpilib = envxml.get_value("MPILIB") + + # + # Retrieve machine name + # + xml_file = glob.glob("*" + testid + "/env_case.xml") + envxml = EnvCase(".", infile=xml_file[0]) + machine = envxml.get_value("MACH") + + # + # Retrieve baseline tag to compare to + # + xml_file = glob.glob("*" + testid + "/env_test.xml") + envxml = EnvTest(".", infile=xml_file[0]) + baseline = envxml.get_value("BASELINE_NAME_CMP") + + # + # Create XML header + # + + testxml = TestReporter() + testxml.setup_header( + tagname, machine, compiler, mpilib, testroot, testtype, baseline + ) + + # + # Create lists of tests based on the testid in the testroot directory. + # + test_names = glob.glob("*" + testid) + # + # Loop over all tests and parse the test results + # + test_status = {} + for test_name in test_names: + if not os.path.isfile(test_name + "/TestStatus"): + continue + test_status["COMMENT"] = "" + test_status["BASELINE"] = "----" + test_status["MEMCOMP"] = "----" + test_status["MEMLEAK"] = "----" + test_status["NLCOMP"] = "----" + test_status["STATUS"] = "----" + test_status["TPUTCOMP"] = "----" + # + # Check to see if TestStatus is present; if not, then continue. + # I might want to set the status to fail + # + try: + lines = [line.rstrip("\n") for line in open(test_name + "/TestStatus")] + except (IOError, OSError): + test_status["STATUS"] = "FAIL" + test_status["COMMENT"] = "TestStatus missing. " + continue + # + # Loop over each line of TestStatus, and check for different types of failures. 
+ # + for line in lines: + if "NLCOMP" in line: + test_status["NLCOMP"] = line[0:4] + if "MEMLEAK" in line: + test_status["MEMLEAK"] = line[0:4] + if "MEMCOMP" in line: + test_status["MEMCOMP"] = line[0:4] + if "BASELINE" in line: + test_status["BASELINE"] = line[0:4] + if "TPUTCOMP" in line: + test_status["TPUTCOMP"] = line[0:4] + if "FAIL PFS" in line: + test_status["STATUS"] = "FAIL" + if "INIT" in line: + test_status["INIT"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["STATUS"] = "SFAIL" + test_status["COMMENT"] += "INIT fail! " + break + if "CREATE_NEWCASE" in line: + test_status["CREATE_NEWCASE"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["STATUS"] = "SFAIL" + test_status["COMMENT"] += "CREATE_NEWCASE fail! " + break + if "XML" in line: + test_status["XML"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["STATUS"] = "SFAIL" + test_status["COMMENT"] += "XML fail! " + break + if "SETUP" in line: + test_status["SETUP"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["STATUS"] = "SFAIL" + test_status["COMMENT"] += "SETUP fail! " + break + if "SHAREDLIB_BUILD" in line: + test_status["SHAREDLIB_BUILD"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["STATUS"] = "CFAIL" + test_status["COMMENT"] += "SHAREDLIB_BUILD fail! " + break + if "MODEL_BUILD" in line: + test_status["MODEL_BUILD"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["STATUS"] = "CFAIL" + test_status["COMMENT"] += "MODEL_BUILD fail! " + break + if "SUBMIT" in line: + test_status["STATUS"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["COMMENT"] += "SUBMIT fail! " + break + if "RUN" in line: + test_status["STATUS"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["COMMENT"] += "RUN fail! " + break + if "COMPARE_base_rest" in line: + test_status["STATUS"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["COMMENT"] += "Restart fail! 
" + break + if "COMPARE_base_hybrid" in line: + test_status["STATUS"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["COMMENT"] += "Hybrid fail! " + break + if "COMPARE_base_multiinst" in line: + test_status["STATUS"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["COMMENT"] += "Multi instance fail! " + break + if "COMPARE_base_test" in line: + test_status["STATUS"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["COMMENT"] += "Base test fail! " + break + if "COMPARE_base_single_thread" in line: + test_status["STATUS"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["COMMENT"] += "Thread test fail! " + break + + # + # Do not include time comments. Just a preference to have cleaner comments in the test database + # + try: + if "time=" not in line and "GENERATE" not in line: + if "BASELINE" not in line: + test_status["COMMENT"] += line.split(" ", 3)[3] + " " + else: + test_status["COMMENT"] += line.split(" ", 4)[4] + " " + except Exception: # Probably want to be more specific here + pass + + # + # Fill in the xml with the test results + # + testxml.add_result(test_name, test_status) + + return testxml
+ + + +############################################################################## +def _main_func(): + ############################################################################### + + testroot, testid, tagname, testtype, dryrun, dumpxml = parse_command_line(sys.argv) + + testxml = get_testreporter_xml(testroot, testid, tagname, testtype) + + # + # Dump xml to a file. + # + if dumpxml: + GenericXML.write(testxml, outfile="TestRecord.xml") + + # + # Prompt for username and password, then post the XML string to the test database website + # + if not dryrun: + testxml.push2testdb() + + +############################################################################### + +if __name__ == "__main__": + _main_func() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/archive.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/archive.html new file mode 100644 index 00000000000..1558e9129c8 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/archive.html @@ -0,0 +1,222 @@ + + + + + + CIME.XML.archive — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.archive

+"""
+Interface to the archive.xml file.  This class inherits from GenericXML.py
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.config import Config
+from CIME.XML.archive_base import ArchiveBase
+from CIME.XML.files import Files
+from copy import deepcopy
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class Archive(ArchiveBase): + def __init__(self, infile=None, files=None): + """ + initialize an object + """ + if files is None: + files = Files() + schema = files.get_schema("ARCHIVE_SPEC_FILE") + super(Archive, self).__init__(infile, schema) + +
+[docs] + def setup(self, env_archive, components, files=None): + if files is None: + files = Files() + + components_node = env_archive.make_child( + "components", attributes={"version": "2.0"} + ) + + arch_components = deepcopy(components) + + config = Config.instance() + + for comp in config.additional_archive_components: + if comp not in arch_components: + arch_components.append(comp) + + for comp in arch_components: + infile = files.get_value("ARCHIVE_SPEC_FILE", {"component": comp}) + + if infile is not None and os.path.isfile(infile): + arch = Archive(infile=infile, files=files) + specs = arch.get_optional_child( + name="comp_archive_spec", attributes={"compname": comp} + ) + else: + if infile is None: + logger.debug( + "No archive file defined for component {}".format(comp) + ) + else: + logger.debug( + "Archive file {} for component {} not found".format( + infile, comp + ) + ) + + specs = self.get_optional_child( + name="comp_archive_spec", attributes={"compname": comp} + ) + + if specs is None: + logger.debug("No archive specs found for component {}".format(comp)) + else: + logger.debug("adding archive spec for {}".format(comp)) + env_archive.add_child(specs, root=components_node)
+ + +
+[docs] + def get_all_config_archive_files(self, files): + """ + Returns the list of ARCHIVE_SPEC_FILES that exist on disk as defined in config_files.xml + """ + archive_spec_node = files.get_child("entry", {"id": "ARCHIVE_SPEC_FILE"}) + component_nodes = files.get_children( + "value", root=files.get_child("values", root=archive_spec_node) + ) + config_archive_files = [] + for comp in component_nodes: + attr = self.get(comp, "component") + if attr: + compval = files.get_value( + "ARCHIVE_SPEC_FILE", attribute={"component": attr} + ) + else: + compval = self.get_resolved_value(self.text(comp)) + + if os.path.isfile(compval): + config_archive_files.append(compval) + + config_archive_files = list(set(config_archive_files)) + return config_archive_files
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/archive_base.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/archive_base.html new file mode 100644 index 00000000000..1712ab9af86 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/archive_base.html @@ -0,0 +1,423 @@ + + + + + + CIME.XML.archive_base — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.archive_base

+"""
+Base class for archive files.  This class inherits from generic_xml.py
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.XML.generic_xml import GenericXML
+from CIME.utils import convert_to_type
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class ArchiveBase(GenericXML): +
+[docs] + def exclude_testing(self, compname): + """ + Checks if component should be excluded from testing. + """ + value = self._get_attribute(compname, "exclude_testing") + + if value is None: + return False + + return convert_to_type(value, "logical")
+ + + def _get_attribute(self, compname, attr_name): + attrib = self.get_entry_attributes(compname) + + if attrib is None: + return None + + return attrib.get(attr_name, None) + +
+[docs] + def get_entry_attributes(self, compname): + entry = self.get_entry(compname) + + if entry is None: + return None + + return self.attrib(entry)
+ + +
+[docs] + def get_entry(self, compname): + """ + Returns an xml node corresponding to compname in comp_archive_spec + """ + return self.scan_optional_child( + "comp_archive_spec", attributes={"compname": compname} + )
+ + + def _get_file_node_text(self, attnames, archive_entry): + """ + get the xml text associated with each of the attnames + based at root archive_entry + returns a list of text entries or + an empty list if no entries are found + """ + nodes = [] + textvals = [] + for attname in attnames: + nodes.extend(self.get_children(attname, root=archive_entry)) + for node in nodes: + textvals.append(self.text(node)) + return textvals + +
+[docs] + def get_rest_file_extensions(self, archive_entry): + """ + get the xml text associated with each of the rest_file_extensions + based at root archive_entry (root is based on component name) + returns a list of text entries or + an empty list if no entries are found + """ + return self._get_file_node_text(["rest_file_extension"], archive_entry)
+ + +
+[docs] + def get_hist_file_extensions(self, archive_entry): + """ + get the xml text associated with each of the hist_file_extensions + based at root archive_entry (root is based on component name) + returns a list of text entries or + an empty list if no entries are found + """ + return self._get_file_node_text(["hist_file_extension"], archive_entry)
+ + +
+[docs] + def get_hist_file_ext_regexes(self, archive_entry): + """ + get the xml text associated with each of the hist_file_ext_regex entries + based at root archive_entry (root is based on component name) + returns a list of text entries or + an empty list if no entries are found + """ + return self._get_file_node_text(["hist_file_ext_regex"], archive_entry)
+ + +
+[docs] + def get_entry_value(self, name, archive_entry): + """ + get the xml text associated with name under root archive_entry + returns None if no entry is found, expects only one entry + """ + node = self.get_optional_child(name, root=archive_entry) + if node is not None: + return self.text(node) + return None
+ + +
+[docs] + def get_latest_hist_files( + self, casename, model, from_dir, suffix="", ref_case=None + ): + """ + get the most recent history files in directory from_dir with suffix if provided + """ + test_hists = self.get_all_hist_files( + casename, model, from_dir, suffix=suffix, ref_case=ref_case + ) + ext_regexes = self.get_hist_file_ext_regexes( + self.get_entry(self._get_compname(model)) + ) + latest_files = {} + histlist = [] + for hist in test_hists: + ext = _get_extension(model, hist, ext_regexes) + latest_files[ext] = hist + + for key in latest_files.keys(): + histlist.append(latest_files[key]) + return histlist
+ + +
+[docs] + def get_all_hist_files(self, casename, model, from_dir, suffix="", ref_case=None): + """ + gets all history files in directory from_dir with suffix (if provided) + ignores files with ref_case in the name if ref_case is provided + """ + dmodel = self._get_compname(model) + # remove when component name is changed + if model == "fv3gfs": + model = "fv3" + if model == "cice5": + model = "cice" + if model == "ww3dev": + model = "ww3" + + hist_files = [] + extensions = self.get_hist_file_extensions(self.get_entry(dmodel)) + if suffix and len(suffix) > 0: + has_suffix = True + else: + has_suffix = False + + # Strip any trailing $ if suffix is present and add it back after the suffix + for ext in extensions: + if ext.endswith("$") and has_suffix: + ext = ext[:-1] + string = model + r"\d?_?(\d{4})?\." + ext + if has_suffix: + if not suffix in string: + string += r"\." + suffix + "$" + + if not string.endswith("$"): + string += "$" + + logger.debug("Regex is {}".format(string)) + pfile = re.compile(string) + hist_files.extend( + [ + f + for f in os.listdir(from_dir) + if pfile.search(f) + and ( + (f.startswith(casename) or f.startswith(model)) + and not f.endswith("cprnc.out") + ) + ] + ) + + if ref_case: + expect( + ref_case not in casename, + "ERROR: ref_case name {} conflicts with casename {}".format( + ref_case, casename + ), + ) + hist_files = [ + h for h in hist_files if not (ref_case in os.path.basename(h)) + ] + + hist_files = list(set(hist_files)) + hist_files.sort() + logger.debug( + "get_all_hist_files returns {} for model {}".format(hist_files, model) + ) + + return hist_files
+ + + @staticmethod + def _get_compname(model): + """ + Given a model name, return a possibly-modified name for use as the compname argument + to get_entry + """ + if model == "cpl": + return "drv" + return model
+ + + +def _get_extension(model, filepath, ext_regexes): + r""" + For a hist file for the given model, return what we call the "extension" + + model - The component model + filepath - The path of the hist file + ext_regexes - A list of model-specific regexes that are matched before falling back on + the general-purpose regex, r'\w+'. In many cases this will be an empty list, + signifying that we should just use the general-purpose regex. + + >>> _get_extension("cpl", "cpl.hi.nc", []) + 'hi' + >>> _get_extension("cpl", "cpl.h.nc", []) + 'h' + >>> _get_extension("cpl", "cpl.h1.nc.base", []) + 'h1' + >>> _get_extension("cpl", "TESTRUNDIFF.cpl.hi.0.nc.base", []) + 'hi' + >>> _get_extension("cpl", "TESTRUNDIFF_Mmpi-serial.f19_g16_rx1.A.melvin_gnu.C.fake_testing_only_20160816_164150-20160816_164240.cpl.h.nc", []) + 'h' + >>> _get_extension("clm","clm2_0002.h0.1850-01-06-00000.nc", []) + '0002.h0' + >>> _get_extension("pop","PFS.f09_g16.B1850.cheyenne_intel.allactive-default.GC.c2_0_b1f2_int.pop.h.ecosys.nday1.0001-01-02.nc", []) + 'h' + >>> _get_extension("mom", "ga0xnw.mom6.frc._0001_001.nc", []) + 'frc' + >>> _get_extension("mom", "ga0xnw.mom6.sfc.day._0001_001.nc", []) + 'sfc.day' + >>> _get_extension("mom", "bixmc5.mom6.prog._0001_01_05_84600.nc", []) + 'prog' + >>> _get_extension("mom", "bixmc5.mom6.hm._0001_01_03_42300.nc", []) + 'hm' + >>> _get_extension("mom", "bixmc5.mom6.hmz._0001_01_03_42300.nc", []) + 'hmz' + >>> _get_extension("pop", "casename.pop.dd.0001-01-02-00000", []) + 'dd' + >>> _get_extension("cism", "casename.cism.gris.h.0002-01-01-0000.nc", [r"\w+\.\w+"]) + 'gris.h' + """ + # Remove with component namechange + if model == "fv3gfs": + model = "fv3" + if model == "cice5": + model = "cice" + if model == "ww3dev": + model = "ww3" + basename = os.path.basename(filepath) + m = None + if ext_regexes is None: + ext_regexes = [] + + # First add any model-specific extension regexes; these will be checked before the + # general regex + if model == "mom": + # 
Need to check 'sfc.day' specially: the embedded '.' messes up the + # general-purpose regex + ext_regexes.append(r"sfc\.day") + + # Now add the general-purpose extension regex + ext_regexes.append(r"\w+") + + for ext_regex in ext_regexes: + full_regex_str = model + r"\d?_?(\d{4})?\.(" + ext_regex + r")[-\w\.]*" + full_regex = re.compile(full_regex_str) + m = full_regex.search(basename) + if m is not None: + if m.group(1) is not None: + result = m.group(1) + "." + m.group(2) + else: + result = m.group(2) + return result + + expect(m, "Failed to get extension for file '{}'".format(filepath)) + + return result +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/batch.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/batch.html new file mode 100644 index 00000000000..d26cc5311d7 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/batch.html @@ -0,0 +1,295 @@ + + + + + + CIME.XML.batch — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.batch

+"""
+Interface to the config_batch.xml file.  This class inherits from GenericXML.py
+
+The batch_system type="foo" blocks define most things. Machine-specific overrides
+can be defined by providing a batch_system MACH="mach" block.
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.XML.generic_xml import GenericXML
+from CIME.XML.files import Files
+from CIME.utils import expect
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class Batch(GenericXML): + def __init__( + self, + batch_system=None, + machine=None, + infile=None, + files=None, + extra_machines_dir=None, + ): + """ + initialize an object + + If extra_machines_dir is provided, it should be a string giving a path to an + additional directory that will be searched for a config_batch.xml file; if + found, the contents of this file will be appended to the standard + config_batch.xml. An empty string is treated the same as None. + """ + if files is None: + files = Files() + if infile is None: + infile = files.get_value("BATCH_SPEC_FILE") + + schema = files.get_schema("BATCH_SPEC_FILE") + + GenericXML.__init__(self, infile, schema=schema) + + self.batch_system_node = None + self.machine_node = None + self.batch_system = batch_system + self.machine = machine + + # Append the contents of $HOME/.cime/config_batch.xml if it exists. + # + # Also append the contents of a config_batch.xml file in the directory given by + # extra_machines_dir, if present. + # + # This could cause problems if node matches are repeated when only one is expected. + infile = os.path.join(os.environ.get("HOME"), ".cime", "config_batch.xml") + if os.path.exists(infile): + GenericXML.read(self, infile) + if extra_machines_dir: + infile = os.path.join(extra_machines_dir, "config_batch.xml") + if os.path.exists(infile): + GenericXML.read(self, infile) + + if self.batch_system is not None: + self.set_batch_system(self.batch_system, machine=machine) + +
+[docs] + def get_batch_system(self): + """ + Return the name of the batch system + """ + return self.batch_system
+ + +
+[docs] + def get_optional_batch_node(self, nodename, attributes=None): + """ + Return data on a node for a batch system + """ + expect( + self.batch_system_node is not None, + "Batch system not set, use parent get_node?", + ) + + if self.machine_node is not None: + result = self.get_optional_child( + nodename, attributes, root=self.machine_node + ) + if result is None: + return self.get_optional_child( + nodename, attributes, root=self.batch_system_node + ) + else: + return result + else: + return self.get_optional_child( + nodename, attributes, root=self.batch_system_node + )
+ + +
+[docs] + def set_batch_system(self, batch_system, machine=None): + """ + Sets the batch system block in the Batch object + """ + machine = machine if machine is not None else self.machine + if self.batch_system != batch_system or self.batch_system_node is None: + nodes = self.get_children("batch_system", {"type": batch_system}) + for node in nodes: + mach = self.get(node, "MACH") + if mach is None: + self.batch_system_node = node + elif mach == machine: + self.machine = machine + self.machine_node = node + + expect( + self.batch_system_node is not None, + "No batch system '{}' found".format(batch_system), + ) + + return batch_system
+ + + # pylint: disable=arguments-differ +
+[docs] + def get_value(self, name, attribute=None, resolved=True, subgroup=None): + """ + Get Value of fields in the config_batch.xml file + """ + expect( + self.batch_system_node is not None, + "Batch object has no batch system defined", + ) + expect(subgroup is None, "This class does not support subgroups") + value = None + + node = self.get_optional_batch_node(name) + if node is not None: + value = self.text(node) + + if resolved: + if value is not None: + value = self.get_resolved_value(value) + elif name in os.environ: + value = os.environ[name] + + return value
+ + +
+[docs] + def get_batch_jobs(self): + """ + Return a list of jobs with the first element the name of the case script + and the second a dict of qualifiers for the job + """ + jobs = [] + bnode = self.get_optional_child("batch_jobs") + if bnode: + for jnode in self.get_children(root=bnode): + if self.name(jnode) == "job": + name = self.get(jnode, "name") + jdict = {} + for child in self.get_children(root=jnode): + jdict[self.name(child)] = self.text(child) + + jobs.append((name, jdict)) + + return jobs
+
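The lookup order used by get_optional_batch_node — a machine-specific batch_system MACH="..." block takes precedence over the generic type block — can be illustrated with a minimal standalone sketch. This is not CIME's API; the XML content, machine name, and helper function below are illustrative assumptions.

```python
# Minimal sketch of the Batch lookup order: a machine-specific
# <batch_system MACH="..."> block overrides the generic type="..." block.
# The XML snippet and names here are illustrative, not CIME's actual config.
import xml.etree.ElementTree as ET

XML = """
<config_batch>
  <batch_system type="slurm">
    <batch_submit>sbatch</batch_submit>
    <batch_directive>#SBATCH</batch_directive>
  </batch_system>
  <batch_system type="slurm" MACH="mymach">
    <batch_submit>sbatch --cluster=c1</batch_submit>
  </batch_system>
</config_batch>
"""

def lookup(root, batch_system, machine, nodename):
    generic = machine_node = None
    for node in root.findall("batch_system"):
        if node.get("type") != batch_system:
            continue
        if node.get("MACH") is None:
            generic = node
        elif node.get("MACH") == machine:
            machine_node = node
    # Machine-specific block wins; fall back to the generic block.
    for scope in (machine_node, generic):
        if scope is not None:
            hit = scope.find(nodename)
            if hit is not None:
                return hit.text
    return None

root = ET.fromstring(XML)
print(lookup(root, "slurm", "mymach", "batch_submit"))     # machine override
print(lookup(root, "slurm", "mymach", "batch_directive"))  # generic fallback
```

As in get_optional_batch_node, a node is only searched in the generic block when the machine-specific block has no entry for it.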
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/component.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/component.html new file mode 100644 index 00000000000..f4b7d1bd41f --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/component.html @@ -0,0 +1,508 @@ + + + + + + CIME.XML.component — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.component

+"""
+Interface to the config_component.xml files.  This class inherits from EntryID.py
+"""
+from CIME.XML.standard_module_setup import *
+
+from CIME.XML.entry_id import EntryID
+from CIME.XML.files import Files
+from CIME.utils import get_cime_root
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class Component(EntryID): + def __init__(self, infile, comp_class): + """ + initialize a Component object from the component xml file in infile + associate the component class with comp_class if provided. + """ + self._comp_class = comp_class + if infile == "testingonly": + self.filename = infile + return + files = Files() + schema = None + EntryID.__init__(self, infile) + schema = files.get_schema( + "CONFIG_{}_FILE".format(comp_class), + attributes={"version": "{}".format(self.get_version())}, + ) + + if schema is not None: + self.validate_xml_file(infile, schema) + + # pylint: disable=arguments-differ +
+[docs] + def get_value(self, name, attribute=None, resolved=False, subgroup=None): + expect(subgroup is None, "This class does not support subgroups") + return EntryID.get_value(self, name, attribute, resolved)
+ + +
+[docs] + def get_valid_model_components(self): + """ + return a list of all possible valid generic (e.g. atm, clm, ...) model components + from the entries in the model CONFIG_CPL_FILE + """ + components = [] + comps_node = self.get_child("entry", {"id": "COMP_CLASSES"}) + comps = self.get_default_value(comps_node) + components = comps.split(",") + return components
+ + + def _get_value_match(self, node, attributes=None, exact_match=False): + """ + return the best match for the node <values> entries + Note that a component object uses a different matching algorithm than an entryid object + For a component object the _get_value_match used is below and is not the one in entry_id.py + """ + match_value = None + match_max = 0 + match_count = 0 + match_values = [] + expect(not exact_match, " exact_match not implemented in this method") + expect(node is not None, " Empty node in _get_value_match") + values = self.get_optional_child("values", root=node) + if values is None: + return + + # determine match_type if there is a tie + # ASSUME a default of "last" if "match" attribute is not there + match_type = self.get(values, "match", default="last") + + # use the default_value if present + val_node = self.get_optional_child("default_value", root=node) + if val_node is None: + logger.debug("No default_value for {}".format(self.get(node, "id"))) + return val_node + value = self.text(val_node) + if value is not None and len(value) > 0 and value != "UNSET": + match_values.append(value) + + for valnode in self.get_children("value", root=values): + # loop through all the keys in valnode (value nodes) attributes + for key, value in self.attrib(valnode).items(): + # determine if key is in attributes dictionary + match_count = 0 + if attributes is not None and key in attributes: + if re.search(value, attributes[key]): + logger.debug( + "Value {} and key {} match with value {}".format( + value, key, attributes[key] + ) + ) + match_count += 1 + else: + match_count = 0 + break + + # a match is found + if match_count > 0: + # append the current result + if self.get(values, "modifier") == "additive": + match_values.append(self.text(valnode)) + + # replace the current result if it already contains the new value + # otherwise append the current result + elif self.get(values, "modifier") == "merge": + if self.text(valnode) in match_values: + del 
match_values[:] + match_values.append(self.text(valnode)) + + else: + if match_type == "last": + # take the *last* best match + if match_count >= match_max: + del match_values[:] + match_max = match_count + match_value = self.text(valnode) + elif match_type == "first": + # take the *first* best match + if match_count > match_max: + del match_values[:] + match_max = match_count + match_value = self.text(valnode) + else: + expect( + False, + "match attribute can only have a value of 'last' or 'first'", + ) + + if len(match_values) > 0: + match_value = " ".join(match_values) + + return match_value + + # pylint: disable=arguments-differ +
+[docs] + def get_description(self, compsetname): + if self.get_version() == 3.0: + return self._get_description_v3(compsetname, self._comp_class) + else: + return self._get_description_v2(compsetname)
+ + +
+[docs] + def get_forcing_description(self, compsetname): + if self.get_version() == 3.0: + return self._get_description_v3(compsetname, "forcing") + else: + return ""
+ + + def _get_description_v3(self, compsetname, comp_class): + """ + version 3 of the config_component.xml file has the description section at the top of the file + the description field has one attribute 'modifier_mode' which has allowed values + '*' 0 or more modifiers (default) + '1' exactly 1 modifier + '?' 0 or 1 modifiers + '+' 1 or more modifiers + + modifiers are fields in the component section of the compsetname following the % symbol. + + The desc field can have an attribute which is the component class ('cpl', 'atm', 'lnd' etc) + or it can have an attribute 'option' which provides descriptions of each optional modifier + or (in the config_component_{model}.xml in the driver only) it can have the attribute 'forcing' + + component descriptions are matched to the compsetname using a set method + """ + expect( + comp_class is not None, "comp_class argument required for version3 files" + ) + comp_class = comp_class.lower() + rootnode = self.get_child("description") + desc = "" + desc_nodes = self.get_children("desc", root=rootnode) + + modifier_mode = self.get(rootnode, "modifier_mode") + if modifier_mode is None: + modifier_mode = "*" + expect( + modifier_mode in ("*", "1", "?", "+"), + "Invalid modifier_mode {} in file {}".format(modifier_mode, self.filename), + ) + optiondesc = {} + if comp_class == "forcing": + for node in desc_nodes: + forcing = self.get(node, "forcing") + if forcing is not None and compsetname.startswith(forcing + "_"): + expect( + len(desc) == 0, + "Too many matches on forcing field {} in file {}".format( + forcing, self.filename + ), + ) + desc = self.text(node) + if desc is None: + desc = compsetname.split("_")[0] + return desc + + # first pass just make a hash of the option descriptions + for node in desc_nodes: + option = self.get(node, "option") + if option is not None: + optiondesc[option] = self.text(node) + + # second pass find a comp_class match + desc = "" + for node in desc_nodes: + compdesc = self.get(node, comp_class) + + 
if compdesc is not None: + opt_parts = [x.rstrip("]") for x in compdesc.split("[%")] + parts = opt_parts.pop(0).split("%") + reqset = set(parts) + fullset = set(parts + opt_parts) + + match, complist = self._get_description_match( + compsetname, reqset, fullset, modifier_mode + ) + if match: + desc = self.text(node) + for opt in complist: + if opt in optiondesc: + desc += optiondesc[opt] + + # cpl and esp components may not have a description + if comp_class not in ["cpl", "esp"]: + expect( + len(desc) > 0, + "No description found for comp_class {} matching compsetname {} in file {}, expected match in {} % {}".format( + comp_class, + compsetname, + self.filename, + list(reqset), + list(opt_parts), + ), + ) + return desc + + def _get_description_match(self, compsetname, reqset, fullset, modifier_mode): + """ + + >>> obj = Component('testingonly', 'ATM') + >>> obj._get_description_match("1850_DATM%CRU_FRED",set(["DATM"]), set(["DATM","CRU","HSI"]), "*") + (True, ['DATM', 'CRU']) + >>> obj._get_description_match("1850_DATM%FRED_Barn",set(["DATM"]), set(["DATM","CRU","HSI"]), "*") + (False, None) + >>> obj._get_description_match("1850_DATM_Barn",set(["DATM"]), set(["DATM","CRU","HSI"]), "?") + (True, ['DATM']) + >>> obj._get_description_match("1850_DATM_Barn",set(["DATM"]), set(["DATM","CRU","HSI"]), "1") # doctest: +IGNORE_EXCEPTION_DETAIL + Traceback (most recent call last): + ... + CIMEError: ERROR: Expected exactly one modifer found 0 in ['DATM'] + >>> obj._get_description_match("1850_DATM%CRU%HSI_Barn",set(["DATM"]), set(["DATM","CRU","HSI"]), "1") # doctest: +IGNORE_EXCEPTION_DETAIL + Traceback (most recent call last): + ... 
+ CIMEError: ERROR: Expected exactly one modifer found 2 in ['DATM', 'CRU', 'HSI'] + >>> obj._get_description_match("1850_CAM50%WCCM%RCO2_Barn",set(["CAM50", "WCCM"]), set(["CAM50","WCCM","RCO2"]), "*") + (True, ['CAM50', 'WCCM', 'RCO2']) + + # The following is not allowed because the required WCCM field is missing + >>> obj._get_description_match("1850_CAM50%RCO2_Barn",set(["CAM50", "WCCM"]), set(["CAM50","WCCM","RCO2"]), "*") + (False, None) + >>> obj._get_description_match("1850_CAM50_Barn",set(["CAM50", "WCCM"]), set(["CAM50","WCCM","RCO2"]), "+") + (False, None) + >>> obj._get_description_match("1850_CAM50%WCCM_Barn",set(["CAM50", "WCCM"]), set(["CAM50","WCCM","RCO2"]), "+") + (True, ['CAM50', 'WCCM']) + >>> obj._get_description_match("scn:1850_atm:CAM50%WCCM_Barn",set(["CAM50", "WCCM"]), set(["CAM50","WCCM","RCO2"]), "+") + (True, ['CAM50', 'WCCM']) + """ + match = False + comparts = compsetname.split("_") + matchcomplist = None + for comp in comparts: + if ":" in comp: + comp = comp.split(":")[1] + complist = comp.split("%") + cset = set(complist) + + if cset == reqset or (cset > reqset and cset <= fullset): + if modifier_mode == "1": + expect( + len(complist) == 2, + "Expected exactly one modifer found {} in {}".format( + len(complist) - 1, complist + ), + ) + elif modifier_mode == "+": + expect( + len(complist) >= 2, + "Expected one or more modifers found {} in {}".format( + len(complist) - 1, list(reqset) + ), + ) + elif modifier_mode == "?": + expect( + len(complist) <= 2, + "Expected 0 or one modifers found {} in {}".format( + len(complist) - 1, complist + ), + ) + expect( + not match, + "Found multiple matches in file {} for {}".format( + self.filename, comp + ), + ) + match = True + matchcomplist = complist + # found a match + + return match, matchcomplist + + def _get_description_v2(self, compsetname): + rootnode = self.get_child("description") + desc = "" + desc_nodes = self.get_children("desc", root=rootnode) + for node in desc_nodes: + 
compsetmatch = self.get(node, "compset") + if compsetmatch is not None and re.search(compsetmatch, compsetname): + desc += self.text(node) + + return desc + +
+[docs] + def print_values(self): + """ + print values for help and description in target config_component.xml file + """ + helpnode = self.get_child("help") + helptext = self.text(helpnode) + logger.info(" {}".format(helptext)) + entries = self.get_children("entry") + for entry in entries: + name = self.get(entry, "id") + text = self.text(self.get_child("desc", root=entry)) + logger.info(" {:20s} : {}".format(name, text.encode("utf-8")))
+ + +
+[docs] + def return_values(self): + """ + return a list of hashes from target config_component.xml file + This routine is used by external tools in https://github.com/NCAR/CESM_xml2html + """ + entry_dict = dict() + items = list() + helpnode = self.get_optional_child("help") + if helpnode: + helptext = self.text(helpnode) + else: + helptext = "" + entries = self.get_children("entry") + for entry in entries: + item = dict() + name = self.get(entry, "id") + datatype = self.text(self.get_child("type", root=entry)) + valid_values = self.get_valid_values(name) + default_value = self.get_default_value(node=entry) + group = self.text(self.get_child("group", root=entry)) + filename = self.text(self.get_child("file", root=entry)) + text = self.text(self.get_child("desc", root=entry)) + item = { + "name": name, + "datatype": datatype, + "valid_values": valid_values, + "value": default_value, + "group": group, + "filename": filename, + "desc": text.encode("utf-8"), + } + items.append(item) + entry_dict = {"items": items} + + return helptext, entry_dict
+
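The compset-name matching used by _get_description_match can be reduced to a small standalone sketch: each "_"-separated field of the compset name is split on "%" into a base component plus modifiers, and a description matches when its required fields are all present and every modifier is known. This simplified version omits the modifier_mode checks and duplicate-match detection; the function name is illustrative.

```python
# Minimal sketch of Component._get_description_match: a compset name like
# "1850_CAM50%WCCM%RCO2_..." matches a description whose required set of
# fields is present and whose '%' modifiers are all in the allowed full set.
def description_match(compsetname, reqset, fullset):
    for comp in compsetname.split("_"):
        if ":" in comp:                    # strip "atm:"-style prefixes
            comp = comp.split(":")[1]
        complist = comp.split("%")
        cset = set(complist)
        if cset == reqset or (cset > reqset and cset <= fullset):
            return True, complist
    return False, None

print(description_match("1850_DATM%CRU_FRED", {"DATM"}, {"DATM", "CRU", "HSI"}))
# (True, ['DATM', 'CRU']): CRU is a known optional modifier of DATM
```

An unknown modifier (e.g. "DATM%FRED" against the same sets) fails the `cset <= fullset` test and yields no match, mirroring the doctest behavior shown in the source above.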
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/compsets.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/compsets.html new file mode 100644 index 00000000000..4e66c48b03c --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/compsets.html @@ -0,0 +1,249 @@ + + + + + + CIME.XML.compsets — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.compsets

+"""
+Common interface to XML files which follow the compsets format.
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.XML.generic_xml import GenericXML
+from CIME.XML.entry_id import EntryID
+from CIME.XML.files import Files
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class Compsets(GenericXML): + def __init__(self, infile=None, files=None): + if files is None: + files = Files() + schema = files.get_schema("COMPSETS_SPEC_FILE") + GenericXML.__init__(self, infile, schema=schema) + +
+[docs] + def get_compset_match(self, name): + """ + science support is used in cesm to determine if this compset and grid + are scientifically supported. science_support is returned as an array of grids for this compset + """ + nodes = self.get_children("compset") + alias = None + lname = None + + science_support = [] + + for node in nodes: + alias = self.get_element_text("alias", root=node) + lname = self.get_element_text("lname", root=node) + if alias == name or lname == name: + science_support_nodes = self.get_children("science_support", root=node) + for snode in science_support_nodes: + science_support.append(self.get(snode, "grid")) + logger.debug( + "Found node match with alias: {} and lname: {}".format(alias, lname) + ) + return (lname, alias, science_support) + return (None, None, [False])
+ + +
+[docs] + def get_compset_var_settings(self, compset, grid): + """ + Variables can be set in config_compsets.xml in entry id settings with compset and grid attributes + find and return id value pairs here + """ + entries = self.get_optional_child("entries") + result = [] + if entries is not None: + nodes = self.get_children("entry", root=entries) + # Get an empty entryid obj to use + entryidobj = EntryID() + for node in nodes: + value = entryidobj.get_default_value( + node, {"grid": grid, "compset": compset} + ) + if value is not None: + result.append((self.get(node, "id"), value)) + + return result
+ + + # pylint: disable=arguments-differ +
+[docs] + def get_value(self, name, attribute=None, resolved=False, subgroup=None): + expect(subgroup is None, "This class does not support subgroups") + if name == "help": + rootnode = self.get_child("help") + helptext = self.text(rootnode) + return helptext + else: + compsets = {} + nodes = self.get_children("compset") + for node in nodes: + for child in node: + logger.debug( + "Here child is {} with value {}".format( + self.name(child), self.text(child) + ) + ) + if self.name(child) == "alias": + alias = self.text(child) + if self.name(child) == "lname": + lname = self.text(child) + compsets[alias] = lname + return compsets
+ + +
+[docs] + def print_values(self, arg_help=True): + help_text = self.get_value(name="help") + compsets = self.get_children("compset") + if arg_help: + logger.info(" {} ".format(help_text)) + + logger.info(" --------------------------------------") + logger.info(" Compset Alias: Compset Long Name ") + logger.info(" --------------------------------------") + for compset in compsets: + logger.info( + " {:20} : {}".format( + self.text(self.get_child("alias", root=compset)), + self.text(self.get_child("lname", root=compset)), + ) + )
+ + +
+[docs] + def get_compset_longnames(self): + compset_nodes = self.get_children("compset") + longnames = [] + for comp in compset_nodes: + longnames.append(self.text(self.get_child("lname", root=comp))) + return longnames
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/entry_id.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/entry_id.html new file mode 100644 index 00000000000..6d0c140145e --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/entry_id.html @@ -0,0 +1,742 @@ + + + + + + CIME.XML.entry_id — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.entry_id

+"""
+Common interface to XML files which follow the entry id format.
+This is an abstract class and is expected to
+be used by other XML interface modules and not directly.
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.utils import expect, convert_to_string, convert_to_type
+from CIME.XML.generic_xml import GenericXML
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class EntryID(GenericXML): + def __init__(self, infile=None, schema=None, read_only=True): + GenericXML.__init__(self, infile, schema, read_only=read_only) + self.groups = {} + +
+[docs] + def get_default_value(self, node, attributes=None): + """ + Return the default value for that entry + """ + value = self._get_value_match(node, attributes) + if value is None: + # Fall back to default value + value = self.get_element_text("default_value", root=node) + else: + logger.debug("node is {} value is {}".format(self.get(node, "id"), value)) + + if value is None: + logger.debug("For vid {} value is none".format(self.get(node, "id"))) + value = "" + + return value
+ + +
+[docs] + def set_default_value(self, vid, val): + node = self.get_optional_child("entry", {"id": vid}) + if node is not None: + val = self.set_element_text("default_value", val, root=node) + if val is None: + logger.warning( + "Called set_default_value on a node without default_value field" + ) + + return val
+ + +
+[docs] + def get_value_match( + self, + vid, + attributes=None, + exact_match=False, + entry_node=None, + replacement_for_none=None, + ): + """Handle this case: + <entry id ...> + <values> + <value A="a1">X</value> + <value A="a2">Y</value> + <value A="a3" B="b1">Z</value> + </values> + </entry> + + If replacement_for_none is provided, then: if the found text value would give a + None value, instead replace it with the value given by the replacement_for_none + argument. (However, still return None if no match is found.) This may or may not + be needed, but is in place to maintain some old logic. + + """ + + if entry_node is not None: + value = self._get_value_match( + entry_node, + attributes, + exact_match, + replacement_for_none=replacement_for_none, + ) + else: + node = self.get_optional_child("entry", {"id": vid}) + value = None + if node is not None: + value = self._get_value_match( + node, + attributes, + exact_match, + replacement_for_none=replacement_for_none, + ) + logger.debug("(get_value_match) vid {} value {}".format(vid, value)) + return value
+ + + def _get_value_match( + self, node, attributes=None, exact_match=False, replacement_for_none=None + ): + """ + Note that the component class has a specific version of this function + + If replacement_for_none is provided, then: if the found text value would give a + None value, instead replace it with the value given by the replacement_for_none + argument. (However, still return None if no match is found.) This may or may not + be needed, but is in place to maintain some old logic. + """ + # if there is a <values> element - check to see if there is a match attribute + # if there is NOT a match attribute, then set the default to "first" + # this is different than the component class _get_value_match where the default is "last" + values_node = self.get_optional_child("values", root=node) + if values_node is not None: + match_type = self.get(values_node, "match", default="first") + node = values_node + else: + match_type = "first" + + # Store nodes that match the attributes and their scores. + matches = [] + nodes = self.get_children("value", root=node) + for vnode in nodes: + # For each node in the list start a score. + score = 0 + if attributes: + for attribute in self.attrib(vnode).keys(): + # For each attribute, add to the score. + score += 1 + # If some attribute is specified that we don't know about, + # or the values don't match, it's not a match we want. + if exact_match: + if attribute not in attributes or attributes[ + attribute + ] != self.get(vnode, attribute): + score = -1 + break + else: + if attribute not in attributes or not re.search( + self.get(vnode, attribute), attributes[attribute] + ): + score = -1 + break + + # Add valid matches to the list. 
+ if score >= 0: + matches.append((score, vnode)) + + if not matches: + return None + + # Get maximum score using either a "last" or "first" match in case of a tie + max_score = -1 + mnode = None + for score, node in matches: + if match_type == "last": + # take the *last* best match + if score >= max_score: + max_score = score + mnode = node + elif match_type == "first": + # take the *first* best match + if score > max_score: + max_score = score + mnode = node + else: + expect( + False, + "match attribute can only have a value of 'last' or 'first', value is %s" + % match_type, + ) + + text = self.text(mnode) + if text is None: + # NOTE(wjs, 2021-06-03) I'm not sure when (if ever) this can happen, but I'm + # putting this logic here to maintain some old logic, to be safe. + text = replacement_for_none + return text + +
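The scoring scheme in _get_value_match — one point per regex-matched attribute, candidates with any non-matching attribute discarded, ties broken by match="first" or match="last" — can be sketched standalone. The candidate representation and function name below are illustrative, not CIME's API.

```python
# Minimal sketch of EntryID._get_value_match scoring: each <value> node
# earns one point per attribute whose pattern regex-matches the query;
# any miss disqualifies the node; ties follow match="first" or "last".
import re

def best_match(candidates, attributes, match_type="first"):
    """candidates: list of (attrs_dict, text) pairs standing in for <value> nodes."""
    best_score, best_text = -1, None
    for attrs, text in candidates:
        score = 0
        for key, pattern in attrs.items():
            if key not in attributes or not re.search(pattern, attributes[key]):
                score = -1        # unknown or non-matching attribute: disqualify
                break
            score += 1
        if score < 0:
            continue
        # "last" keeps ties (>=); "first" keeps only strict improvements (>)
        if (match_type == "last" and score >= best_score) or (
            match_type == "first" and score > best_score
        ):
            best_score, best_text = score, text
    return best_text

cands = [({"compiler": "gnu"}, "A"), ({"compiler": "gnu", "mpilib": "mpich"}, "B")]
print(best_match(cands, {"compiler": "gnu", "mpilib": "mpich"}))  # "B" (two attrs beat one)
```

This is why EntryID defaults to "first" while the Component override defaults to "last": with equally specific candidates, the two classes deliberately pick opposite ends of the list.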
+[docs] + def get_node_element_info(self, vid, element_name): + node = self.get_optional_child("entry", {"id": vid}) + if node is None: + return None + else: + return self._get_node_element_info(node, element_name)
+ + + def _get_node_element_info(self, node, element_name): + return self.get_element_text(element_name, root=node) + + def _get_type_info(self, node): + if node is None: + return None + val = self._get_node_element_info(node, "type") + if val is None: + return "char" + return val + +
+[docs] + def get_type_info(self, vid): + vid, _, _ = self.check_if_comp_var(vid) + node = self.scan_optional_child("entry", {"id": vid}) + return self._get_type_info(node)
+ + + # pylint: disable=unused-argument +
+[docs] + def check_if_comp_var(self, vid, attribute=None, node=None): + # handled in classes + return vid, None, False
+ + + def _get_default(self, node): + return self._get_node_element_info(node, "default_value") + + # Get description , expect child with tag "description" for parent node +
+[docs] + def get_description(self, node): + return self._get_node_element_info(node, "desc")
+ + + # Get group , expect node with tag "group" + # entry id nodes are children of group nodes +
+[docs] + def get_groups(self, node): + groups = self.get_children("group") + result = [] + nodes = [] + vid = self.get(node, "id") + for group in groups: + nodes = self.get_children("entry", attributes={"id": vid}, root=group) + if nodes: + result.append(self.get(group, "id")) + + return result
+ + +
+[docs] + def get_valid_values(self, vid): + node = self.scan_optional_child("entry", {"id": vid}) + if node is None: + return None + return self._get_valid_values(node)
+ + + def _get_valid_values(self, node): + valid_values = self.get_element_text("valid_values", root=node) + valid_values_list = [] + if valid_values: + valid_values_list = [item.lstrip() for item in valid_values.split(",")] + return valid_values_list + +
+[docs] + def set_valid_values(self, vid, new_valid_values): + node = self.scan_optional_child("entry", {"id": vid}) + if node is None: + return None + return self._set_valid_values(node, new_valid_values)
+ + +
+[docs] + def get_nodes_by_id(self, vid): + return self.scan_children("entry", {"id": vid})
+ + + def _set_valid_values(self, node, new_valid_values): + old_vv = self._get_valid_values(node) + if old_vv is None: + self.make_child("valid_values", text=new_valid_values) + logger.debug( + "Adding valid_values {} for {}".format( + new_valid_values, self.get(node, "id") + ) + ) + else: + vv_text = self.set_element_text("valid_values", new_valid_values, root=node) + logger.debug( + "Replacing valid_values {} with {} for {}".format( + old_vv, vv_text, self.get(node, "id") + ) + ) + + current_value = self.get(node, "value") + valid_values_list = self._get_valid_values(node) + if current_value is not None and current_value not in valid_values_list: + logger.warning( + 'WARNING: Current setting for {} not in new valid values. Updating setting to "{}"'.format( + self.get(node, "id"), valid_values_list[0] + ) + ) + self._set_value(node, valid_values_list[0]) + return new_valid_values + + def _set_value(self, node, value, vid=None, subgroup=None, ignore_type=False): + """ + Set the value of an entry-id field to value + Returns the value or None if not found + subgroup is ignored in the general routine and applied in specific methods + """ + expect(subgroup is None, "Subgroup not supported") + str_value = self.get_valid_value_string(node, value, vid, ignore_type) + self.set(node, "value", str_value) + return value + +
+[docs] + def get_valid_value_string(self, node, value, vid=None, ignore_type=False): + valid_values = self._get_valid_values(node) + if ignore_type: + expect( + isinstance(value, str), + "Value must be type string if ignore_type is true", + ) + str_value = value + return str_value + type_str = self._get_type_info(node) + str_value = convert_to_string(value, type_str, vid) + + if valid_values and not str_value.startswith("$"): + expect( + str_value in valid_values, + "Did not find {} in valid values for {}: {}".format( + value, vid, valid_values + ), + ) + return str_value
+ + +
+[docs] + def set_value(self, vid, value, subgroup=None, ignore_type=False): + """ + Set the value of an entry-id field to value + Returns the value or None if not found + subgroup is ignored in the general routine and applied in specific methods + """ + val = None + root = ( + self.root + if subgroup is None + else self.get_optional_child("group", {"id": subgroup}) + ) + node = self.get_optional_child("entry", {"id": vid}, root=root) + if node is not None: + val = self._set_value(node, value, vid, subgroup, ignore_type) + return val
+ + +
+[docs] + def get_values(self, vid, attribute=None, resolved=True, subgroup=None): + """ + Same functionality as get_value but returns a list; if the + value in xml contains commas, the list has multiple elements split on + commas + """ + results = [] + node = self.scan_optional_child("entry", {"id": vid}) + if node is None: + return results + str_result = self._get_value( + node, attribute=attribute, resolved=resolved, subgroup=subgroup + ) + str_results = str_result.split(",") + for result in str_results: + # Return value as right type if we were able to fully resolve + # otherwise, we have to leave as string. + if "$" in result: + results.append(result) + else: + type_str = self._get_type_info(node) + results.append(convert_to_type(result, type_str, vid)) + return results
+ + + # pylint: disable=arguments-differ +
+[docs] + def get_value(self, vid, attribute=None, resolved=True, subgroup=None): + """ + Get a value for entry with id attribute vid. + or from the values field if the attribute argument is provided + and matches + """ + root = ( + self.root + if subgroup is None + else self.get_optional_child("group", {"id": subgroup}) + ) + node = self.scan_optional_child("entry", {"id": vid}, root=root) + if node is None: + return + + val = self._get_value( + node, attribute=attribute, resolved=resolved, subgroup=subgroup + ) + # Return value as right type if we were able to fully resolve + # otherwise, we have to leave as string. + if val is None: + return val + elif "$" in val: + return val + else: + type_str = self._get_type_info(node) + return convert_to_type(val, type_str, vid)
+ + + def _get_value(self, node, attribute=None, resolved=True, subgroup=None): + """ + internal get_value, does not convert to type + """ + logger.debug("(_get_value) ({}, {}, {})".format(attribute, resolved, subgroup)) + val = None + if node is None: + logger.debug("No node") + return val + + logger.debug( + "Found node {} with attributes {}".format( + self.name(node), self.attrib(node) + ) + ) + if attribute: + vals = self.get_optional_child("values", root=node) + node = vals if vals is not None else node + val = self.get_element_text("value", attributes=attribute, root=node) + elif self.get(node, "value") is not None: + val = self.get(node, "value") + else: + val = self.get_default_value(node) + + if resolved: + val = self.get_resolved_value(val) + + return val + +
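The type handling in get_value — a value still containing "$" (an unresolved reference) is returned as a string, otherwise the text is converted to the entry's declared type — can be sketched in isolation. The cast table below is a simplified stand-in for CIME.utils.convert_to_type, not its actual implementation.

```python
# Minimal sketch of EntryID.get_value's type handling: unresolved "$VAR"
# references stay strings; fully resolved text is cast to the declared type.
# The cast table is an illustrative stand-in for CIME.utils.convert_to_type.
def convert(val, type_str):
    if val is None or "$" in val:
        return val                       # leave unresolved references alone
    casts = {
        "integer": int,
        "real": float,
        "char": str,
        "logical": lambda s: s.upper() in ("TRUE", ".TRUE."),
    }
    return casts[type_str](val)

print(convert("16", "integer"))          # 16
print(convert("$MAX_TASKS", "integer"))  # "$MAX_TASKS", left unresolved
```

get_values applies the same rule per comma-separated element, which is why a partially resolved list can mix typed values and raw "$" strings.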
+[docs] + def get_child_content(self, vid, childname): + val = None + node = self.get_optional_child("entry", {"id": vid}) + if node is not None: + val = self.get_element_text(childname, root=node) + return val
+ + +
+[docs] + def get_elements_from_child_content(self, childname, childcontent): + nodes = self.get_children("entry") + elements = [] + for node in nodes: + content = self.get_element_text(childname, root=node) + expect( + content is not None, + "No childname {} for id {}".format(childname, self.get(node, "id")), + ) + if content == childcontent: + elements.append(node) + + return elements
+ + +
+[docs] + def add_elements_by_group(self, srcobj, attributes=None, infile=None): + """ + Add elements from srcobj to self under the appropriate + group element, entries to be added must have a child element + <file> with value "infile" + """ + if infile is None: + infile = os.path.basename(self.filename) + + # First get the list of entries in srcobj with matching file children + nodelist = srcobj.get_elements_from_child_content("file", infile) + + # For matches found: Remove {<group>, <file>, <values>} + # children from each entry and set the default value for the + # new entries in self - putting the entries as children of + # group elements in file $file + for src_node in nodelist: + node = self.copy(src_node) + gname = srcobj.get_element_text("group", root=src_node) + if gname is None: + gname = "group_not_set" + + # If group with id=$gname does not exist in self.groups + # then create the group node and add it to infile file + if gname not in self.groups.keys(): + # initialize an empty list + newgroup = self.make_child(name="group", attributes={"id": gname}) + self.groups[gname] = newgroup + + # Remove {<group>, <file>, <values>} from the entry element + self.cleanupnode(node) + + # Add the entry element to the group + self.add_child(node, root=self.groups[gname]) + + # Set the default value, it may be determined by a regular + # expression match to a dictionary value in attributes matching a + # value attribute in node + value = srcobj.get_default_value(src_node, attributes) + if value is not None and len(value): + self._set_value(node, value) + + logger.debug("Adding to group " + gname) + + return nodelist</div>
+ + +
+[docs] + def cleanupnode(self, node): + """ + in env_base.py, not expected to get here + """ + expect(False, " Not expected to be here {}".format(self.get(node, "id")))
+ + +
+[docs] + def compare_xml(self, other, root=None, otherroot=None): + xmldiffs = {} + if root is not None: + expect(otherroot is not None, " inconsistent request") + f1nodes = self.scan_children("entry", root=root) + for node in f1nodes: + vid = self.get(node, "id") + logger.debug("Compare vid {}".format(vid)) + f2match = other.scan_optional_child( + "entry", attributes={"id": vid}, root=otherroot + ) + expect(f2match is not None, "Could not find {} in Locked file".format(vid)) + if node != f2match: + f1val = self.get_value(vid, resolved=False) + if f1val is not None: + f2val = other.get_value(vid, resolved=False) + if f1val != f2val: + xmldiffs[vid] = [f1val, f2val] + elif hasattr(self, "_components"): + # pylint: disable=no-member + for comp in self._components: + f1val = self.get_value( + "{}_{}".format(vid, comp), resolved=False + ) + if f1val is not None: + f2val = other.get_value( + "{}_{}".format(vid, comp), resolved=False + ) + if f1val != f2val: + xmldiffs[vid] = [f1val, f2val] + else: + if node != f2match: + f1value_nodes = self.get_children("value", root=node) + for valnode in f1value_nodes: + f2valnodes = other.get_children( + "value", + root=f2match, + attributes=self.attrib(valnode), + ) + for f2valnode in f2valnodes: + if ( + self.attrib(valnode) is None + and self.attrib(f2valnode) is None + or self.attrib(f2valnode) + == self.attrib(valnode) + ): + if other.get_resolved_value( + self.text(f2valnode) + ) != self.get_resolved_value( + self.text(valnode) + ): + xmldiffs[ + "{}:{}".format( + vid, self.attrib(valnode) + ) + ] = [ + self.text(valnode), + self.text(f2valnode), + ] + return xmldiffs</div>
+ + +
+[docs] + def overwrite_existing_entries(self): + # if there exist two nodes with the same id delete the first one. + for node in self.get_children("entry"): + vid = self.get(node, "id") + samenodes = self.get_nodes_by_id(vid) + if len(samenodes) > 1: + expect( + len(samenodes) == 2, + "Too many matches for id {} in file {}".format(vid, self.filename), + ) + logger.debug("Overwriting node {}".format(vid)) + read_only = self.read_only + if read_only: + self.read_only = False + self.remove_child(samenodes[0]) + self.read_only = read_only</div>
+ + + def __iter__(self): + for node in self.scan_children("entry"): + vid = self.get(node, "id") + yield vid, self.get_value(vid)
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_archive.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_archive.html new file mode 100644 index 00000000000..105a3211388 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_archive.html @@ -0,0 +1,173 @@ + + + + + + CIME.XML.env_archive — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.env_archive

+"""
+Interface to the env_archive.xml file.  This class inherits from EnvBase
+"""
+from CIME.XML.standard_module_setup import *
+from CIME import utils
+from CIME.XML.archive_base import ArchiveBase
+from CIME.XML.env_base import EnvBase
+
+logger = logging.getLogger(__name__)
+# pylint: disable=super-init-not-called
+
+[docs] +class EnvArchive(ArchiveBase, EnvBase): + def __init__(self, case_root=None, infile="env_archive.xml", read_only=False): + """ + initialize an object interface to file env_archive.xml in the case directory + """ + schema = os.path.join(utils.get_schema_path(), "env_archive.xsd") + EnvBase.__init__(self, case_root, infile, schema=schema, read_only=read_only) + +
+[docs] + def get_entries(self): + return self.get_children("comp_archive_spec")
+ + +
+[docs] + def get_entry_info(self, archive_entry): + compname = self.get(archive_entry, "compname") + compclass = self.get(archive_entry, "compclass") + return compname, compclass
+ + +
+[docs] + def get_rpointer_contents(self, archive_entry): + rpointer_items = [] + rpointer_nodes = self.get_children("rpointer", root=archive_entry) + for rpointer_node in rpointer_nodes: + file_node = self.get_child("rpointer_file", root=rpointer_node) + content_node = self.get_child("rpointer_content", root=rpointer_node) + rpointer_items.append([self.text(file_node), self.text(content_node)]) + return rpointer_items
+ + +
+[docs] + def get_type_info(self, vid): + return "char"
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_base.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_base.html new file mode 100644 index 00000000000..a704b942239 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_base.html @@ -0,0 +1,430 @@ + + + + + + CIME.XML.env_base — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.env_base

+"""
+Base class for env files.  This class inherits from EntryID.py
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.XML.entry_id import EntryID
+from CIME.XML.headers import Headers
+from CIME.utils import convert_to_type
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class EnvBase(EntryID): + def __init__(self, case_root, infile, schema=None, read_only=False): + if case_root is None: + case_root = os.getcwd() + self._caseroot = case_root + if os.path.isabs(infile): + fullpath = infile + else: + fullpath = os.path.join(case_root, infile) + + EntryID.__init__(self, fullpath, schema=schema, read_only=read_only) + + self._id_map = None + self._group_map = None + + if not os.path.isfile(fullpath): + headerobj = Headers() + headernode = headerobj.get_header_node(os.path.basename(fullpath)) + self.add_child(headernode) + else: + self._setup_cache() + + def _setup_cache(self): + self._id_map = {} # map id directly to nodes + self._group_map = {} # map group name to entry id dict + + group_elems = self.get_children("group") + for group_elem in group_elems: + group_name = self.get(group_elem, "id") + expect( + group_name not in self._group_map, + "Repeat group '{}'".format(group_name), + ) + group_map = {} + self._group_map[group_name] = group_map + entry_elems = self.get_children("entry", root=group_elem) + for entry_elem in entry_elems: + entry_id = self.get(entry_elem, "id") + expect( + entry_id not in group_map, + "Repeat entry '{}' in group '{}'".format(entry_id, group_name), + ) + group_map[entry_id] = entry_elem + if entry_id in self._id_map: + self._id_map[entry_id].append(entry_elem) + else: + self._id_map[entry_id] = [entry_elem] + + self.lock() + +
+[docs] + def change_file(self, newfile, copy=False): + self.unlock() + EntryID.change_file(self, newfile, copy=copy) + self._setup_cache()
+ + +
+[docs] + def get_children(self, name=None, attributes=None, root=None): + if ( + self.locked + and name == "entry" + and attributes is not None + and attributes.keys() == ["id"] + ): + entry_id = attributes["id"] + if root is None or self.name(root) == "file": + if entry_id in self._id_map: + return self._id_map[entry_id] + else: + return [] + else: + expect( + self.name(root) == "group", + "Unexpected elem '{}' for {}, attrs {}".format( + self.name(root), self.filename, self.attrib(root) + ), + ) + group_id = self.get(root, "id") + if ( + group_id in self._group_map + and entry_id in self._group_map[group_id] + ): + return [self._group_map[group_id][entry_id]] + else: + return [] + + else: + # Non-compliant look up + return EntryID.get_children( + self, name=name, attributes=attributes, root=root + )
+ + +
+[docs] + def scan_children(self, nodename, attributes=None, root=None): + if ( + self.locked + and nodename == "entry" + and attributes is not None + and attributes.keys() == ["id"] + ): + return EnvBase.get_children( + self, name=nodename, attributes=attributes, root=root + ) + else: + return EntryID.scan_children( + self, nodename, attributes=attributes, root=root + )
+ + +
+[docs] + def set_components(self, components): + if hasattr(self, "_components"): + # pylint: disable=attribute-defined-outside-init + self._components = components
+ + +
+[docs] + def check_if_comp_var(self, vid, attribute=None, node=None): + comp = None + if node is None: + nodes = self.scan_children("entry", {"id": vid}) + if len(nodes): + node = nodes[0] + + if node: + valnodes = self.scan_children( + "value", attributes={"compclass": None}, root=node + ) + if len(valnodes) == 0: + logger.debug("vid {} is not a compvar".format(vid)) + return vid, None, False + else: + logger.debug("vid {} is a compvar".format(vid)) + if attribute is not None: + comp = attribute["compclass"] + return vid, comp, True + else: + if hasattr(self, "_components") and self._components: + new_vid = None + for comp in self._components: + if vid.endswith("_" + comp): + new_vid = vid.replace("_" + comp, "", 1) + elif vid.startswith(comp + "_"): + new_vid = vid.replace(comp + "_", "", 1) + elif "_" + comp + "_" in vid: + new_vid = vid.replace(comp + "_", "", 1) + if new_vid is not None: + break + if new_vid is not None: + logger.debug("vid {} is a compvar with comp {}".format(vid, comp)) + return new_vid, comp, True + + return vid, None, False
+ + +
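The name matching that `check_if_comp_var` performs above (strip a known component class off a variable id such as `NTASKS_ATM`) can be illustrated with a self-contained sketch. `split_comp_var` is a hypothetical simplification handling only the suffix and prefix cases, not the embedded `_COMP_` case the real method also covers:

```python
# Hypothetical sketch of check_if_comp_var's name matching: strip a known
# component class suffix or prefix from a variable id.
def split_comp_var(vid, components):
    for comp in components:
        if vid.endswith("_" + comp):
            return vid[: -len("_" + comp)], comp  # e.g. NTASKS_ATM -> NTASKS
        if vid.startswith(comp + "_"):
            return vid[len(comp + "_"):], comp    # e.g. ATM_NCPL -> NCPL
    return vid, None  # not a per-component variable

print(split_comp_var("NTASKS_ATM", ["ATM", "OCN"]))  # ('NTASKS', 'ATM')
```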
+[docs] + def get_value(self, vid, attribute=None, resolved=True, subgroup=None): + """ + Get a value for entry with id attribute vid. + or from the values field if the attribute argument is provided + and matches + """ + value = None + vid, comp, iscompvar = self.check_if_comp_var(vid, attribute) + logger.debug("vid {} comp {} iscompvar {}".format(vid, comp, iscompvar)) + if iscompvar: + if comp is None: + if subgroup is not None: + comp = subgroup + else: + logger.debug("Not enough info to get value for {}".format(vid)) + return value + if attribute is None: + attribute = {"compclass": comp} + else: + attribute["compclass"] = comp + node = self.scan_optional_child("entry", {"id": vid}) + if node is not None: + type_str = self._get_type_info(node) + values = self.get_optional_child("values", root=node) + node = values if values is not None else node + val = self.get_element_text("value", attribute, root=node) + if val is not None: + if val.startswith("$"): + value = val + else: + value = convert_to_type(val, type_str, vid) + return value + + return EntryID.get_value( + self, vid, attribute=attribute, resolved=resolved, subgroup=subgroup + )
+ + +
+[docs] + def set_value(self, vid, value, subgroup=None, ignore_type=False): + """ + Set the value of an entry-id field to value + Returns the value or None if not found + subgroup is ignored in the general routine and applied in specific methods + """ + vid, comp, iscompvar = self.check_if_comp_var(vid, None) + val = None + root = ( + self.root + if subgroup is None + else self.get_optional_child("group", {"id": subgroup}) + ) + node = self.scan_optional_child("entry", {"id": vid}, root=root) + if node is not None: + if iscompvar and comp is None: + # pylint: disable=no-member + for comp in self._components: + val = self._set_value( + node, value, vid, subgroup, ignore_type, compclass=comp + ) + else: + val = self._set_value( + node, value, vid, subgroup, ignore_type, compclass=comp + ) + return val
+ + + # pylint: disable=arguments-differ + def _set_value( + self, node, value, vid=None, subgroup=None, ignore_type=False, compclass=None + ): + if vid is None: + vid = self.get(node, "id") + vid, _, iscompvar = self.check_if_comp_var(vid, node=node) + + if iscompvar: + expect(compclass is not None, "compclass must be specified if is comp var") + attribute = {"compclass": compclass} + str_value = self.get_valid_value_string(node, value, vid, ignore_type) + values = self.get_optional_child("values", root=node) + node = values if values is not None else node + val = self.set_element_text("value", str_value, attribute, root=node) + else: + val = EntryID._set_value(self, node, value, vid, subgroup, ignore_type) + return val + +
+[docs] + def get_nodes_by_id(self, varid): + varid, _, _ = self.check_if_comp_var(varid, None) + return EntryID.get_nodes_by_id(self, varid)
+ + +
+[docs] + def cleanupnode(self, node): + """ + Remove the <group>, <file>, <values> and <value> childnodes from node + """ + fnode = self.get_child("file", root=node) + self.remove_child(fnode, node) + gnode = self.get_child("group", root=node) + self.remove_child(gnode, node) + dnode = self.get_optional_child("default_value", root=node) + if dnode is not None: + self.remove_child(dnode, node) + + vnode = self.get_optional_child("values", root=node) + if vnode is not None: + componentatt = self.get_children( + "value", attributes={"component": "ATM"}, root=vnode + ) + # backward compatibility (compclasses and component were mixed + # now we separated into component and compclass) + if len(componentatt) > 0: + for ccnode in self.get_children( + "value", attributes={"component": None}, root=vnode + ): + val = self.get(ccnode, "component") + self.pop(ccnode, "component") + self.set(ccnode, "compclass", val) + + compclassatt = self.get_children( + "value", attributes={"compclass": None}, root=vnode + ) + if len(compclassatt) == 0: + self.remove_child(vnode, root=node) + + return node</div>
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_batch.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_batch.html new file mode 100644 index 00000000000..f18524cf5f7 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_batch.html @@ -0,0 +1,1637 @@ + + + + + + CIME.XML.env_batch — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.env_batch

+"""
+Interface to the env_batch.xml file.  This class inherits from EnvBase
+"""
+
+import os
+from CIME.XML.standard_module_setup import *
+from CIME.XML.env_base import EnvBase
+from CIME import utils
+from CIME.utils import (
+    transform_vars,
+    get_cime_root,
+    convert_to_seconds,
+    convert_to_babylonian_time,
+    get_cime_config,
+    get_batch_script_for_job,
+    get_logging_options,
+    format_time,
+    add_flag_to_cmd,
+)
+from CIME.locked_files import lock_file, unlock_file
+from collections import OrderedDict
+import stat, re, math
+import pathlib
+
+logger = logging.getLogger(__name__)
+
+# pragma pylint: disable=attribute-defined-outside-init
+
+
+
+[docs] +class EnvBatch(EnvBase): + def __init__(self, case_root=None, infile="env_batch.xml", read_only=False): + """ + initialize an object interface to file env_batch.xml in the case directory + """ + self._batchtype = None + # This arbitrary setting should always be overwritten + self._default_walltime = "00:20:00" + schema = os.path.join(utils.get_schema_path(), "env_batch.xsd") + super(EnvBatch, self).__init__( + case_root, infile, schema=schema, read_only=read_only + ) + self._batchtype = self.get_batch_system_type() + + # pylint: disable=arguments-differ +
+[docs] + def set_value(self, item, value, subgroup=None, ignore_type=False): + """ + Override the entry_id set_value function with some special cases for this class + """ + val = None + + if item == "JOB_QUEUE": + expect( + value in self._get_all_queue_names() or ignore_type, + "Unknown Job Queue specified use --force to set", + ) + + # allow the user to set item for all jobs if subgroup is not provided + if subgroup is None: + gnodes = self.get_children("group") + for gnode in gnodes: + node = self.get_optional_child("entry", {"id": item}, root=gnode) + if node is not None: + self._set_value(node, value, vid=item, ignore_type=ignore_type) + val = value + else: + group = self.get_optional_child("group", {"id": subgroup}) + if group is not None: + node = self.get_optional_child("entry", {"id": item}, root=group) + if node is not None: + val = self._set_value( + node, value, vid=item, ignore_type=ignore_type + ) + + return val
+ + + # pylint: disable=arguments-differ +
+[docs] + def get_value(self, item, attribute=None, resolved=True, subgroup=None): + """ + Must default subgroup to something in order to provide single return value + """ + value = None + node = self.get_optional_child(item, attribute) + if item in ("BATCH_SYSTEM", "PROJECT_REQUIRED"): + return super(EnvBatch, self).get_value(item, attribute, resolved) + + if not node: + # this will take the last instance of item listed in all batch_system elements + bs_nodes = self.get_children("batch_system") + for bsnode in bs_nodes: + cnode = self.get_optional_child(item, attribute, root=bsnode) + if cnode: + node = cnode + if node: + value = self.text(node) + if resolved: + value = self.get_resolved_value(value) + + return value
+ + +
+[docs] + def get_type_info(self, vid): + gnodes = self.get_children("group") + for gnode in gnodes: + nodes = self.get_children("entry", {"id": vid}, root=gnode) + type_info = None + for node in nodes: + new_type_info = self._get_type_info(node) + if type_info is None: + type_info = new_type_info + else: + expect( + type_info == new_type_info, + "Inconsistent type_info for entry id={} {} {}".format( + vid, new_type_info, type_info + ), + ) + return type_info
+ + +
+[docs] + def get_jobs(self): + groups = self.get_children("group") + results = [] + for group in groups: + if self.get(group, "id") not in ["job_submission", "config_batch"]: + results.append(self.get(group, "id")) + + return results
+ + +
+[docs] + def create_job_groups(self, batch_jobs, is_test): + # Subtle: in order to support dynamic batch jobs, we need to remove the + # job_submission group and replace with job-based groups + + orig_group = self.get_child( + "group", + {"id": "job_submission"}, + err_msg="Looks like job groups have already been created", + ) + orig_group_children = super(EnvBatch, self).get_children(root=orig_group) + + childnodes = [] + for child in reversed(orig_group_children): + childnodes.append(child) + + self.remove_child(orig_group) + + for name, jdict in batch_jobs: + if name == "case.run" and is_test: + pass # skip + elif name == "case.test" and not is_test: + pass # skip + elif name == "case.run.sh": + pass # skip + else: + new_job_group = self.make_child("group", {"id": name}) + for field in jdict.keys(): + val = jdict[field] + node = self.make_child( + "entry", {"id": field, "value": val}, root=new_job_group + ) + self.make_child("type", root=node, text="char") + + for child in childnodes: + self.add_child(self.copy(child), root=new_job_group)
+ + +
+[docs] + def cleanupnode(self, node): + if self.get(node, "id") == "batch_system": + fnode = self.get_child(name="file", root=node) + self.remove_child(fnode, root=node) + gnode = self.get_child(name="group", root=node) + self.remove_child(gnode, root=node) + vnode = self.get_optional_child(name="values", root=node) + if vnode is not None: + self.remove_child(vnode, root=node) + else: + node = super(EnvBatch, self).cleanupnode(node) + return node
+ + +
+[docs] + def set_batch_system(self, batchobj, batch_system_type=None): + if batch_system_type is not None: + self.set_batch_system_type(batch_system_type) + + if batchobj.batch_system_node is not None and batchobj.machine_node is not None: + for node in batchobj.get_children("", root=batchobj.machine_node): + name = self.name(node) + if name != "directives": + oldnode = batchobj.get_optional_child( + name, root=batchobj.batch_system_node + ) + if oldnode is not None: + logger.debug("Replacing {}".format(self.name(oldnode))) + batchobj.remove_child(oldnode, root=batchobj.batch_system_node) + + if batchobj.batch_system_node is not None: + self.add_child(self.copy(batchobj.batch_system_node)) + if batchobj.machine_node is not None: + self.add_child(self.copy(batchobj.machine_node)) + if os.path.exists(os.path.join(self._caseroot, "LockedFiles", "env_batch.xml")): + unlock_file(os.path.basename(batchobj.filename), caseroot=self._caseroot) + self.set_value("BATCH_SYSTEM", batch_system_type) + if os.path.exists(os.path.join(self._caseroot, "LockedFiles")): + lock_file(os.path.basename(batchobj.filename), caseroot=self._caseroot)
+ + +
+[docs] + def get_job_overrides(self, job, case): + env_workflow = case.get_env("workflow") + ( + total_tasks, + num_nodes, + tasks_per_node, + thread_count, + ngpus_per_node, + ) = env_workflow.get_job_specs(case, job) + overrides = {} + + if total_tasks: + overrides["total_tasks"] = total_tasks + overrides["num_nodes"] = num_nodes + overrides["tasks_per_node"] = tasks_per_node + if thread_count: + overrides["thread_count"] = thread_count + else: + total_tasks = case.get_value("TOTALPES") * int(case.thread_count) + thread_count = case.thread_count + if int(total_tasks) * int(thread_count) < case.get_value("MAX_TASKS_PER_NODE"): + overrides["max_tasks_per_node"] = int(total_tasks) + + overrides["ngpus_per_node"] = ngpus_per_node + overrides["mpirun"] = case.get_mpirun_cmd(job=job, overrides=overrides) + return overrides
+ + +
+[docs] + def make_batch_script(self, input_template, job, case, outfile=None): + expect( + os.path.exists(input_template), + "input file '{}' does not exist".format(input_template), + ) + overrides = self.get_job_overrides(job, case) + ext = os.path.splitext(job)[-1] + if len(ext) == 0: + ext = job + if ext.startswith("."): + ext = ext[1:] + + # A job name or job array name can be at most 230 characters. It must consist only of alphabetic, numeric, plus + # sign ("+"), dash or minus or hyphen ("-"), underscore ("_"), and dot or period (".") characters + # most of these are checked in utils:check_name, but % is not one of them. + + overrides["job_id"] = ext + "." + case.get_value("CASE").replace("%", "") + + overrides["batchdirectives"] = self.get_batch_directives( + case, job, overrides=overrides + ) + output_text = transform_vars( + open(input_template, "r").read(), + case=case, + subgroup=job, + overrides=overrides, + ) + output_name = get_batch_script_for_job(job) if outfile is None else outfile + logger.info("Creating file {}".format(output_name)) + with open(output_name, "w") as fd: + fd.write(output_text) + + # make sure batch script is executable + os.chmod( + output_name, + os.stat(output_name).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH, + )</div>
+ + +
+[docs] + def set_job_defaults(self, batch_jobs, case): + if self._batchtype is None: + self._batchtype = self.get_batch_system_type() + + if self._batchtype == "none": + return + env_workflow = case.get_env("workflow") + known_jobs = env_workflow.get_jobs() + + for job, jsect in batch_jobs: + if job not in known_jobs: + continue + + walltime = ( + case.get_value("USER_REQUESTED_WALLTIME", subgroup=job) + if case.get_value("USER_REQUESTED_WALLTIME", subgroup=job) + else None + ) + force_queue = ( + case.get_value("USER_REQUESTED_QUEUE", subgroup=job) + if case.get_value("USER_REQUESTED_QUEUE", subgroup=job) + else None + ) + walltime_format = ( + case.get_value("walltime_format", subgroup=job) + if case.get_value("walltime_format", subgroup=job) + else None + ) + logger.info( + "job is {} USER_REQUESTED_WALLTIME {} USER_REQUESTED_QUEUE {} WALLTIME_FORMAT {}".format( + job, walltime, force_queue, walltime_format + ) + ) + task_count = ( + int(jsect["task_count"]) if "task_count" in jsect else case.total_tasks + ) + + if "walltime" in jsect and walltime is None: + walltime = jsect["walltime"] + + logger.debug( + "Using walltime {!r} from batch job " "spec".format(walltime) + ) + + if "task_count" in jsect: + # job is using custom task_count, need to compute a node_count based on this + node_count = int( + math.ceil(float(task_count) / float(case.tasks_per_node)) + ) + else: + node_count = case.num_nodes + + queue = self.select_best_queue( + node_count, task_count, name=force_queue, walltime=walltime, job=job + ) + if queue is None and walltime is not None: + # Try to see if walltime was the holdup + queue = self.select_best_queue( + node_count, task_count, name=force_queue, walltime=None, job=job + ) + if queue is not None: + # It was, override the walltime if a test, otherwise just warn the user + new_walltime = self.get_queue_specs(queue)[5] + expect(new_walltime is not None, "Should never make it here") + logger.warning( + "WARNING: Requested walltime '{}' could not be matched by any {} queue".format( + walltime, force_queue + ) + ) + if case.get_value("TEST"): + logger.warning( + " Using walltime '{}' instead".format(new_walltime) + ) + walltime = new_walltime + else: + logger.warning( + " Continuing with suspect walltime, batch submission may fail" + ) + + if queue is None: + logger.warning( + "WARNING: No queue on this system met the requirements for this job. Falling back to defaults" + ) + queue = self.get_default_queue() + walltime = self.get_queue_specs(queue)[5] + + ( + _, + _, + _, + walltimedef, + walltimemin, + walltimemax, + _, + _, + _, + ) = self.get_queue_specs(queue) + + if walltime is None: + # Use default walltime if available for queue + if walltimedef is not None: + walltime = walltimedef + else: + # Last chance to figure out a walltime + # No default for queue, take max if available + if walltime is None and walltimemax is not None: + walltime = walltimemax + + # Still no walltime, try max from the default queue + if walltime is None: + # Queue is unknown, use specs from default queue + walltime = self.get(self.get_default_queue(), "walltimemax") + + logger.debug( + "Using walltimemax {!r} from default " + "queue {!r}".format(walltime, self.text(queue)) + ) + + # Still no walltime, use the hardcoded default + if walltime is None: + walltime = self._default_walltime + + logger.debug( + "Last resort using default walltime " + "{!r}".format(walltime) + ) + + # only enforce when not running a test + if not case.get_value("TEST"): + walltime_seconds = convert_to_seconds(walltime) + + # walltime must not be less than walltimemin + if walltimemin is not None: + walltimemin_seconds = convert_to_seconds(walltimemin) + + if walltime_seconds < walltimemin_seconds: + logger.warning( + "WARNING: Job {!r} walltime " + "{!r} is less than queue " + "{!r} minimum walltime " + "{!r}, job might fail".format( + job, walltime, self.text(queue), walltimemin + ) + ) + + # walltime must not be more than walltimemax + if walltimemax is not None: + walltimemax_seconds = convert_to_seconds(walltimemax) + + if walltime_seconds > walltimemax_seconds: + logger.warning( + "WARNING: Job {!r} walltime " + "{!r} is more than queue " + "{!r} maximum walltime " + "{!r}, job might fail".format( + job, walltime, self.text(queue), walltimemax + ) + ) + + walltime_format = self.get_value("walltime_format") + if walltime_format: + seconds = convert_to_seconds(walltime) + full_bab_time = convert_to_babylonian_time(seconds) + walltime = format_time(walltime_format, "%H:%M:%S", full_bab_time) + + env_workflow.set_value( + "JOB_QUEUE", self.text(queue), subgroup=job, ignore_type=False + ) + env_workflow.set_value("JOB_WALLCLOCK_TIME", walltime, subgroup=job) + logger.debug( + "Job {} queue {} walltime {}".format(job, self.text(queue), walltime) + )</div>
+ + + def _match_attribs(self, attribs, case, queue): + # check for matches with case-vars + for attrib in attribs: + if attrib in ["default", "prefix"]: + # These are not used for matching + continue + + elif attrib == "queue": + if not self._match(queue, attribs["queue"]): + return False + + else: + val = case.get_value(attrib.upper()) + expect( + val is not None, + "Cannot match attrib '%s', case has no value for it" + % attrib.upper(), + ) + if not self._match(val, attribs[attrib]): + return False + + return True + + def _match(self, my_value, xml_value): + if xml_value.startswith("!"): + result = re.match(xml_value[1:], str(my_value)) is None + elif isinstance(my_value, bool): + if my_value: + result = xml_value == "TRUE" + else: + result = xml_value == "FALSE" + else: + result = re.match(xml_value + "$", str(my_value)) is not None + + logger.debug( + "(env_mach_specific) _match {} {} {}".format(my_value, xml_value, result) + ) + return result + +
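The matching rule in `_match` above (a leading `!` negates the pattern, booleans compare against the literals `TRUE`/`FALSE`, and anything else is an anchored regular-expression match) can be reproduced in isolation. `match_value` is an illustrative standalone version, not the class method itself:

```python
import re

# Standalone sketch of _match's semantics: "!" negates the regex, booleans
# compare against "TRUE"/"FALSE", otherwise do an anchored regex match.
def match_value(my_value, xml_value):
    if xml_value.startswith("!"):
        return re.match(xml_value[1:], str(my_value)) is None
    if isinstance(my_value, bool):
        return xml_value == ("TRUE" if my_value else "FALSE")
    return re.match(xml_value + "$", str(my_value)) is not None

print(match_value("debug", "!batch"))   # True: "debug" does not match "batch"
print(match_value("regular", "reg.*"))  # True: anchored regex matches
```

Note that the trailing `$` makes the match anchored at both ends, since `re.match` already anchors at the start.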
+[docs] + def get_batch_directives(self, case, job, overrides=None, output_format="default"): + """ """ + result = [] + directive_prefix = None + + roots = self.get_children("batch_system") + queue = case.get_value("JOB_QUEUE", subgroup=job) + if self._batchtype != "none" and not queue in self._get_all_queue_names(): + unknown_queue = True + qnode = self.get_default_queue() + default_queue = self.text(qnode) + else: + unknown_queue = False + + for root in roots: + if root is not None: + if directive_prefix is None: + if output_format == "default": + directive_prefix = self.get_element_text( + "batch_directive", root=root + ) + elif output_format == "cylc": + directive_prefix = " " + if unknown_queue: + unknown_queue_directives = self.get_element_text( + "unknown_queue_directives", root=root + ) + if unknown_queue_directives is None: + queue = default_queue + else: + queue = unknown_queue_directives + + dnodes = self.get_children("directives", root=root) + for dnode in dnodes: + nodes = self.get_children("directive", root=dnode) + if self._match_attribs(self.attrib(dnode), case, queue): + for node in nodes: + directive = self.get_resolved_value( + "" if self.text(node) is None else self.text(node) + ) + if output_format == "cylc": + if self._batchtype == "pbs": + # cylc includes the -N itself, no need to add + if directive.startswith("-N"): + directive = "" + continue + m = re.match(r"\s*(-[\w])", directive) + if m: + directive = re.sub( + r"(-[\w]) ", + "{} = ".format(m.group(1)), + directive, + ) + + default = self.get(node, "default") + if default is None: + directive = transform_vars( + directive, + case=case, + subgroup=job, + default=default, + overrides=overrides, + ) + else: + directive = transform_vars(directive, default=default) + + custom_prefix = self.get(node, "prefix") + prefix = ( + directive_prefix + if custom_prefix is None + else custom_prefix + ) + + result.append( + "{}{}".format( + "" if not prefix else (prefix + " "), directive + ) + ) + + return "\n".join(result)</div>
+ + +
+[docs] + def get_submit_args(self, case, job, resolve=True): + """ + return a list of tuples (flag, name) + """ + bs_nodes = self.get_children("batch_system") + + submit_arg_nodes = self._get_arg_nodes(case, bs_nodes) + + submitargs = self._process_args(case, submit_arg_nodes, job, resolve=resolve) + + return submitargs</div>
+
+
+    def _get_arg_nodes(self, case, bs_nodes):
+        submit_arg_nodes = []
+
+        for node in bs_nodes:
+            sanode = self.get_optional_child("submit_args", root=node)
+            if sanode is not None:
+                arg_nodes = self.get_children("arg", root=sanode)
+
+                if len(arg_nodes) > 0:
+                    check_paths = [case.get_value("BATCH_SPEC_FILE")]
+
+                    user_config_path = os.path.join(
+                        pathlib.Path().home(), ".cime", "config_batch.xml"
+                    )
+
+                    if os.path.exists(user_config_path):
+                        check_paths.append(user_config_path)
+
+                    logger.warning(
+                        'Deprecated "arg" node detected in {}, check files {}'.format(
+                            self.filename, ", ".join(check_paths)
+                        )
+                    )
+
+                    submit_arg_nodes += arg_nodes
+
+                submit_arg_nodes += self.get_children("argument", root=sanode)
+
+        return submit_arg_nodes
+
+    def _process_args(self, case, submit_arg_nodes, job, resolve=True):
+        submitargs = " "
+
+        for arg in submit_arg_nodes:
+            name = None
+            flag = None
+            try:
+                flag, name = self._get_argument(case, arg)
+            except ValueError:
+                continue
+
+            if self._batchtype == "cobalt" and job == "case.st_archive":
+                if flag == "-n":
+                    name = "task_count"
+
+                if flag == "--mode":
+                    continue
+
+            if name is None:
+                if " " in flag:
+                    flag, name = flag.split()
+                if name:
+                    if resolve and "$" in name:
+                        rflag = self._resolve_argument(case, flag, name, job)
+                        # This is to prevent -gpu_type=none in qsub args
+                        if rflag.endswith("=none"):
+                            continue
+                        if len(rflag) > len(flag):
+                            submitargs += " {}".format(rflag)
+                    else:
+                        submitargs += " " + add_flag_to_cmd(flag, name)
+                else:
+                    submitargs += " {}".format(flag)
+            else:
+                if resolve:
+                    try:
+                        submitargs += self._resolve_argument(case, flag, name, job)
+                    except ValueError:
+                        continue
+                else:
+                    submitargs += " " + add_flag_to_cmd(flag, name)
+
+        return submitargs
+
+    def _get_argument(self, case, arg):
+        flag = self.get(arg, "flag")
+
+        name = self.get(arg, "name")
+
+        # if flag is None then we are dealing with a new `argument` node
+        if flag is None:
+            flag = self.text(arg)
+            job_queue_restriction = self.get(arg,
"job_queue")
+
+            if (
+                job_queue_restriction is not None
+                and job_queue_restriction != case.get_value("JOB_QUEUE")
+            ):
+                raise ValueError()
+
+        return flag, name
+
+    def _resolve_argument(self, case, flag, name, job):
+        submitargs = ""
+        logger.debug("name is {}".format(name))
+        # if name.startswith("$"):
+        #     name = name[1:]
+
+        if "$" in name:
+            parts = name.split("$")
+            logger.debug("parts are {}".format(parts))
+            val = ""
+            for part in parts:
+                if part != "":
+                    logger.debug("part is {}".format(part))
+                    resolved = case.get_value(part, subgroup=job)
+                    if resolved:
+                        val += resolved
+                    else:
+                        val += part
+            logger.debug("val is {}".format(val))
+            val = case.get_resolved_value(val)
+        else:
+            val = case.get_value(name, subgroup=job)
+
+        if val is not None and len(str(val)) > 0 and val != "None":
+            # Try to evaluate val if it contains any whitespace
+            if " " in val:
+                try:
+                    rval = eval(val)
+                except Exception:
+                    rval = val
+            else:
+                rval = val
+
+            # We don't want floating-point data (ignore anything else)
+            if str(rval).replace(".", "", 1).isdigit():
+                rval = int(round(float(rval)))
+
+            # need a correction for tasks per node
+            if flag == "-n" and rval <= 0:
+                rval = 1
+
+            if flag == "-q" and rval == "batch" and case.get_value("MACH") == "blues":
+                # Special case. Do not provide '-q batch' for blues
+                raise ValueError()
+
+            submitargs = " " + add_flag_to_cmd(flag, rval)
+
+        return submitargs
+
+[docs]
+    def submit_jobs(
+        self,
+        case,
+        no_batch=False,
+        job=None,
+        user_prereq=None,
+        skip_pnl=False,
+        allow_fail=False,
+        resubmit_immediate=False,
+        mail_user=None,
+        mail_type=None,
+        batch_args=None,
+        dry_run=False,
+        workflow=True,
+    ):
+        """
+        no_batch indicates that the jobs should be run directly rather than submitted to a queueing system
+        job is the first job in the workflow sequence to start
+        user_prereq is a batch system prerequisite as requested by the user
+        skip_pnl indicates that the preview_namelist should not be run by this job
+        allow_fail indicates that the prereq job need only complete, not necessarily successfully, to start the next job
+        resubmit_immediate indicates that all jobs indicated by the RESUBMIT option should be submitted at the same time instead of
+        waiting to resubmit at the end of the first sequence
+        workflow is a logical indicating whether only "job" is submitted or the workflow sequence starting with "job" is submitted
+        """
+        env_workflow = case.get_env("workflow")
+        external_workflow = case.get_value("EXTERNAL_WORKFLOW")
+        alljobs = env_workflow.get_jobs()
+        alljobs = [
+            j
+            for j in alljobs
+            if os.path.isfile(os.path.join(self._caseroot, get_batch_script_for_job(j)))
+        ]
+
+        startindex = 0
+        jobs = []
+        firstjob = job
+        if job is not None:
+            expect(job in alljobs, "Do not know about batch job {}".format(job))
+            startindex = alljobs.index(job)
+        for index, job in enumerate(alljobs):
+            logger.debug(
+                "Index {:d} job {} startindex {:d}".format(index, job, startindex)
+            )
+            if index < startindex:
+                continue
+            try:
+                prereq = env_workflow.get_value("prereq", subgroup=job, resolved=False)
+                if (
+                    external_workflow
+                    or prereq is None
+                    or job == firstjob
+                    or (dry_run and prereq == "$BUILD_COMPLETE")
+                ):
+                    prereq = True
+                else:
+                    prereq = case.get_resolved_value(prereq)
+                    prereq = eval(prereq)
+            except Exception:
+                expect(
+                    False,
+                    "Unable to evaluate prereq expression '{}' for job '{}'".format(
self.get_value("prereq", subgroup=job), job + ), + ) + if prereq: + jobs.append((job, env_workflow.get_value("dependency", subgroup=job))) + + if self._batchtype == "cobalt": + break + + depid = OrderedDict() + jobcmds = [] + + if workflow and resubmit_immediate: + num_submit = case.get_value("RESUBMIT") + 1 + case.set_value("RESUBMIT", 0) + if num_submit <= 0: + num_submit = 1 + else: + num_submit = 1 + + prev_job = None + batch_job_id = None + for _ in range(num_submit): + for job, dependency in jobs: + dep_jobs = get_job_deps(dependency, depid, prev_job, user_prereq) + + logger.debug("job {} depends on {}".format(job, dep_jobs)) + + result = self._submit_single_job( + case, + job, + skip_pnl=skip_pnl, + resubmit_immediate=resubmit_immediate, + dep_jobs=dep_jobs, + allow_fail=allow_fail, + no_batch=no_batch, + mail_user=mail_user, + mail_type=mail_type, + batch_args=batch_args, + dry_run=dry_run, + workflow=workflow, + ) + batch_job_id = str(alljobs.index(job)) if dry_run else result + depid[job] = batch_job_id + jobcmds.append((job, result)) + + if self._batchtype == "cobalt" or external_workflow or not workflow: + break + + if not external_workflow and not no_batch: + expect(batch_job_id, "No result from jobs {}".format(jobs)) + prev_job = batch_job_id + + if dry_run: + return jobcmds + else: + return depid
+ + + @staticmethod + def _get_supported_args(job, no_batch): + """ + Returns a map of the supported parameters and their arguments to the given script + TODO: Maybe let each script define this somewhere? + + >>> EnvBatch._get_supported_args("", False) + {} + >>> EnvBatch._get_supported_args("case.test", False) + {'skip_pnl': '--skip-preview-namelist'} + >>> EnvBatch._get_supported_args("case.st_archive", True) + {'resubmit': '--resubmit'} + """ + supported = {} + if job in ["case.run", "case.test"]: + supported["skip_pnl"] = "--skip-preview-namelist" + if job == "case.run": + supported["set_continue_run"] = "--completion-sets-continue-run" + if job in ["case.st_archive", "case.run"]: + if job == "case.st_archive" and no_batch: + supported["resubmit"] = "--resubmit" + else: + supported["submit_resubmits"] = "--resubmit" + return supported + + @staticmethod + def _build_run_args(job, no_batch, **run_args): + """ + Returns a map of the filtered parameters for the given script, + as well as the values passed and the equivalent arguments for calling the script + + >>> EnvBatch._build_run_args("case.run", False, skip_pnl=True, cthulu="f'taghn") + {'skip_pnl': (True, '--skip-preview-namelist')} + >>> EnvBatch._build_run_args("case.run", False, skip_pnl=False, cthulu="f'taghn") + {} + """ + supported_args = EnvBatch._get_supported_args(job, no_batch) + args = {} + for arg_name, arg_value in run_args.items(): + if arg_value and (arg_name in supported_args.keys()): + args[arg_name] = (arg_value, supported_args[arg_name]) + return args + + def _build_run_args_str(self, job, no_batch, **run_args): + """ + Returns a string of the filtered arguments for the given script, + based on the arguments passed + """ + args = self._build_run_args(job, no_batch, **run_args) + run_args_str = " ".join(param for _, param in args.values()) + logging_options = get_logging_options() + if logging_options: + run_args_str += " {}".format(logging_options) + + batch_env_flag = 
self.get_value("batch_env", subgroup=None) + if not batch_env_flag: + return run_args_str + elif len(run_args_str) > 0: + batch_system = self.get_value("BATCH_SYSTEM", subgroup=None) + logger.debug("batch_system: {}: ".format(batch_system)) + if batch_system == "lsf": + return '{} "all, ARGS_FOR_SCRIPT={}"'.format( + batch_env_flag, run_args_str + ) + else: + return "{} ARGS_FOR_SCRIPT='{}'".format(batch_env_flag, run_args_str) + else: + return "" + + def _submit_single_job( + self, + case, + job, + dep_jobs=None, + allow_fail=False, + no_batch=False, + skip_pnl=False, + mail_user=None, + mail_type=None, + batch_args=None, + dry_run=False, + resubmit_immediate=False, + workflow=True, + ): + + if not dry_run: + logger.warning("Submit job {}".format(job)) + batch_system = self.get_value("BATCH_SYSTEM", subgroup=None) + if batch_system is None or batch_system == "none" or no_batch: + logger.info("Starting job script {}".format(job)) + function_name = job.replace(".", "_") + job_name = "." + job + if not dry_run: + args = self._build_run_args( + job, + True, + skip_pnl=skip_pnl, + set_continue_run=resubmit_immediate, + submit_resubmits=workflow and not resubmit_immediate, + ) + try: + if hasattr(case, function_name): + getattr(case, function_name)( + **{k: v for k, (v, _) in args.items()} + ) + else: + expect( + os.path.isfile(job_name), + "Could not find file {}".format(job_name), + ) + run_cmd_no_fail( + os.path.join(self._caseroot, job_name), + combine_output=True, + verbose=True, + from_dir=self._caseroot, + ) + except Exception as e: + # We don't want exception from the run phases getting into submit phase + logger.warning( + "Exception from {}: {}".format(function_name, str(e)) + ) + + return + + submitargs = case.get_value("BATCH_COMMAND_FLAGS", subgroup=job, resolved=False) + + project = case.get_value("PROJECT", subgroup=job) + + if not project: + # If there is no project then we need to remove the project flag + if ( + batch_system == "pbs" or batch_system == 
"cobalt" + ) and " -A " in submitargs: + submitargs = submitargs.replace("-A", "") + elif batch_system == "lsf" and " -P " in submitargs: + submitargs = submitargs.replace("-P", "") + elif batch_system == "slurm" and " --account " in submitargs: + submitargs = submitargs.replace("--account", "") + + if dep_jobs is not None and len(dep_jobs) > 0: + logger.debug("dependencies: {}".format(dep_jobs)) + if allow_fail: + dep_string = self.get_value("depend_allow_string", subgroup=None) + if dep_string is None: + logger.warning( + "'depend_allow_string' is not defined for this batch system, " + + "falling back to the 'depend_string'" + ) + dep_string = self.get_value("depend_string", subgroup=None) + else: + dep_string = self.get_value("depend_string", subgroup=None) + expect( + dep_string is not None, + "'depend_string' is not defined for this batch system", + ) + + separator_string = self.get_value("depend_separator", subgroup=None) + expect(separator_string is not None, "depend_separator string not defined") + + expect( + "jobid" in dep_string, + "depend_string is missing jobid for prerequisite jobs", + ) + dep_ids_str = str(dep_jobs[0]) + for dep_id in dep_jobs[1:]: + dep_ids_str += separator_string + str(dep_id) + dep_string = dep_string.replace( + "jobid", dep_ids_str.strip() + ) # pylint: disable=maybe-no-member + submitargs += " " + dep_string + + if batch_args is not None: + submitargs += " " + batch_args + + cime_config = get_cime_config() + + if mail_user is None and cime_config.has_option("main", "MAIL_USER"): + mail_user = cime_config.get("main", "MAIL_USER") + + if mail_user is not None: + mail_user_flag = self.get_value("batch_mail_flag", subgroup=None) + if mail_user_flag is not None: + submitargs += " " + mail_user_flag + " " + mail_user + + if mail_type is None: + if job == "case.test" and cime_config.has_option( + "create_test", "MAIL_TYPE" + ): + mail_type = cime_config.get("create_test", "MAIL_TYPE") + elif cime_config.has_option("main", "MAIL_TYPE"): 
+ mail_type = cime_config.get("main", "MAIL_TYPE") + else: + mail_type = self.get_value("batch_mail_default") + + if mail_type: + mail_type = mail_type.split(",") # pylint: disable=no-member + + if mail_type: + mail_type_flag = self.get_value("batch_mail_type_flag", subgroup=None) + if mail_type_flag is not None: + mail_type_args = [] + for indv_type in mail_type: + mail_type_arg = self.get_batch_mail_type(indv_type) + mail_type_args.append(mail_type_arg) + + if mail_type_flag == "-m": + # hacky, PBS-type systems pass multiple mail-types differently + submitargs += " {} {}".format( + mail_type_flag, "".join(mail_type_args) + ) + else: + submitargs += " {} {}".format( + mail_type_flag, + " {} ".format(mail_type_flag).join(mail_type_args), + ) + batchsubmit = self.get_value("batch_submit", subgroup=None) + expect( + batchsubmit is not None, + "Unable to determine the correct command for batch submission.", + ) + batchredirect = self.get_value("batch_redirect", subgroup=None) + batch_env_flag = self.get_value("batch_env", subgroup=None) + run_args = self._build_run_args_str( + job, + False, + skip_pnl=skip_pnl, + set_continue_run=resubmit_immediate, + submit_resubmits=workflow and not resubmit_immediate, + ) + if batch_system == "lsf" and not batch_env_flag: + sequence = ( + run_args, + batchsubmit, + submitargs, + batchredirect, + get_batch_script_for_job(job), + ) + elif batch_env_flag: + sequence = ( + batchsubmit, + submitargs, + run_args, + batchredirect, + get_batch_script_for_job(job), + ) + else: + sequence = ( + batchsubmit, + submitargs, + batchredirect, + get_batch_script_for_job(job), + run_args, + ) + + submitcmd = " ".join(s.strip() for s in sequence if s is not None) + if submitcmd.startswith("ssh"): + # add ` before cd $CASEROOT and at end of command + submitcmd = submitcmd.replace("cd $CASEROOT", "'cd $CASEROOT") + "'" + + if dry_run: + return submitcmd + else: + submitcmd = case.get_resolved_value(submitcmd) + logger.info("Submitting job script 
{}".format(submitcmd)) + output = run_cmd_no_fail(submitcmd, combine_output=True) + jobid = self.get_job_id(output) + logger.info("Submitted job id is {}".format(jobid)) + return jobid + +
+[docs] + def get_batch_mail_type(self, mail_type): + raw = self.get_value("batch_mail_type", subgroup=None) + mail_types = [ + item.strip() for item in raw.split(",") + ] # pylint: disable=no-member + idx = ["never", "all", "begin", "end", "fail"].index(mail_type) + + return mail_types[idx] if idx < len(mail_types) else None
+ + +
+[docs] + def get_batch_system_type(self): + nodes = self.get_children("batch_system") + for node in nodes: + type_ = self.get(node, "type") + if type_ is not None: + self._batchtype = type_ + return self._batchtype
+ + +
+[docs] + def set_batch_system_type(self, batchtype): + self._batchtype = batchtype
+ + +
+[docs] + def get_job_id(self, output): + jobid_pattern = self.get_value("jobid_pattern", subgroup=None) + if self._batchtype and self._batchtype != "none": + expect( + jobid_pattern is not None, + "Could not find jobid_pattern in env_batch.xml", + ) + else: + return output + search_match = re.search(jobid_pattern, output) + expect( + search_match is not None, + "Couldn't match jobid_pattern '{}' within submit output:\n '{}'".format( + jobid_pattern, output + ), + ) + jobid = search_match.group(1) + return jobid
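The `jobid_pattern` applied by `get_job_id` comes from `config_batch.xml` and is batch-system specific. A minimal standalone sketch of the extraction, assuming a Slurm-style pattern and a made-up submit output line:

```python
import re

# Slurm-style jobid_pattern (the real pattern is read from config_batch.xml);
# the submit output below is illustrative, not captured from a real run.
jobid_pattern = r"Submitted batch job (\d+)"
output = "Submitted batch job 123456"

match = re.search(jobid_pattern, output)
assert match is not None, "jobid_pattern did not match submit output"
jobid = match.group(1)
print(jobid)  # 123456
```

Other batch systems define their own pattern, which is why the method insists the pattern exists whenever a real batch type is configured.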
+ + +
+[docs] + def queue_meets_spec(self, queue, num_nodes, num_tasks, walltime=None, job=None): + specs = self.get_queue_specs(queue) + + nodemin, nodemax, jobname, _, _, walltimemax, jobmin, jobmax, strict = specs + + # A job name match automatically meets spec + if job is not None and jobname is not None: + return jobname == job + + if ( + nodemin is not None + and num_nodes < nodemin + or nodemax is not None + and num_nodes > nodemax + or jobmin is not None + and num_tasks < jobmin + or jobmax is not None + and num_tasks > jobmax + ): + return False + + if walltime is not None and walltimemax is not None and strict: + walltime_s = convert_to_seconds(walltime) + walltimemax_s = convert_to_seconds(walltimemax) + if walltime_s > walltimemax_s: + return False + + return True
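The node/task range check above can be isolated into a small predicate. This is a sketch only; `queue_fits` is a hypothetical helper, and the real method reads its bounds from the queue node in `env_batch.xml`, where a missing attribute means "unbounded":

```python
# Standalone sketch of the range check in queue_meets_spec: a bound of None
# means that side is unbounded, matching a missing XML attribute.
def queue_fits(num_nodes, num_tasks, nodemin=None, nodemax=None,
               jobmin=None, jobmax=None):
    if nodemin is not None and num_nodes < nodemin:
        return False
    if nodemax is not None and num_nodes > nodemax:
        return False
    if jobmin is not None and num_tasks < jobmin:
        return False
    if jobmax is not None and num_tasks > jobmax:
        return False
    return True

print(queue_fits(4, 128, nodemin=1, nodemax=8))   # True: within bounds
print(queue_fits(16, 128, nodemin=1, nodemax=8))  # False: too many nodes
```

The walltime check is applied separately and only when the queue is marked `strict`, since a non-strict queue will simply clip an over-long walltime.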
+ + + def _get_all_queue_names(self): + all_queues = [] + all_queues = self.get_all_queues() + + queue_names = [] + for queue in all_queues: + queue_names.append(self.text(queue)) + + return queue_names + +
+[docs] + def select_best_queue( + self, num_nodes, num_tasks, name=None, walltime=None, job=None + ): + logger.debug( + "Selecting best queue with criteria nodes={!r}, " + "tasks={!r}, name={!r}, walltime={!r}, job={!r}".format( + num_nodes, num_tasks, name, walltime, job + ) + ) + + # Make sure to check default queue first. + qnodes = self.get_all_queues(name=name) + for qnode in qnodes: + if self.queue_meets_spec( + qnode, num_nodes, num_tasks, walltime=walltime, job=job + ): + logger.debug("Selected queue {!r}".format(self.text(qnode))) + + return qnode + + return None
+ + +
+[docs]
+    def get_queue_specs(self, qnode):
+        """
+        Get queue specifications from node.
+
+        Returns (nodemin, nodemax, jobname, walltimedef, walltimemin,
+        walltimemax, jobmin, jobmax, strict)
+        """
+        nodemin = self.get(qnode, "nodemin")
+        nodemin = None if nodemin is None else int(nodemin)
+        nodemax = self.get(qnode, "nodemax")
+        nodemax = None if nodemax is None else int(nodemax)
+
+        jobmin = self.get(qnode, "jobmin")
+        jobmin = None if jobmin is None else int(jobmin)
+        jobmax = self.get(qnode, "jobmax")
+        jobmax = None if jobmax is None else int(jobmax)
+
+        expect(
+            nodemin is None or jobmin is None,
+            "Cannot specify both nodemin and jobmin for a queue",
+        )
+        expect(
+            nodemax is None or jobmax is None,
+            "Cannot specify both nodemax and jobmax for a queue",
+        )
+
+        jobname = self.get(qnode, "jobname")
+        walltimedef = self.get(qnode, "walltimedef")
+        walltimemin = self.get(qnode, "walltimemin")
+        walltimemax = self.get(qnode, "walltimemax")
+        strict = self.get(qnode, "strict") == "true"
+
+        return (
+            nodemin,
+            nodemax,
+            jobname,
+            walltimedef,
+            walltimemin,
+            walltimemax,
+            jobmin,
+            jobmax,
+            strict,
+        )
+ + +
+[docs] + def get_default_queue(self): + bs_nodes = self.get_children("batch_system") + node = None + for bsnode in bs_nodes: + qnodes = self.get_children("queues", root=bsnode) + for qnode in qnodes: + node = self.get_optional_child( + "queue", attributes={"default": "true"}, root=qnode + ) + if node is None: + node = self.get_optional_child("queue", root=qnode) + + expect(node is not None, "No queues found") + return node
+ + +
+[docs] + def get_all_queues(self, name=None): + bs_nodes = self.get_children("batch_system") + nodes = [] + default_idx = None + for bsnode in bs_nodes: + qsnode = self.get_optional_child("queues", root=bsnode) + if qsnode is not None: + qnodes = self.get_children("queue", root=qsnode) + for qnode in qnodes: + if name is None or self.text(qnode) == name: + nodes.append(qnode) + if self.get(qnode, "default", default="false") == "true": + default_idx = len(nodes) - 1 + + # Queues are selected by first match, so we want the queue marked + # as default to come first. + if default_idx is not None: + def_node = nodes.pop(default_idx) + nodes.insert(0, def_node) + + return nodes
+ + +
+[docs] + def get_children(self, name=None, attributes=None, root=None): + if name == "PROJECT_REQUIRED": + nodes = super(EnvBatch, self).get_children( + "entry", attributes={"id": name}, root=root + ) + else: + nodes = super(EnvBatch, self).get_children( + name, attributes=attributes, root=root + ) + + return nodes
+ + +
+[docs] + def get_status(self, jobid): + batch_query = self.get_optional_child("batch_query") + if batch_query is None: + logger.warning("Batch queries not supported on this platform") + else: + cmd = self.text(batch_query) + " " + if self.has(batch_query, "per_job_arg"): + cmd += self.get(batch_query, "per_job_arg") + " " + + cmd += jobid + + status, out, err = run_cmd(cmd) + if status != 0: + logger.warning( + "Batch query command '{}' failed with error '{}'".format(cmd, err) + ) + else: + return out.strip()
+ + +
+[docs] + def cancel_job(self, jobid): + batch_cancel = self.get_optional_child("batch_cancel") + if batch_cancel is None: + logger.warning("Batch cancellation not supported on this platform") + return False + else: + cmd = self.text(batch_cancel) + " " + str(jobid) + + status, out, err = run_cmd(cmd) + if status != 0: + logger.warning( + "Batch cancel command '{}' failed with error '{}'".format( + cmd, out + "\n" + err + ) + ) + else: + return True
+ + +
+[docs] + def compare_xml(self, other): + xmldiffs = {} + f1batchnodes = self.get_children("batch_system") + for bnode in f1batchnodes: + f2bnodes = other.get_children("batch_system", attributes=self.attrib(bnode)) + f2bnode = None + if len(f2bnodes): + f2bnode = f2bnodes[0] + f1batchnodes = self.get_children(root=bnode) + for node in f1batchnodes: + name = self.name(node) + text1 = self.text(node) + text2 = "" + attribs = self.attrib(node) + f2matches = other.scan_children(name, attributes=attribs, root=f2bnode) + foundmatch = False + for chkmatch in f2matches: + name2 = other.name(chkmatch) + attribs2 = other.attrib(chkmatch) + text2 = other.text(chkmatch) + if name == name2 and attribs == attribs2 and text1 == text2: + foundmatch = True + break + if not foundmatch: + xmldiffs[name] = [text1, text2] + + f1groups = self.get_children("group") + for node in f1groups: + group = self.get(node, "id") + f2group = other.get_child("group", attributes={"id": group}) + xmldiffs.update( + super(EnvBatch, self).compare_xml(other, root=node, otherroot=f2group) + ) + return xmldiffs
+ + +
+[docs] + def make_all_batch_files(self, case): + machdir = case.get_value("MACHDIR") + env_workflow = case.get_env("workflow") + logger.info("Creating batch scripts") + jobs = env_workflow.get_jobs() + for job in jobs: + template = case.get_resolved_value( + env_workflow.get_value("template", subgroup=job) + ) + + if os.path.isabs(template): + input_batch_script = template + else: + input_batch_script = os.path.join(machdir, template) + if os.path.isfile(input_batch_script): + logger.info( + "Writing {} script from input template {}".format( + job, input_batch_script + ) + ) + self.make_batch_script(input_batch_script, job, case) + else: + logger.warning( + "Input template file {} for job {} does not exist or cannot be read.".format( + input_batch_script, job + ) + )
+
+ + + +
+[docs]
+def get_job_deps(dependency, depid, prev_job=None, user_prereq=None):
+    """
+    Gather list of job batch ids that a job depends on.
+
+    Parameters
+    ----------
+    dependency : str
+        List of dependent job names.
+    depid : dict
+        Lookup where keys are job names and values are the batch id.
+    prev_job : str
+        Batch id of the previous job; appended as a dependency when given.
+    user_prereq : str
+        User requested dependency.
+
+    Returns
+    -------
+    list
+        List of batch ids that job depends on.
+    """
+    deps = []
+    dep_jobs = []
+
+    if user_prereq is not None:
+        dep_jobs.append(user_prereq)
+
+    if dependency is not None:
+        # Match all words, excluding "and" and "or"
+        deps = re.findall(r"\b(?!and\b|or\b)\w+(?:\.\w+)?\b", dependency)
+
+        for dep in deps:
+            if dep in depid and depid[dep] is not None:
+                dep_jobs.append(str(depid[dep]))
+
+    if prev_job is not None:
+        dep_jobs.append(prev_job)
+
+    return dep_jobs
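The regular expression in `get_job_deps` pulls job names (including dotted names like `case.run`) out of a boolean dependency expression while discarding the glue words. A quick demonstration of just the pattern, with a made-up dependency string:

```python
import re

# Mirrors the pattern used in get_job_deps; the dependency string is
# illustrative. "and"/"or" are excluded by the negative lookahead.
dependency = "case.run and case.st_archive or postrun"
deps = re.findall(r"\b(?!and\b|or\b)\w+(?:\.\w+)?\b", dependency)
print(deps)  # ['case.run', 'case.st_archive', 'postrun']
```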
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_build.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_build.html new file mode 100644 index 00000000000..102d92a4668 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_build.html @@ -0,0 +1,163 @@ + + + + + + CIME.XML.env_build — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.env_build

+"""
+Interface to the env_build.xml file.  This class inherits from EnvBase
+"""
+from CIME.XML.standard_module_setup import *
+
+from CIME import utils
+from CIME.XML.env_base import EnvBase
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class EnvBuild(EnvBase): + # pylint: disable=unused-argument + def __init__( + self, case_root=None, infile="env_build.xml", components=None, read_only=False + ): + """ + initialize an object interface to file env_build.xml in the case directory + """ + schema = os.path.join(utils.get_schema_path(), "env_entry_id.xsd") + self._caseroot = case_root + EnvBase.__init__(self, case_root, infile, schema=schema, read_only=read_only) + +
+[docs] + def set_value(self, vid, value, subgroup=None, ignore_type=False): + """ + Set the value of an entry-id field to value + Returns the value or None if not found + subgroup is ignored in the general routine and applied in specific methods + """ + # Do not allow any of these to be the same as CASEROOT + if vid in ("EXEROOT", "OBJDIR", "LIBROOT"): + utils.expect(value != self._caseroot, f"Cannot set {vid} to CASEROOT") + + return super(EnvBuild, self).set_value( + vid, value, subgroup=subgroup, ignore_type=ignore_type + )
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_case.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_case.html new file mode 100644 index 00000000000..9c6a39af4d3 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_case.html @@ -0,0 +1,145 @@ + + + + + + CIME.XML.env_case — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.env_case

+"""
+Interface to the env_case.xml file.  This class inherits from EnvBase
+"""
+from CIME.XML.standard_module_setup import *
+
+from CIME import utils
+from CIME.XML.env_base import EnvBase
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class EnvCase(EnvBase): + # pylint: disable=unused-argument + def __init__( + self, case_root=None, infile="env_case.xml", components=None, read_only=False + ): + """ + initialize an object interface to file env_case.xml in the case directory + """ + schema = os.path.join(utils.get_schema_path(), "env_entry_id.xsd") + EnvBase.__init__(self, case_root, infile, schema=schema, read_only=read_only)
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_mach_pes.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_mach_pes.html new file mode 100644 index 00000000000..41eea586979 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_mach_pes.html @@ -0,0 +1,372 @@ + + + + + + CIME.XML.env_mach_pes — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.env_mach_pes

+"""
+Interface to the env_mach_pes.xml file.  This class inherits from EntryID
+"""
+from CIME.XML.standard_module_setup import *
+from CIME import utils
+from CIME.XML.env_base import EnvBase
+import math
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class EnvMachPes(EnvBase): + def __init__( + self, + case_root=None, + infile="env_mach_pes.xml", + components=None, + read_only=False, + comp_interface="mct", + ): + """ + initialize an object interface to file env_mach_pes.xml in the case directory + """ + self._components = components + self._comp_interface = comp_interface + + schema = os.path.join(utils.get_schema_path(), "env_mach_pes.xsd") + EnvBase.__init__(self, case_root, infile, schema=schema, read_only=read_only) + +
+[docs] + def add_comment(self, comment): + if comment is not None: + node = self.make_child("comment", text=comment) + # make_child adds to the end of the file but we want it to follow the header + # so we need to remove it and add it in the correct position + self.remove_child(node) + self.add_child(node, position=1)
+ + +
+[docs] + def get_value( + self, + vid, + attribute=None, + resolved=True, + subgroup=None, + max_mpitasks_per_node=None, + max_cputasks_per_gpu_node=None, + ngpus_per_node=None, + ): # pylint: disable=arguments-differ + # Special variable NINST_MAX is used to determine the number of + # drivers in multi-driver mode. + if vid == "NINST_MAX": + # in the nuopc driver there is only a single NINST value + value = 1 + for comp in self._components: + if comp != "CPL": + value = max(value, self.get_value("NINST_{}".format(comp))) + return value + + value = EnvBase.get_value(self, vid, attribute, resolved, subgroup) + + if "NTASKS" in vid or "ROOTPE" in vid: + if max_mpitasks_per_node is None: + max_mpitasks_per_node = self.get_value("MAX_MPITASKS_PER_NODE") + if max_cputasks_per_gpu_node is None: + max_cputasks_per_gpu_node = self.get_value("MAX_CPUTASKS_PER_GPU_NODE") + if ngpus_per_node is None: + ngpus_per_node = self.get_value("NGPUS_PER_NODE") + if (ngpus_per_node and value) and value < 0: + value = -1 * value * max_cputasks_per_gpu_node + elif value and value < 0: + value = -1 * value * max_mpitasks_per_node + # in the nuopc driver there is only one NINST value + # so that NINST_{comp} = NINST + if "NINST_" in vid and value is None: + value = self.get_value("NINST") + return value
+ + +
+[docs]
+    def set_value(self, vid, value, subgroup=None, ignore_type=False):
+        """
+        Set the value of an entry-id field to value
+        Returns the value or None if not found
+        subgroup is ignored in the general routine and applied in specific methods
+        """
+        if vid == "MULTI_DRIVER" and value:
+            ninst_max = self.get_value("NINST_MAX")
+            for comp in self._components:
+                if comp == "CPL":
+                    continue
+                ninst = self.get_value("NINST_{}".format(comp))
+                expect(
+                    ninst == ninst_max,
+                    "All components must have the same NINST value in multi_driver mode. NINST_{}={} should be {}".format(
+                        comp, ninst, ninst_max
+                    ),
+                )
+
+        if ("NTASKS" in vid or "NTHRDS" in vid) and vid != "PIO_ASYNCIO_NTASKS":
+            expect(value != 0, f"Cannot set NTASKS or NTHRDS to 0 {vid}")
+
+        return EnvBase.set_value(
+            self, vid, value, subgroup=subgroup, ignore_type=ignore_type
+        )
+ + +
+[docs] + def get_max_thread_count(self, comp_classes): + """Find the maximum number of openmp threads for any component in the case""" + max_threads = 1 + for comp in comp_classes: + threads = self.get_value("NTHRDS", attribute={"compclass": comp}) + expect( + threads is not None, + "Error no thread count found for component class {}".format(comp), + ) + if threads > max_threads: + max_threads = threads + return max_threads
+ + +
+[docs] + def get_total_tasks(self, comp_classes, async_interface=False): + total_tasks = 0 + maxinst = self.get_value("NINST") + asyncio_ntasks = 0 + asyncio_rootpe = 0 + asyncio_stride = 0 + asyncio_tasks = [] + if maxinst: + comp_interface = "nuopc" + if async_interface: + asyncio_ntasks = self.get_value("PIO_ASYNCIO_NTASKS") + asyncio_rootpe = self.get_value("PIO_ASYNCIO_ROOTPE") + asyncio_stride = self.get_value("PIO_ASYNCIO_STRIDE") + logger.debug( + "asyncio ntasks {} rootpe {} stride {}".format( + asyncio_ntasks, asyncio_rootpe, asyncio_stride + ) + ) + if asyncio_ntasks and asyncio_stride: + for i in range( + asyncio_rootpe, + asyncio_rootpe + (asyncio_ntasks * asyncio_stride), + asyncio_stride, + ): + asyncio_tasks.append(i) + else: + comp_interface = "unknown" + maxinst = 1 + tt = 0 + maxrootpe = 0 + for comp in comp_classes: + ntasks = self.get_value("NTASKS", attribute={"compclass": comp}) + rootpe = self.get_value("ROOTPE", attribute={"compclass": comp}) + pstrid = self.get_value("PSTRID", attribute={"compclass": comp}) + + esmf_aware_threading = self.get_value("ESMF_AWARE_THREADING") + # mct is unaware of threads and they should not be counted here + # if esmf is thread aware they are included + if comp_interface == "nuopc" and esmf_aware_threading: + nthrds = self.get_value("NTHRDS", attribute={"compclass": comp}) + else: + nthrds = 1 + + if comp != "CPL" and comp_interface != "nuopc": + ninst = self.get_value("NINST", attribute={"compclass": comp}) + maxinst = max(maxinst, ninst) + tt = rootpe + nthrds * ((ntasks - 1) * pstrid + 1) + maxrootpe = max(maxrootpe, rootpe) + total_tasks = max(tt, total_tasks) + + if asyncio_tasks: + total_tasks = total_tasks + len(asyncio_tasks) + if self.get_value("MULTI_DRIVER"): + total_tasks *= maxinst + logger.debug("asyncio_tasks {}".format(asyncio_tasks)) + return total_tasks
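The per-component expression above, `tt = rootpe + nthrds * ((ntasks - 1) * pstrid + 1)`, computes the extent of the PE range a component occupies. It can be checked in isolation; the function name and the values below are illustrative, not taken from a real PE layout:

```python
# Extent of the PE range used by a component with a root PE, a task stride,
# and (under ESMF-aware threading) a thread multiplier; mirrors the
# expression in get_total_tasks.
def component_pe_span(rootpe, ntasks, pstrid, nthrds=1):
    return rootpe + nthrds * ((ntasks - 1) * pstrid + 1)

print(component_pe_span(0, 8, 1))  # 8  - eight contiguous tasks on PEs 0..7
print(component_pe_span(0, 8, 2))  # 15 - strided: last task sits on PE 14
print(component_pe_span(8, 8, 1))  # 16 - same layout shifted to root PE 8
```

`get_total_tasks` then takes the maximum span over all components, so overlapping layouts do not double-count PEs.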
+ + +
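The per-component footprint computed above, `tt = rootpe + nthrds * ((ntasks - 1) * pstrid + 1)`, is the highest task index a component's PE layout touches, plus one; the case total is the maximum over components. A minimal sketch of that arithmetic with made-up layout values (the dict-based `layout` and the helper name are illustrative, not CIME API):

```python
def total_tasks(layout, esmf_aware_threading=False):
    """Highest task index touched by any component's PE layout, plus one.

    layout: list of dicts with ntasks, rootpe, pstrid, nthrds
    (hypothetical values, not read from a real case). Threads widen the
    footprint only when ESMF-aware threading is on, as in the nuopc branch.
    """
    total = 0
    for comp in layout:
        nthrds = comp["nthrds"] if esmf_aware_threading else 1
        tt = comp["rootpe"] + nthrds * ((comp["ntasks"] - 1) * comp["pstrid"] + 1)
        total = max(total, tt)
    return total

# ATM on ranks 0-127 with 2 threads; OCN from rank 128, 32 tasks, stride 2.
layout = [
    {"ntasks": 128, "rootpe": 0, "pstrid": 1, "nthrds": 2},
    {"ntasks": 32, "rootpe": 128, "pstrid": 2, "nthrds": 1},
]
```

With ESMF-unaware threading the OCN component ends at rank 190, so 191 tasks are needed; thread-aware counting doubles the ATM footprint to 256.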
+[docs] + def get_tasks_per_node(self, total_tasks, max_thread_count): + expect( + total_tasks > 0, + "totaltasks > 0 expected, totaltasks = {}".format(total_tasks), + ) + if self._comp_interface == "nuopc" and self.get_value("ESMF_AWARE_THREADING"): + if self.get_value("NGPUS_PER_NODE") > 0: + tasks_per_node = self.get_value("MAX_CPUTASKS_PER_GPU_NODE") + else: + tasks_per_node = self.get_value("MAX_MPITASKS_PER_NODE") + else: + ngpus_per_node = self.get_value("NGPUS_PER_NODE") + if ngpus_per_node and ngpus_per_node > 0: + tasks_per_node = min( + self.get_value("MAX_TASKS_PER_NODE") // max_thread_count, + self.get_value("MAX_CPUTASKS_PER_GPU_NODE"), + total_tasks, + ) + else: + tasks_per_node = min( + self.get_value("MAX_TASKS_PER_NODE") // max_thread_count, + self.get_value("MAX_MPITASKS_PER_NODE"), + total_tasks, + ) + return tasks_per_node if tasks_per_node > 0 else 1
+ + +
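The `min(...)` above caps tasks per node three ways: the node's total thread budget divided by the per-task thread count, the machine's MPI task limit, and the job's own task count, with a floor of one. A standalone sketch with hypothetical machine limits (helper name is illustrative):

```python
def tasks_per_node(total_tasks, max_thread_count,
                   max_tasks_per_node, max_mpitasks_per_node):
    """Tasks placed per node: threads eat into the per-node task budget,
    and the result is clamped so we never return zero."""
    tpn = min(max_tasks_per_node // max_thread_count,
              max_mpitasks_per_node,
              total_tasks)
    return tpn if tpn > 0 else 1
```

For a 128-thread node limited to 64 MPI tasks, a 200-task job running 4 threads per task fits 32 tasks on each node.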
+[docs] + def get_total_nodes(self, total_tasks, max_thread_count): + """ + Return (num_active_nodes, num_spare_nodes) + """ + # threads have already been included in nuopc interface + if self._comp_interface == "nuopc" and self.get_value("ESMF_AWARE_THREADING"): + max_thread_count = 1 + tasks_per_node = self.get_tasks_per_node(total_tasks, max_thread_count) + num_nodes = int(math.ceil(float(total_tasks) / tasks_per_node)) + return num_nodes, self.get_spare_nodes(num_nodes)
+ + +
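The node count above is just a ceiling division of total tasks by tasks per node; a minimal standalone restatement (helper name is illustrative):

```python
import math

def total_nodes(total_tasks, tasks_per_node):
    """Number of nodes needed to hold total_tasks, rounding up so a
    partially filled last node still counts as a whole node."""
    return int(math.ceil(float(total_tasks) / tasks_per_node))
```

So 191 tasks at 32 tasks per node need 6 nodes, the last one only partially filled.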
+[docs]
+    def get_spare_nodes(self, num_nodes):
+        force_spare_nodes = self.get_value("FORCE_SPARE_NODES")
+        if force_spare_nodes != -999:
+            return force_spare_nodes
+
+        if self.get_value("ALLOCATE_SPARE_NODES"):
+            ten_pct = int(math.ceil(float(num_nodes) * 0.1))
+            if ten_pct < 1:
+                return 1  # Always provide at least one spare node
+            elif ten_pct > 10:
+                return 10  # Never provide more than 10 spare nodes
+            else:
+                return ten_pct
+        else:
+            return 0
+
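The spare-node policy above is "10% of the allocation, rounded up, clamped to the range [1, 10]", with `FORCE_SPARE_NODES` (any value other than the -999 sentinel) overriding everything. A compact restatement of the same rule (function signature is illustrative, not CIME API):

```python
import math

def spare_nodes(num_nodes, allocate=True, force=-999):
    """FORCE_SPARE_NODES wins outright; otherwise, if spares are enabled,
    reserve 10% of the nodes rounded up, clamped between 1 and 10."""
    if force != -999:
        return force
    if not allocate:
        return 0
    return min(10, max(1, int(math.ceil(num_nodes * 0.1))))
```

A 4-node job gets 1 spare, a 55-node job gets 6, and anything above 100 nodes tops out at 10.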
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_mach_specific.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_mach_specific.html new file mode 100644 index 00000000000..7479d3d30a0 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_mach_specific.html @@ -0,0 +1,915 @@ + + + + + + CIME.XML.env_mach_specific — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.env_mach_specific

+"""
+Interface to the env_mach_specific.xml file.  This class inherits from EnvBase
+"""
+from CIME.XML.standard_module_setup import *
+
+from CIME.XML.env_base import EnvBase
+from CIME import utils
+from CIME.utils import transform_vars, get_cime_root
+import string, resource
+from collections import OrderedDict
+
+logger = logging.getLogger(__name__)
+
+# This class is not of type EntryID but can use functions from EntryID (e.g.
+# get_type); otherwise it would need to implement its own functions and make
+# GenericXML the parent class.
+
+[docs] +class EnvMachSpecific(EnvBase): + # pylint: disable=unused-argument + def __init__( + self, + caseroot=None, + infile="env_mach_specific.xml", + components=None, + unit_testing=False, + read_only=False, + standalone_configure=False, + comp_interface=None, + ): + """ + initialize an object interface to file env_mach_specific.xml in the case directory + + Notes on some arguments: + standalone_configure: logical - whether this is being called from the standalone + configure utility, outside of a case + """ + schema = os.path.join(utils.get_schema_path(), "env_mach_specific.xsd") + EnvBase.__init__(self, caseroot, infile, schema=schema, read_only=read_only) + self._allowed_mpi_attributes = ( + "compiler", + "mpilib", + "threaded", + "unit_testing", + "queue", + "comp_interface", + ) + self._comp_interface = comp_interface + self._unit_testing = unit_testing + self._standalone_configure = standalone_configure + +
+[docs]
+    def populate(self, machobj, attributes=None):
+        """Add entries to the file using information from a Machines object.
+        mpilib must match attributes if set
+        """
+        items = ("module_system", "environment_variables", "resource_limits", "mpirun")
+        default_run_suffix = machobj.get_child("default_run_suffix", root=machobj.root)
+
+        group_node = self.make_child("group", {"id": "compliant_values"})
+        settings = {"run_exe": None, "run_misc_suffix": None}
+
+        for item in items:
+            nodes = machobj.get_first_child_nodes(item)
+            if item == "environment_variables":
+                if len(nodes) == 0:
+                    example_text = """This section is for the user to specify any additional machine-specific env vars, or to overwrite existing ones.\n <environment_variables>\n    <env name="NAME">ARGUMENT</env>\n </environment_variables>\n """
+                    self.make_child_comment(text=example_text)
+
+            if item == "mpirun":
+                for node in nodes:
+                    mpirunnode = machobj.copy(node)
+                    match = True
+                    # We pull run_exe and run_misc_suffix from the mpirun node if its
+                    # attributes match, and use them; otherwise we use the defaults.
+ if attributes: + for attrib in attributes: + val = self.get(mpirunnode, attrib) + if val and attributes[attrib] != val: + match = False + + for subnode in machobj.get_children(root=mpirunnode): + subname = machobj.name(subnode) + if subname == "run_exe" or subname == "run_misc_suffix": + if match: + settings[subname] = self.text(subnode) + self.remove_child(subnode, root=mpirunnode) + + self.add_child(mpirunnode) + else: + for node in nodes: + self.add_child(node) + + for item in ("run_exe", "run_misc_suffix"): + if settings[item]: + value = settings[item] + else: + value = self.text( + machobj.get_child("default_" + item, root=default_run_suffix) + ) + + entity_node = self.make_child( + "entry", {"id": item, "value": value}, root=group_node + ) + self.make_child("type", root=entity_node, text="char") + self.make_child( + "desc", + root=entity_node, + text=( + "executable name" + if item == "run_exe" + else "redirect for job output" + ), + )
+ + + def _get_modules_for_case(self, case, job=None): + module_nodes = self.get_children( + "modules", root=self.get_child("module_system") + ) + modules_to_load = None + if module_nodes is not None: + modules_to_load = self._compute_module_actions(module_nodes, case, job=job) + + return modules_to_load + + def _get_envs_for_case(self, case, job=None): + env_nodes = self.get_children("environment_variables") + + envs_to_set = None + if env_nodes is not None: + envs_to_set = self._compute_env_actions(env_nodes, case, job=job) + + return envs_to_set + +
+[docs] + def load_env(self, case, force_method=None, job=None, verbose=False): + """ + Should only be called by case.load_env + """ + # Do the modules so we can refer to env vars set by the modules + # in the environment_variables block + modules_to_load = self._get_modules_for_case(case) + if modules_to_load is not None: + self._load_modules( + modules_to_load, force_method=force_method, verbose=verbose + ) + + envs_to_set = self._get_envs_for_case(case, job=job) + if envs_to_set is not None: + self._load_envs(envs_to_set, verbose=verbose) + + self._get_resources_for_case(case) + + return [] if envs_to_set is None else envs_to_set
+ + + def _get_resources_for_case(self, case): + resource_nodes = self.get_children("resource_limits") + if resource_nodes is not None: + nodes = self._compute_resource_actions(resource_nodes, case) + for name, val in nodes: + attr = getattr(resource, name) + limits = resource.getrlimit(attr) + logger.info( + "Setting resource.{} to {} from {}".format(name, val, limits) + ) + limits = (int(val), limits[1]) + resource.setrlimit(attr, limits) + + def _load_modules(self, modules_to_load, force_method=None, verbose=False): + module_system = ( + self.get_module_system_type() if force_method is None else force_method + ) + if module_system == "module": + self._load_module_modules(modules_to_load, verbose=verbose) + elif module_system == "soft": + self._load_modules_generic(modules_to_load, verbose=verbose) + elif module_system == "generic": + self._load_modules_generic(modules_to_load, verbose=verbose) + elif module_system == "none": + self._load_none_modules(modules_to_load) + else: + expect(False, "Unhandled module system '{}'".format(module_system)) + +
+[docs] + def list_modules(self): + module_system = self.get_module_system_type() + + # If the user's login shell is not sh, it's possible that modules + # won't be configured so we need to be sure to source the module + # setup script if it exists. + init_path = self.get_module_system_init_path("sh") + if init_path: + source_cmd = ". {} && ".format(init_path) + else: + source_cmd = "" + + if module_system in ["module"]: + return run_cmd_no_fail( + "{}module list".format(source_cmd), combine_output=True + ) + elif module_system == "soft": + # Does soft really not provide this capability? + return "" + elif module_system == "generic": + return run_cmd_no_fail("{}use -lv".format(source_cmd)) + elif module_system == "none": + return "" + else: + expect(False, "Unhandled module system '{}'".format(module_system))
+ + +
+[docs] + def save_all_env_info(self, filename): + """ + Get a string representation of all current environment info and + save it to file. + """ + with open(filename, "w") as f: + f.write(self.list_modules()) + run_cmd_no_fail("echo -e '\n' && env", arg_stdout=filename)
+ + +
+[docs] + def get_overrides_nodes(self, case): + overrides = {} + overrides["num_nodes"] = case.num_nodes + fnm = "env_mach_specific.xml" + output_text = transform_vars( + open(fnm, "r").read(), case=case, subgroup=None, overrides=overrides + ) + logger.info("Updating file {}".format(fnm)) + with open(fnm, "w") as fd: + fd.write(output_text) + return overrides
+ + +
+[docs] + def make_env_mach_specific_file(self, shell, case, output_dir=""): + """Writes .env_mach_specific.sh or .env_mach_specific.csh + + Args: + shell: string - 'sh' or 'csh' + case: case object + output_dir: string - path to output directory (if empty string, uses current directory) + """ + source_cmd = "." if shell == "sh" else "source" + module_system = self.get_module_system_type() + sh_init_cmd = self.get_module_system_init_path(shell) + sh_mod_cmd = self.get_module_system_cmd_path(shell) + lines = [ + "# This file is for user convenience only and is not used by the model" + ] + + lines.append("# Changes to this file will be ignored and overwritten") + lines.append( + "# Changes to the environment should be made in env_mach_specific.xml" + ) + lines.append("# Run ./case.setup --reset to regenerate this file") + if sh_init_cmd: + lines.append("{} {}".format(source_cmd, sh_init_cmd)) + + if "SOFTENV_ALIASES" in os.environ: + lines.append("{} $SOFTENV_ALIASES".format(source_cmd)) + if "SOFTENV_LOAD" in os.environ: + lines.append("{} $SOFTENV_LOAD".format(source_cmd)) + + if self._unit_testing or self._standalone_configure: + job = None + else: + job = case.get_primary_job() + modules_to_load = self._get_modules_for_case(case, job=job) + envs_to_set = self._get_envs_for_case(case, job=job) + filename = ".env_mach_specific.{}".format(shell) + if modules_to_load is not None: + if module_system == "module": + lines.extend(self._get_module_commands(modules_to_load, shell)) + else: + for action, argument in modules_to_load: + lines.append( + "{} {} {}".format( + sh_mod_cmd, action, "" if argument is None else argument + ) + ) + + if envs_to_set is not None: + for env_name, env_value in envs_to_set: + if shell == "sh": + if env_name == "source": + if env_value.startswith("sh"): + lines.append("{}".format(env_name)) + else: + lines.append("export {}={}".format(env_name, env_value)) + + elif shell == "csh": + if env_name == "source": + if env_value.startswith("csh"): 
+ lines.append("{}".format(env_name)) + else: + lines.append("setenv {} {}".format(env_name, env_value)) + else: + expect(False, "Unknown shell type: '{}'".format(shell)) + + with open(os.path.join(output_dir, filename), "w") as fd: + fd.write("\n".join(lines))
+ + + # Private API + + def _load_envs(self, envs_to_set, verbose=False): + for env_name, env_value in envs_to_set: + logger_func = logger.warning if verbose else logger.debug + if env_value is None and env_name in os.environ: + del os.environ[env_name] + logger_func("Unsetting Environment {}".format(env_name)) + elif env_value is not None: + if env_name == "source": + shell, cmd = env_value.split(" ", 1) + self._source_shell_file("source " + cmd, shell, verbose=verbose) + else: + if verbose: + print("Setting Environment {}={}".format(env_name, env_value)) + logger_func("Setting Environment {}={}".format(env_name, env_value)) + os.environ[env_name] = env_value + + def _compute_module_actions(self, module_nodes, case, job=None): + return self._compute_actions(module_nodes, "command", case, job=job) + + def _compute_env_actions(self, env_nodes, case, job=None): + return self._compute_actions(env_nodes, "env", case, job=job) + + def _compute_resource_actions(self, resource_nodes, case, job=None): + return self._compute_actions(resource_nodes, "resource", case, job=job) + + def _compute_actions(self, nodes, child_tag, case, job=None): + result = [] # list of tuples ("name", "argument") + compiler = case.get_value("COMPILER") + mpilib = case.get_value("MPILIB") + + for node in nodes: + if self._match_attribs(self.attrib(node), case, job=job): + for child in self.get_children(root=node): + expect( + self.name(child) == child_tag, + "Expected {} element".format(child_tag), + ) + if self._match_attribs(self.attrib(child), case, job=job): + val = self.text(child) + if val is not None: + # We allow a couple special substitutions for these fields + for repl_this, repl_with in [ + ("$COMPILER", compiler), + ("$MPILIB", mpilib), + ]: + val = val.replace(repl_this, repl_with) + + val = self.get_resolved_value(val) + expect( + "$" not in val, + "Not safe to leave unresolved items in env var value: '{}'".format( + val + ), + ) + + # intentional unindent, result is appended even if 
val is None
+                    name = self.get(child, "name")
+                    if name:
+                        result.append((name, val))
+                    else:
+                        result.append(
+                            ("source", self.get(child, "source") + " " + val)
+                        )
+
+        return result
+
+    def _match_attribs(self, attribs, case, job=None):
+        # check for matches with case-vars
+        for attrib in attribs:
+            if attrib == "unit_testing":  # special case
+                if not self._match(self._unit_testing, attribs["unit_testing"].upper()):
+                    return False
+            elif attrib == "queue":
+                if job is not None:
+                    val = case.get_value("JOB_QUEUE", subgroup=job)
+                    expect(
+                        val is not None,
+                        "Cannot match attrib '%s', case has no value for it"
+                        % attrib.upper(),
+                    )
+                    if not self._match(val, attribs[attrib]):
+                        return False
+            elif attrib == "name":
+                pass
+            elif attrib == "source":
+                pass
+            else:
+                val = case.get_value(attrib.upper())
+                expect(
+                    val is not None,
+                    "Cannot match attrib '%s', case has no value for it"
+                    % attrib.upper(),
+                )
+                if not self._match(val, attribs[attrib]):
+                    return False
+
+        return True
+
+    def _match(self, my_value, xml_value):
+        if xml_value.startswith("!"):
+            result = re.match(xml_value[1:] + "$", str(my_value)) is None
+        elif isinstance(my_value, bool):
+            if my_value:
+                result = xml_value == "TRUE"
+            else:
+                result = xml_value == "FALSE"
+        else:
+            result = re.match(xml_value + "$", str(my_value)) is not None
+
+        logger.debug(
+            "(env_mach_specific) _match {} {} {}".format(my_value, xml_value, result)
+        )
+        return result
+
+    def _get_module_commands(self, modules_to_load, shell):
+        # Note this is independent of module system type
+        mod_cmd = self.get_module_system_cmd_path(shell)
+        cmds = []
+        last_action = None
+        last_cmd = None
+
+        # Normally, we will try to combine or batch module commands together...
+        #
+        # module load X
+        # module load Y
+        # module load Z
+        #
+        # is the same as ...
+        #
+        # module load X Y Z
+        #
+        # ... except the latter is significantly faster due to performing 1/3 as
+        # many forks.
+        #
+        # Not all module commands support batching though and we enumerate those
+        # here.
+        actions_that_cannot_be_batched = ["swap", "switch"]
+
+        for action, argument in modules_to_load:
+            if argument is None:
+                argument = ""
+
+            if action == last_action and action not in actions_that_cannot_be_batched:
+                last_cmd = "{} {}".format(last_cmd, argument)
+            else:
+                if last_cmd is not None:
+                    cmds.append(last_cmd)
+
+                last_cmd = "{} {} {}".format(
+                    mod_cmd, action, "" if argument is None else argument
+                )
+                last_action = action
+
+        if last_cmd:
+            cmds.append(last_cmd)
+
+        return cmds
+
+    def _load_module_modules(self, modules_to_load, verbose=False):
+        logger_func = logger.warning if verbose else logger.debug
+        for cmd in self._get_module_commands(modules_to_load, "python"):
+            logger_func("module command is {}".format(cmd))
+            stat, py_module_code, errout = run_cmd(cmd)
+            expect(
+                stat == 0 and (len(errout) == 0 or self.allow_error()),
+                "module command {} failed with message:\n{}".format(cmd, errout),
+            )
+            exec(py_module_code)
+
+    def _load_modules_generic(self, modules_to_load, verbose=False):
+        sh_init_cmd = self.get_module_system_init_path("sh")
+        sh_mod_cmd = self.get_module_system_cmd_path("sh")
+
+        # This is for environment management systems that do not have a
+        # python interface; the only way to determine what they do is to
+        # run shell commands and look at the resulting changes in the
+        # environment.
+
+        cmd = ". {}".format(sh_init_cmd)
+
+        if "SOFTENV_ALIASES" in os.environ:
+            cmd += " && . $SOFTENV_ALIASES"
+        if "SOFTENV_LOAD" in os.environ:
+            cmd += " && . $SOFTENV_LOAD"
+
+        for action, argument in modules_to_load:
+            cmd += " && {} {} {}".format(
+                sh_mod_cmd, action, "" if argument is None else argument
+            )
+
+        self._source_shell_file(cmd, verbose=verbose)
+
+    def _source_shell_file(self, cmd, shell="sh", verbose=False):
+        # Use null terminated lines to give us something more definitive to split on.
+ # Env vars can contain newlines, so splitting on newlines can be ambiguous + logger_func = logger.warning if verbose else logger.debug + cmd += " && env -0" + logger_func("cmd: {}".format(cmd)) + output = run_cmd_no_fail(cmd, executable=shell, verbose=verbose) + + ################################################### + # Parse the output to set the os.environ dictionary + ################################################### + newenv = OrderedDict() + for line in output.split("\0"): + if "=" in line: + key, val = line.split("=", 1) + newenv[key] = val + + # resolve variables + for key, val in newenv.items(): + newenv[key] = string.Template(val).safe_substitute(newenv) + + # Set environment with new or updated values + for key in newenv: + if key in os.environ and os.environ[key] == newenv[key]: + pass + else: + os.environ[key] = newenv[key] + + for oldkey in list(os.environ.keys()): + if oldkey not in newenv: + del os.environ[oldkey] + + def _load_none_modules(self, modules_to_load): + """ + No Action required + """ + expect( + not modules_to_load, + "Module system was specified as 'none' yet there are modules that need to be loaded?", + ) + + def _mach_specific_header(self, shell): + """ + write a shell module file for this case. + """ + header = """ +#!/usr/bin/env {} +#=============================================================================== +# Automatically generated module settings for $self->{{machine}} +# DO NOT EDIT THIS FILE DIRECTLY! Please edit env_mach_specific.xml +# in your CASEROOT. This file is overwritten every time modules are loaded! +#=============================================================================== +""".format( + shell + ) + source_cmd = "." if shell == "sh" else "source" + header += "{} {}".format(source_cmd, self.get_module_system_init_path(shell)) + return header + +
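The batching of consecutive module commands described in the comments above (load X, load Y, load Z collapsing into one `module load X Y Z`, except for actions like swap/switch) can be sketched as a small self-contained function, stripped of the module-system lookup (the helper name is illustrative, not CIME API):

```python
def batch_module_commands(actions, mod_cmd="module"):
    """Coalesce consecutive identical actions into one command line.

    actions: list of (action, argument) pairs. Actions such as swap and
    switch are never batched, since most module systems reject that.
    """
    no_batch = {"swap", "switch"}
    cmds, last_action, last_cmd = [], None, None
    for action, argument in actions:
        argument = argument or ""
        if action == last_action and action not in no_batch:
            # Same batchable action as before: append the argument.
            last_cmd = "{} {}".format(last_cmd, argument)
        else:
            # New action: flush the pending command and start a fresh one.
            if last_cmd is not None:
                cmds.append(last_cmd)
            last_cmd = "{} {} {}".format(mod_cmd, action, argument).rstrip()
            last_action = action
    if last_cmd:
        cmds.append(last_cmd)
    return cmds
```

Three loads and two swaps therefore produce three commands, not five, saving forks on the loads while keeping each swap separate.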
+[docs] + def get_module_system_type(self): + """ + Return the module system used on this machine + """ + module_system = self.get_child("module_system") + return self.get(module_system, "type")
+ + +
+[docs]
+    def allow_error(self):
+        """
+        Return True if stderr output from module commands should be tolerated
+        rather than treated as an error. Default False. This is necessary since
+        implementations of environment modules are highly variable and some
+        systems produce stderr output even when things are working fine.
+        """
+        module_system = self.get_child("module_system")
+        value = self.get(module_system, "allow_error")
+        return value.upper() == "TRUE" if value is not None else False
+ + +
+[docs] + def get_module_system_init_path(self, lang): + init_nodes = self.get_optional_child( + "init_path", attributes={"lang": lang}, root=self.get_child("module_system") + ) + return ( + self.get_resolved_value(self.text(init_nodes)) + if init_nodes is not None + else None + )
+ + +
+[docs] + def get_module_system_cmd_path(self, lang): + cmd_nodes = self.get_optional_child( + "cmd_path", attributes={"lang": lang}, root=self.get_child("module_system") + ) + return ( + self.get_resolved_value(self.text(cmd_nodes)) + if cmd_nodes is not None + else None + )
+ + + def _find_best_mpirun_match(self, attribs): + mpirun_nodes = self.get_children("mpirun") + best_match = None + best_num_matched = -1 + default_match = None + best_num_matched_default = -1 + for mpirun_node in mpirun_nodes: + xml_attribs = self.attrib(mpirun_node) + all_match = True + matches = 0 + is_default = False + + for key, value in attribs.items(): + expect( + key in self._allowed_mpi_attributes, + "Unexpected key {} in mpirun attributes".format(key), + ) + if key in xml_attribs: + if xml_attribs[key].lower() == "false": + xml_attrib = False + elif xml_attribs[key].lower() == "true": + xml_attrib = True + else: + xml_attrib = xml_attribs[key] + + if xml_attrib == value: + matches += 1 + elif ( + key == "mpilib" + and value != "mpi-serial" + and xml_attrib == "default" + ): + is_default = True + else: + all_match = False + break + + if all_match: + if is_default: + if matches > best_num_matched_default: + default_match = mpirun_node + best_num_matched_default = matches + else: + if matches > best_num_matched: + best_match = mpirun_node + best_num_matched = matches + + # if there are no special arguments required for mpi-serial it need not have an entry in config_machines.xml + if ( + "mpilib" in attribs + and attribs["mpilib"] == "mpi-serial" + and best_match is None + ): + raise ValueError() + + expect( + best_match is not None or default_match is not None, + "Could not find a matching MPI for attributes: {}".format(attribs), + ) + + return best_match if best_match is not None else default_match + +
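The matcher above scores each `<mpirun>` entry by how many of its declared attributes equal the case's values, disqualifying any entry with a conflicting attribute, and keeps `mpilib="default"` entries as a separate fallback. A simplified sketch of that scoring scheme over plain attribute dicts (it omits the boolean coercion and the mpi-serial ValueError special case in the real method):

```python
def best_mpirun_match(entries, attribs):
    """Pick the entry whose declared attributes best match the case.

    entries: list of attribute dicts, one per hypothetical <mpirun> element.
    An entry stays in the running only if every attribute it declares
    matches; mpilib="default" entries are tracked as a fallback.
    """
    best, best_n = None, -1
    fallback, fallback_n = None, -1
    for entry in entries:
        matches, is_default, ok = 0, False, True
        for key, value in attribs.items():
            if key not in entry:
                continue  # undeclared attributes neither match nor conflict
            if entry[key] == value:
                matches += 1
            elif key == "mpilib" and value != "mpi-serial" and entry[key] == "default":
                is_default = True
            else:
                ok = False
                break
        if not ok:
            continue
        if is_default:
            if matches > fallback_n:
                fallback, fallback_n = entry, matches
        elif matches > best_n:
            best, best_n = entry, matches
    return best if best is not None else fallback
```

An exact mpilib/compiler match beats the default entry; when nothing matches exactly, the default entry is used.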
+[docs] + def get_aprun_mode(self, attribs): + default_mode = "default" + valid_modes = ("ignore", "default", "override") + + try: + the_match = self._find_best_mpirun_match(attribs) + except ValueError: + return default_mode + + mode_node = self.get_children("aprun_mode", root=the_match) + + if len(mode_node) == 0: + return default_mode + + expect(len(mode_node) == 1, 'Found multiple "aprun_mode" elements.') + + # should have only one element to select from + mode = self.text(mode_node[0]) + + expect( + mode in valid_modes, + f"Value {mode!r} for \"aprun_mode\" is not valid, options are {', '.join(valid_modes)!r}", + ) + + return mode
+ + +
+[docs] + def get_aprun_args(self, case, attribs, job, overrides=None): + args = {} + + try: + the_match = self._find_best_mpirun_match(attribs) + except ValueError: + return None + + arg_node = self.get_optional_child("arguments", root=the_match) + + if arg_node: + arg_nodes = self.get_children("arg", root=arg_node) + + for arg_node in arg_nodes: + position = self.get(arg_node, "position") + + if position is None: + position = "per" + + arg_value = transform_vars( + self.text(arg_node), + case=case, + subgroup=job, + overrides=overrides, + default=self.get(arg_node, "default"), + ) + + args[arg_value] = dict(position=position) + + return args
+ + +
+[docs] + def get_mpirun(self, case, attribs, job, exe_only=False, overrides=None): + """ + Find best match, return (executable, {arg_name : text}) + """ + args = [] + + try: + the_match = self._find_best_mpirun_match(attribs) + except ValueError: + return "", [], None, None + + # Now that we know the best match, compute the arguments + if not exe_only: + arg_node = self.get_optional_child("arguments", root=the_match) + if arg_node: + arg_nodes = self.get_children("arg", root=arg_node) + for arg_node in arg_nodes: + arg_value = transform_vars( + self.text(arg_node), + case=case, + subgroup=job, + overrides=overrides, + default=self.get(arg_node, "default"), + ) + args.append(arg_value) + + exec_node = self.get_child("executable", root=the_match) + expect(exec_node is not None, "No executable found") + executable = self.text(exec_node) + run_exe = None + run_misc_suffix = None + + run_exe_node = self.get_optional_child("run_exe", root=the_match) + if run_exe_node: + run_exe = self.text(run_exe_node) + + run_misc_suffix_node = self.get_optional_child( + "run_misc_suffix", root=the_match + ) + if run_misc_suffix_node: + run_misc_suffix = self.text(run_misc_suffix_node) + + return executable, args, run_exe, run_misc_suffix
+ + +
+[docs] + def get_type_info(self, vid): + return "char"
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_run.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_run.html new file mode 100644 index 00000000000..8904c5d72bc --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_run.html @@ -0,0 +1,200 @@ + + + + + + CIME.XML.env_run — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.env_run

+"""
+Interface to the env_run.xml file.  This class inherits from EnvBase
+"""
+from CIME.XML.standard_module_setup import *
+
+from CIME.XML.env_base import EnvBase
+
+from CIME import utils
+from CIME.utils import convert_to_type
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class EnvRun(EnvBase): + def __init__( + self, case_root=None, infile="env_run.xml", components=None, read_only=False + ): + """ + initialize an object interface to file env_run.xml in the case directory + """ + self._components = components + self._pio_async_interface = {} + + if components: + for comp in components: + self._pio_async_interface[comp] = False + + schema = os.path.join(utils.get_schema_path(), "env_entry_id.xsd") + + EnvBase.__init__(self, case_root, infile, schema=schema, read_only=read_only) + +
+[docs]
+    def get_value(self, vid, attribute=None, resolved=True, subgroup=None):
+        """
+        Get a value for the entry with id attribute vid, or from the values
+        field if the attribute argument is provided and matches.
+        Special case for PIO variables when PIO_ASYNC_INTERFACE is True.
+        """
+        if any(self._pio_async_interface.values()):
+            vid, comp, iscompvar = self.check_if_comp_var(vid, attribute)
+            if vid.startswith("PIO") and iscompvar:
+                if comp and comp != "CPL":
+                    logger.warning("Only CPL settings are used for PIO in async mode")
+                subgroup = "CPL"
+
+        return EnvBase.get_value(self, vid, attribute, resolved, subgroup)
+ + +
+[docs] + def set_value(self, vid, value, subgroup=None, ignore_type=False): + """ + Set the value of an entry-id field to value + Returns the value or None if not found + subgroup is ignored in the general routine and applied in specific methods + """ + comp = None + if any(self._pio_async_interface.values()): + vid, comp, iscompvar = self.check_if_comp_var(vid, None) + if vid.startswith("PIO") and iscompvar: + if comp and comp != "CPL": + logger.warning("Only CPL settings are used for PIO in async mode") + subgroup = "CPL" + + if vid == "PIO_ASYNC_INTERFACE": + if comp: + if type(value) == type(True): + self._pio_async_interface[comp] = value + else: + self._pio_async_interface[comp] = convert_to_type( + value, "logical", vid + ) + + return EnvBase.set_value(self, vid, value, subgroup, ignore_type)
+
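When any component enables async PIO, both accessors above redirect per-component PIO variables into the CPL subgroup, warning if the caller asked about another component. A toy sketch of that redirect, with a plain dict standing in for the per-component flags (names are illustrative, not CIME API):

```python
def resolve_pio_subgroup(vid, is_comp_var, subgroup, pio_async_by_comp):
    """In async PIO mode only the coupler (CPL) group's PIO settings are
    honored, so any per-component PIO variable is redirected there."""
    if any(pio_async_by_comp.values()) and vid.startswith("PIO") and is_comp_var:
        return "CPL"
    return subgroup

# PIO_STRIDE for any component is redirected once one component runs async.
flags = {"ATM": True, "OCN": False}
```

Non-PIO variables, and all lookups when no component is async, keep whatever subgroup the caller passed.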
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_test.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_test.html new file mode 100644 index 00000000000..e2f01cb9296 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_test.html @@ -0,0 +1,287 @@ + + + + + + CIME.XML.env_test — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.env_test

+"""
+Interface to the env_test.xml file.  This class inherits from EnvBase
+"""
+from CIME.XML.standard_module_setup import *
+
+from CIME.XML.env_base import EnvBase
+from CIME.utils import convert_to_type
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class EnvTest(EnvBase): + # pylint: disable=unused-argument + def __init__( + self, case_root=None, infile="env_test.xml", components=None, read_only=False + ): + """ + initialize an object interface to file env_test.xml in the case directory + """ + EnvBase.__init__(self, case_root, infile, read_only=read_only) + +
+[docs] + def add_test(self, testnode): + self.add_child(testnode) + self.write()
+ + +
+[docs]
+    def set_initial_values(self, case):
+        """
+        The values to initialize a test are defined in env_test.xml;
+        copy them to the appropriate case env files to initialize a test.
+        Fields set in the BUILD and RUN clauses are ignored here; they are
+        set in the appropriate build and run phases.
+        """
+        tnode = self.get_child("test")
+        for child in self.get_children(root=tnode):
+            if self.text(child) is not None:
+                logger.debug(
+                    "Setting {} to {} for test".format(
+                        self.name(child), self.text(child)
+                    )
+                )
+                if "$" in self.text(child):
+                    case.set_value(self.name(child), self.text(child), ignore_type=True)
+                else:
+                    item_type = case.get_type_info(self.name(child))
+                    if item_type:
+                        value = convert_to_type(
+                            self.text(child), item_type, self.name(child)
+                        )
+                        case.set_value(self.name(child), value)
+        case.flush()
+        return
+ + +
+[docs] + def set_test_parameter(self, name, value): + """ + If a node already exists update the value + otherwise create a node and initialize it to value + """ + case = self.get_value("TESTCASE") + tnode = self.get_child("test", {"NAME": case}) + idnode = self.get_optional_child(name, root=tnode) + + if idnode is None: + self.make_child(name, root=tnode, text=value) + else: + self.set_text(idnode, value)
+ + +
+[docs] + def get_test_parameter(self, name): + case = self.get_value("TESTCASE") + tnode = self.get_child("test", {"NAME": case}) + value = None + idnode = self.get_optional_child(name, root=tnode) + if idnode is not None: + value = self.text(idnode) + return value
+ + +
+[docs] + def get_step_phase_cnt(self, step): + bldnodes = self.get_children(step) + cnt = 0 + for node in bldnodes: + cnt = max(cnt, int(self.get(node, "phase"))) + return cnt
+ + +
+[docs] + def get_settings_for_phase(self, name, cnt): + node = self.get_optional_child(name, attributes={"phase": cnt}) + settings = [] + if node is not None: + for child in node: + logger.debug( + "Here child is {} with value {}".format( + self.name(child), self.text(child) + ) + ) + settings.append((self.name(child), self.text(child))) + + return settings
+ + +
+[docs] + def run_phase_get_clone_name(self, phase): + node = self.get_child("RUN", attributes={"phase": str(phase)}) + if self.has(node, "clone"): + return self.get(node, "clone") + return None
+ + +
+[docs] + def cleanupnode(self, node): + """ + keep the values component set + """ + fnode = self.get_child(name="file", root=node) + self.remove_child(fnode, root=node) + gnode = self.get_child(name="group", root=node) + self.remove_child(gnode, root=node) + dnode = self.get_optional_child(name="default_value", root=node) + if dnode is not None: + self.remove_child(dnode, root=node) + return node
+ + +
+[docs] + def set_value(self, vid, value, subgroup=None, ignore_type=False): + """ + check if vid is in test section of file + """ + newval = EnvBase.set_value(self, vid, value, subgroup, ignore_type) + if newval is None: + tnode = self.get_optional_child("test") + if tnode is not None: + newval = self.set_element_text(vid, value, root=tnode) + return newval
+ + +
+[docs] + def get_value(self, vid, attribute=None, resolved=True, subgroup=None): + value = EnvBase.get_value(self, vid, attribute, resolved, subgroup) + if value is None: + tnode = self.get_optional_child("test") + if tnode is not None: + value = self.get_element_text(vid, root=tnode) + return value
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_workflow.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_workflow.html new file mode 100644 index 00000000000..69a33800698 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/env_workflow.html @@ -0,0 +1,321 @@ + + + + + + CIME.XML.env_workflow — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.env_workflow

+"""
+Interface to the env_workflow.xml file.  This class inherits from EnvBase
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.XML.env_base import EnvBase
+from CIME.utils import get_cime_root
+import re, math
+
+logger = logging.getLogger(__name__)
+
+# pragma pylint: disable=attribute-defined-outside-init
+
+
+
+[docs] +class EnvWorkflow(EnvBase): + def __init__(self, case_root=None, infile="env_workflow.xml", read_only=False): + """ + initialize an object interface to file env_workflow.xml in the case directory + """ + # This arbitrary setting should always be overwritten + # schema = os.path.join(get_cime_root(), "CIME", "config", "xml_schemas", "env_workflow.xsd") + # TODO: define schema for this file + schema = None + super(EnvWorkflow, self).__init__( + case_root, infile, schema=schema, read_only=read_only + ) + +
+[docs] + def create_job_groups(self, batch_jobs, is_test): + # Subtle: in order to support dynamic batch jobs, we need to remove the + # job_submission group and replace with job-based groups + orig_group = self.get_optional_child( + "group", + {"id": "job_submission"}, + err_msg="Looks like job groups have already been created", + ) + expect(orig_group, "No workflow groups found") + orig_group_children = super(EnvWorkflow, self).get_children(root=orig_group) + + childnodes = [] + for child in reversed(orig_group_children): + childnodes.append(child) + + self.remove_child(orig_group) + + for name, jdict in batch_jobs: + if name == "case.run" and is_test: + pass # skip + elif name == "case.test" and not is_test: + pass # skip + elif name == "case.run.sh": + pass # skip + else: + new_job_group = self.make_child("group", {"id": name}) + for field in jdict.keys(): + if field == "runtime_parameters": + continue + val = jdict[field] + node = self.make_child( + "entry", {"id": field, "value": val}, root=new_job_group + ) + self.make_child("type", root=node, text="char") + + for child in childnodes: + self.add_child(self.copy(child), root=new_job_group)
+ + +
+[docs] + def get_jobs(self): + groups = self.get_children("group") + results = [] + for group in groups: + results.append(self.get(group, "id")) + return results
+ + +
+[docs] + def get_type_info(self, vid): + gnodes = self.get_children("group") + type_info = None + for gnode in gnodes: + nodes = self.get_children("entry", {"id": vid}, root=gnode) + type_info = None + for node in nodes: + new_type_info = self._get_type_info(node) + if type_info is None: + type_info = new_type_info + else: + expect( + type_info == new_type_info, + "Inconsistent type_info for entry id={} {} {}".format( + vid, new_type_info, type_info + ), + ) + return type_info
+ + +
+[docs] + def get_job_specs(self, case, job): + task_count = case.get_resolved_value(self.get_value("task_count", subgroup=job)) + tasks_per_node = case.get_resolved_value( + self.get_value("tasks_per_node", subgroup=job) + ) + thread_count = case.get_resolved_value( + self.get_value("thread_count", subgroup=job) + ) + max_gpus_per_node = case.get_value("MAX_GPUS_PER_NODE") + ngpus_per_node = case.get_value("NGPUS_PER_NODE") + num_nodes = None + if not ngpus_per_node: + max_gpus_per_node = 0 + ngpus_per_node = 0 + if task_count is not None and tasks_per_node is not None: + task_count = int(task_count) + num_nodes = int(math.ceil(float(task_count) / float(tasks_per_node))) + tasks_per_node = task_count // num_nodes + if not thread_count: + thread_count = 1 + if ngpus_per_node > max_gpus_per_node: + ngpus_per_node = max_gpus_per_node + + return task_count, num_nodes, tasks_per_node, thread_count, ngpus_per_node
+ + + # pylint: disable=arguments-differ +
+[docs] + def get_value(self, item, attribute=None, resolved=True, subgroup="PRIMARY"): + """ + Must default subgroup to something in order to provide single return value + """ + value = None + if subgroup == "PRIMARY": + subgroup = "case.test" if "case.test" in self.get_jobs() else "case.run" + + # pylint: disable=assignment-from-none + if value is None: + value = super(EnvWorkflow, self).get_value( + item, attribute=attribute, resolved=resolved, subgroup=subgroup + ) + + return value
+ + + # pylint: disable=arguments-differ +
+[docs] + def set_value(self, item, value, subgroup=None, ignore_type=False): + """ + Override the entry_id set_value function with some special cases for this class + """ + val = None + + # allow the user to set item for all jobs if subgroup is not provided + if subgroup is None: + gnodes = self.get_children("group") + for gnode in gnodes: + node = self.get_optional_child("entry", {"id": item}, root=gnode) + if node is not None: + self._set_value(node, value, vid=item, ignore_type=ignore_type) + val = value + else: + group = self.get_optional_child("group", {"id": subgroup}) + if group is not None: + node = self.get_optional_child("entry", {"id": item}, root=group) + if node is not None: + val = self._set_value( + node, value, vid=item, ignore_type=ignore_type + ) + + return val
+ + +
+[docs] + def get_children(self, name=None, attributes=None, root=None): + if name in ( + "JOB_WALLCLOCK_TIME", + "PROJECT", + "CHARGE_ACCOUNT", + "JOB_QUEUE", + "BATCH_COMMAND_FLAGS", + ): + nodes = super(EnvWorkflow, self).get_children( + "entry", attributes={"id": name}, root=root + ) + else: + nodes = super(EnvWorkflow, self).get_children( + name, attributes=attributes, root=root + ) + + return nodes
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/expected_fails_file.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/expected_fails_file.html new file mode 100644 index 00000000000..32e0a44439e --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/expected_fails_file.html @@ -0,0 +1,202 @@ + + + + + + CIME.XML.expected_fails_file — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.expected_fails_file

+"""Interface to an expected failure xml file
+
+Here is an example:
+
+<?xml version= "1.0"?>
+
+<expectedFails version="1.1">
+  <test name="ERP_D_Ld10_P36x2.f10_f10_musgs.IHistClm50BgcCrop.cheyenne_intel.clm-ciso_decStart">
+    <phase name="RUN">
+      <status>FAIL</status>
+      <issue>#404</issue>
+    </phase>
+    <phase name="COMPARE_base_rest">
+      <status>PEND</status>
+      <issue>#404</issue>
+      <comment>Because of the RUN failure, this phase is listed as PEND</comment>
+    </phase>
+  </test>
+  <test name="PFS_Ld20.f09_g17.I2000Clm50BgcCrop.cheyenne_intel">
+    <phase name="GENERATE">
+      <status>FAIL</status>
+      <issue>ESMCI/cime#2917</issue>
+    </phase>
+    <phase name="BASELINE">
+      <status>FAIL</status>
+      <issue>ESMCI/cime#2917</issue>
+    </phase>
+  </test>
+</expectedFails>
+
+However, many of the above elements are optional, for human consumption only (i.e., not
+parsed here). The only required elements are given by this example:
+
+<?xml version= "1.0"?>
+
+<expectedFails version="1.1">
+  <test name="...">
+    <phase name="...">
+      <status>...</status>
+    </phase>
+  </test>
+</expectedFails>
+"""
+
+from CIME.XML.standard_module_setup import *
+
+from CIME import utils
+from CIME.XML.generic_xml import GenericXML
+from CIME.expected_fails import ExpectedFails
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class ExpectedFailsFile(GenericXML): + def __init__(self, infile): + schema = os.path.join(utils.get_schema_path(), "expected_fails_file.xsd") + GenericXML.__init__(self, infile, schema=schema) + +
+[docs] + def get_expected_fails(self): + """Returns a dictionary of ExpectedFails objects, where the keys are test names""" + xfails = {} + test_nodes = self.get_children("test") + for tnode in test_nodes: + test_name = self.attrib(tnode)["name"] + phase_nodes = self.get_children("phase", root=tnode) + for pnode in phase_nodes: + phase_name = self.attrib(pnode)["name"] + status_node = self.get_child("status", root=pnode) + status = self.text(status_node) + # issue and comment elements are not currently parsed + if test_name not in xfails: + xfails[test_name] = ExpectedFails() + xfails[test_name].add_failure(phase_name, status) + + return xfails
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/files.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/files.html new file mode 100644 index 00000000000..3ec212906eb --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/files.html @@ -0,0 +1,293 @@ + + + + + + CIME.XML.files — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.files

+"""
+Interface to the config_files.xml file.  This class inherits from EntryID.py
+"""
+import re
+import os
+from CIME.XML.standard_module_setup import *
+
+from CIME.XML.entry_id import EntryID
+from CIME.utils import (
+    expect,
+    get_cime_root,
+    get_config_path,
+    get_schema_path,
+    get_model,
+)
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class Files(EntryID): + def __init__(self, comp_interface=None): + """ + initialize an object + + >>> files = Files() + >>> files.get_value('CASEFILE_HEADERS',resolved=False) + '$CIMEROOT/CIME/data/config/config_headers.xml' + """ + if comp_interface is None: + comp_interface = "mct" + cimeroot = get_cime_root() + cimeroot_parent = os.path.dirname(cimeroot) + config_path = get_config_path() + schema_path = get_schema_path() + + infile = os.path.join(config_path, get_model(), "config_files.xml") + expect(os.path.isfile(infile), "Could not find or open file {}".format(infile)) + + schema = os.path.join(schema_path, "entry_id.xsd") + + EntryID.__init__(self, infile, schema=schema) + + config_files_override = os.path.join(cimeroot_parent, ".config_files.xml") + # variables COMP_ROOT_DIR_{} are mutable, all other variables are read only + self.COMP_ROOT_DIR = {} + self._comp_interface = comp_interface + self._cpl_comp = {} + # .config_file.xml at the top level may overwrite COMP_ROOT_DIR_ nodes in config_files + + if os.path.isfile(config_files_override): + self.read(config_files_override) + self.overwrite_existing_entries() + elif self.get_version() >= 3.0: + model_config_files = self.get_value("MODEL_CONFIG_FILES") + self.read(model_config_files) + self.overwrite_existing_entries() + +
+[docs] + def get_value(self, vid, attribute=None, resolved=True, subgroup=None): + if vid == "COMP_ROOT_DIR_CPL": + if self._cpl_comp: + attribute = self._cpl_comp + elif attribute: + self._cpl_comp = attribute + else: + self._cpl_comp["component"] = "cpl" + if "COMP_ROOT_DIR" in vid: + if vid in self.COMP_ROOT_DIR: + if attribute is not None: + if vid + attribute["component"] in self.COMP_ROOT_DIR: + return self.COMP_ROOT_DIR[vid + attribute["component"]] + else: + return self.COMP_ROOT_DIR[vid] + + newatt = {"comp_interface": self._comp_interface} + if attribute: + newatt.update(attribute) + value = super(Files, self).get_value( + vid, attribute=newatt, resolved=False, subgroup=subgroup + ) + if value is None and attribute is not None: + value = super(Files, self).get_value( + vid, attribute=attribute, resolved=False, subgroup=subgroup + ) + if value is None: + value = super(Files, self).get_value( + vid, attribute=None, resolved=False, subgroup=subgroup + ) + + if ( + "COMP_ROOT_DIR" not in vid + and value is not None + and "COMP_ROOT_DIR" in value + ): + m = re.search("(COMP_ROOT_DIR_[^/]+)/", value) + comp_root_dir_var_name = m.group(1) + newatt = {"comp_interface": self._comp_interface} + if attribute: + newatt.update(attribute) + + crd_node = self.scan_optional_child( + comp_root_dir_var_name, attributes=newatt + ) + if crd_node: + comp_root_dir = self.get_value( + comp_root_dir_var_name, + attribute=newatt, + resolved=False, + subgroup=subgroup, + ) + else: + comp_root_dir = self.get_value( + comp_root_dir_var_name, + attribute=attribute, + resolved=False, + subgroup=subgroup, + ) + self.set_value(comp_root_dir_var_name, comp_root_dir, subgroup=attribute) + if resolved: + value = value.replace("$" + comp_root_dir_var_name, comp_root_dir) + + if resolved and value is not None: + value = value.replace("$COMP_INTERFACE", self._comp_interface) + value = self.get_resolved_value(value) + return value
+ + +
+[docs] + def set_value(self, vid, value, subgroup=None, ignore_type=False): + if "COMP_ROOT_DIR" in vid: + if subgroup is not None: + self.COMP_ROOT_DIR[vid + subgroup["component"]] = value + else: + self.COMP_ROOT_DIR[vid] = value + + else: + expect(False, "Attempt to set a nonmutable variable {}".format(vid)) + return value
+ + +
+[docs] + def get_schema(self, nodename, attributes=None): + node = self.get_optional_child("entry", {"id": nodename}) + schemanode = self.get_optional_child("schema", root=node, attributes=attributes) + if schemanode is not None: + logger.debug("Found schema for {}".format(nodename)) + return self.get_resolved_value(self.text(schemanode)) + return None
+ + +
+[docs] + def get_components(self, nodename): + node = self.get_optional_child("entry", {"id": nodename}) + if node is not None: + valnodes = self.get_children( + "value", root=self.get_child("values", root=node) + ) + values = [] + for valnode in valnodes: + value = self.get(valnode, "component") + values.append(value) + return values + + return None
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/generic_xml.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/generic_xml.html new file mode 100644 index 00000000000..ef95ff10f23 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/generic_xml.html @@ -0,0 +1,971 @@ + + + + + + CIME.XML.generic_xml — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.generic_xml

+"""
+Common interface to XML files, this is an abstract class and is expected to
+be used by other XML interface modules and not directly.
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.utils import safe_copy, get_src_root
+
+import xml.etree.ElementTree as ET
+
+# pylint: disable=import-error
+from distutils.spawn import find_executable
+import getpass
+from copy import deepcopy
+from collections import namedtuple
+
+logger = logging.getLogger(__name__)
+
+
+class _Element(
+    object
+):  # private class, don't want users constructing directly or calling methods on it
+    def __init__(self, xml_element):
+        self.xml_element = xml_element
+
+    def __eq__(self, rhs):
+        expect(isinstance(rhs, _Element), "Wrong type")
+        return self.xml_element == rhs.xml_element  # pylint: disable=protected-access
+
+    def __ne__(self, rhs):
+        expect(isinstance(rhs, _Element), "Wrong type")
+        return self.xml_element != rhs.xml_element  # pylint: disable=protected-access
+
+    def __hash__(self):
+        return hash(self.xml_element)
+
+    def __deepcopy__(self, _):
+        return _Element(deepcopy(self.xml_element))
+
+
+
+[docs] +class GenericXML(object): + + _FILEMAP = {} + DISABLE_CACHING = False + CacheEntry = namedtuple("CacheEntry", ["tree", "root", "modtime"]) + +
+[docs] + @classmethod + def invalidate(cls, filename): + if filename in cls._FILEMAP: + del cls._FILEMAP[filename]
+ + + def __init__( + self, + infile=None, + schema=None, + root_name_override=None, + root_attrib_override=None, + read_only=True, + ): + """ + Initialize an object + """ + logger.debug("Initializing {}".format(infile)) + self.tree = None + self.root = None + self.locked = False + self.read_only = read_only + self.filename = infile + self.needsrewrite = False + if infile is None: + return + + if ( + os.path.isfile(infile) + and os.access(infile, os.R_OK) + and os.stat(infile).st_size > 0 + ): + # If file is defined and exists, read it + self.read(infile, schema) + else: + # if file does not exist create a root xml element + # and set it's id to file + expect( + not self.read_only, + "Makes no sense to have empty read-only file: {}".format(infile), + ) + logger.debug("File {} does not exist.".format(infile)) + expect("$" not in infile, "File path not fully resolved: {}".format(infile)) + + root = _Element(ET.Element("xml")) + + if root_name_override: + self.root = self.make_child( + root_name_override, root=root, attributes=root_attrib_override + ) + else: + self.root = self.make_child( + "file", + root=root, + attributes={"id": os.path.basename(infile), "version": "2.0"}, + ) + + self.tree = ET.ElementTree(root) + + self._FILEMAP[infile] = self.CacheEntry(self.tree, self.root, 0.0) + +
+[docs] + def read(self, infile, schema=None): + """ + Read and parse an xml file into the object + """ + cached_read = False + if not self.DISABLE_CACHING and infile in self._FILEMAP: + timestamp_cache = self._FILEMAP[infile].modtime + timestamp_file = os.path.getmtime(infile) + if timestamp_file == timestamp_cache: + logger.debug("read (cached): {}".format(infile)) + expect( + self.read_only or not self.filename or not self.needsrewrite, + "Reading into object marked for rewrite, file {}".format( + self.filename + ), + ) + self.tree, self.root, _ = self._FILEMAP[infile] + cached_read = True + + if not cached_read: + logger.debug("read: {}".format(infile)) + with open(infile, "r", encoding="utf-8") as fd: + self.read_fd(fd) + + if schema is not None and self.get_version() > 1.0: + self.validate_xml_file(infile, schema) + + logger.debug("File version is {}".format(str(self.get_version()))) + + self._FILEMAP[infile] = self.CacheEntry( + self.tree, self.root, os.path.getmtime(infile) + )
+ + +
+[docs] + def read_fd(self, fd): + expect( + self.read_only or not self.filename or not self.needsrewrite, + "Reading into object marked for rewrite, file {}".format(self.filename), + ) + read_only = self.read_only + if self.tree: + addroot = _Element(ET.parse(fd).getroot()) + # we need to override the read_only mechanism here to append the xml object + self.read_only = False + if addroot.xml_element.tag == self.name(self.root): + for child in self.get_children(root=addroot): + self.add_child(child) + else: + self.add_child(addroot) + self.read_only = read_only + else: + self.tree = ET.parse(fd) + self.root = _Element(self.tree.getroot()) + include_elems = self.scan_children("xi:include") + # First remove all includes found from the list + for elem in include_elems: + self.read_only = False + self.remove_child(elem) + self.read_only = read_only + # Then recursively add the included files. + for elem in include_elems: + path = os.path.abspath( + os.path.join( + os.getcwd(), os.path.dirname(self.filename), self.get(elem, "href") + ) + ) + logger.debug("Include file {}".format(path)) + self.read(path)
+ + +
+[docs] + def lock(self): + """ + A subclass is doing caching, we need to lock the tree structure + in order to avoid invalidating cache. + """ + self.locked = True
+ + +
+[docs] + def unlock(self): + self.locked = False
+ + +
+[docs] + def change_file(self, newfile, copy=False): + if copy: + new_case = os.path.dirname(newfile) + if not os.path.exists(new_case): + os.makedirs(new_case) + safe_copy(self.filename, newfile) + + self.tree = None + self.filename = newfile + self.read(newfile)
+ + + # + # API for individual node operations + # + +
+[docs] + def get(self, node, attrib_name, default=None): + return node.xml_element.get(attrib_name, default=default)
+ + +
+[docs] + def has(self, node, attrib_name): + return attrib_name in node.xml_element.attrib
+ + +
+[docs] + def set(self, node, attrib_name, value): + if self.get(node, attrib_name) != value: + expect( + not self.read_only, + "read_only: cannot set attrib[{}]={} for node {} in file {}".format( + attrib_name, value, self.name(node), self.filename + ), + ) + if attrib_name == "id": + expect( + not self.locked, + "locked: cannot set attrib[{}]={} for node {} in file {}".format( + attrib_name, value, self.name(node), self.filename + ), + ) + self.needsrewrite = True + return node.xml_element.set(attrib_name, value)
+ + +
+[docs] + def pop(self, node, attrib_name): + expect( + not self.read_only, + "read_only: cannot pop attrib[{}] for node {} in file {}".format( + attrib_name, self.name(node), self.filename + ), + ) + if attrib_name == "id": + expect( + not self.locked, + "locked: cannot pop attrib[{}] for node {} in file {}".format( + attrib_name, self.name(node), self.filename + ), + ) + self.needsrewrite = True + return node.xml_element.attrib.pop(attrib_name)
+ + +
+[docs] + def attrib(self, node): + # Return a COPY. We do not want clients making changes directly + return ( + None if node.xml_element.attrib is None else dict(node.xml_element.attrib) + )
+ + +
+[docs] + def set_name(self, node, name): + expect( + not self.read_only, + "read_only: set node name {} in file {}".format(name, self.filename), + ) + if node.xml_element.tag != name: + self.needsrewrite = True + node.xml_element.tag = name
+ + +
+[docs] + def set_text(self, node, text): + expect( + not self.read_only, + "read_only: set node text {} for node {} in file {}".format( + text, self.name(node), self.filename + ), + ) + if node.xml_element.text != text: + node.xml_element.text = text + self.needsrewrite = True
+ + +
+[docs] + def name(self, node): + return node.xml_element.tag
+ + +
+[docs] + def text(self, node): + return node.xml_element.text
+ + +
+[docs] + def add_child(self, node, root=None, position=None): + """ + Add element node to self at root + """ + expect( + not self.locked and not self.read_only, + "{}: cannot add child {} in file {}".format( + "read_only" if self.read_only else "locked", + self.name(node), + self.filename, + ), + ) + self.needsrewrite = True + root = root if root is not None else self.root + if position is not None: + root.xml_element.insert(position, node.xml_element) + else: + root.xml_element.append(node.xml_element)
+ + +
+[docs] + def copy(self, node): + return deepcopy(node)
+ + +
+[docs] + def remove_child(self, node, root=None): + expect( + not self.locked and not self.read_only, + "{}: cannot remove child {} in file {}".format( + "read_only" if self.read_only else "locked", + self.name(node), + self.filename, + ), + ) + self.needsrewrite = True + root = root if root is not None else self.root + root.xml_element.remove(node.xml_element)
+ + +
+[docs] + def make_child(self, name, attributes=None, root=None, text=None): + expect( + not self.locked and not self.read_only, + "{}: cannot make child {} in file {}".format( + "read_only" if self.read_only else "locked", name, self.filename + ), + ) + root = root if root is not None else self.root + self.needsrewrite = True + if attributes is None: + node = _Element(ET.SubElement(root.xml_element, name)) + else: + node = _Element(ET.SubElement(root.xml_element, name, attrib=attributes)) + + if text: + self.set_text(node, text) + + return node
+ + +
+[docs] + def make_child_comment(self, root=None, text=None): + expect( + not self.locked and not self.read_only, + "{}: cannot make child {} in file {}".format( + "read_only" if self.read_only else "locked", text, self.filename + ), + ) + root = root if root is not None else self.root + self.needsrewrite = True + et_comment = ET.Comment(text) + node = _Element(et_comment) + root.xml_element.append(node.xml_element) + return node
+ + +
+[docs] + def get_children(self, name=None, attributes=None, root=None): + """ + This is the critical function, its interface and performance are crucial. + + You can specify attributes={key:None} if you want to select children + with the key attribute but you don't care what its value is. + """ + root = root if root is not None else self.root + children = [] + for child in root.xml_element: + if name is not None: + if child.tag != name: + continue + + if attributes is not None: + if child.attrib is None: + continue + else: + match = True + for key, value in attributes.items(): + if key not in child.attrib: + match = False + break + elif value is not None: + if child.attrib[key] != value: + match = False + break + + if not match: + continue + + children.append(_Element(child)) + + return children
+ + +
+[docs] + def get_child(self, name=None, attributes=None, root=None, err_msg=None): + child = self.get_optional_child( + root=root, name=name, attributes=attributes, err_msg=err_msg + ) + expect( + child, + err_msg + if err_msg + else "Expected one child, found None with name '{}' and attribs '{}' in file {}".format( + name, attributes, self.filename + ), + ) + return child
+ + +
+[docs] + def get_optional_child(self, name=None, attributes=None, root=None, err_msg=None): + children = self.get_children(root=root, name=name, attributes=attributes) + if len(children) > 1: + # see if we can reduce to 1 based on attribute counts + if not attributes: + children = [c for c in children if not c.xml_element.attrib] + else: + attlen = len(attributes) + children = [c for c in children if len(c.xml_element.attrib) == attlen] + + expect( + len(children) <= 1, + err_msg + if err_msg + else "Multiple matches for name '{}' and attribs '{}' in file {}".format( + name, attributes, self.filename + ), + ) + return children[0] if children else None
+ + +
+[docs] + def get_element_text(self, element_name, attributes=None, root=None): + element_node = self.get_optional_child( + name=element_name, attributes=attributes, root=root + ) + if element_node is not None: + return self.text(element_node) + return None
+ + +
+[docs] + def set_element_text(self, element_name, new_text, attributes=None, root=None): + element_node = self.get_optional_child( + name=element_name, attributes=attributes, root=root + ) + if element_node is not None: + self.set_text(element_node, new_text) + return new_text + return None
+ + +
+[docs] + def to_string(self, node, method="xml", encoding="us-ascii"): + return ET.tostring(node.xml_element, method=method, encoding=encoding)
+ + + # + # API for operations over the entire file + # + +
+[docs] + def get_version(self): + version = self.get(self.root, "version") + version = 1.0 if version is None else float(version) + return version
+ + +
+[docs] + def check_timestamp(self): + """ + Returns True if timestamp matches what is expected + """ + timestamp_cache = self._FILEMAP[self.filename].modtime + if timestamp_cache != 0.0: + timestamp_file = os.path.getmtime(self.filename) + return timestamp_file == timestamp_cache + else: + return True
+ + +
+[docs] + def validate_timestamp(self): + timestamp_ok = self.check_timestamp() + expect( + timestamp_ok, + "File {} appears to have changed without a corresponding invalidation.".format( + self.filename + ), + )
+ + +
+[docs] + def write(self, outfile=None, force_write=False): + """ + Write an xml file from data in self + """ + if not (self.needsrewrite or force_write): + return + + self.validate_timestamp() + + if outfile is None: + outfile = self.filename + + logger.debug("write: " + outfile) + + xmlstr = self.get_raw_record() + + # xmllint provides a better format option for the output file + xmllint = find_executable("xmllint") + + if xmllint: + if isinstance(outfile, str): + run_cmd_no_fail( + "{} --format --output {} -".format(xmllint, outfile), + input_str=xmlstr, + ) + else: + outfile.write( + run_cmd_no_fail("{} --format -".format(xmllint), input_str=xmlstr) + ) + + else: + with open(outfile, "w") as xmlout: + xmlout.write(xmlstr) + + self._FILEMAP[self.filename] = self.CacheEntry( + self.tree, self.root, os.path.getmtime(self.filename) + ) + + self.needsrewrite = False
+ + +
+[docs] + def scan_child(self, nodename, attributes=None, root=None): + """ + Get an xml element matching nodename with optional attributes. + + Error unless exactly one match. + """ + + nodes = self.scan_children(nodename, attributes=attributes, root=root) + + expect( + len(nodes) == 1, + "Incorrect number of matches, {:d}, for nodename '{}' and attrs '{}' in file '{}'".format( + len(nodes), nodename, attributes, self.filename + ), + ) + return nodes[0]
+ + +
+[docs] + def scan_optional_child(self, nodename, attributes=None, root=None): + """ + Get an xml element matching nodename with optional attributes. + + Return None if no match. + """ + nodes = self.scan_children(nodename, attributes=attributes, root=root) + + expect( + len(nodes) <= 1, + "Multiple matches for nodename '{}' and attrs '{}' in file '{}', found {} matches".format( + nodename, attributes, self.filename, len(nodes) + ), + ) + return nodes[0] if nodes else None
+ + +
+[docs] + def scan_children(self, nodename, attributes=None, root=None): + + logger.debug( + "(get_nodes) Input values: {}, {}, {}, {}".format( + self.__class__.__name__, nodename, attributes, root + ) + ) + + if root is None: + root = self.root + nodes = [] + + namespace = {"xi": "http://www.w3.org/2001/XInclude"} + + xpath = ".//" + (nodename if nodename else "") + + if attributes: + # xml.etree has limited support for xpath and does not allow more than + # one attribute in an xpath query so we query seperately for each attribute + # and create a result with the intersection of those lists + + for key, value in attributes.items(): + if value is None: + xpath = ".//{}[@{}]".format(nodename, key) + else: + xpath = ".//{}[@{}='{}']".format(nodename, key, value) + + logger.debug("xpath is {}".format(xpath)) + + try: + newnodes = root.xml_element.findall(xpath, namespace) + except Exception as e: + expect( + False, "Bad xpath search term '{}', error: {}".format(xpath, e) + ) + + if not nodes: + nodes = newnodes + else: + for node in nodes[:]: + if node not in newnodes: + nodes.remove(node) + if not nodes: + return [] + + else: + logger.debug("xpath: {}".format(xpath)) + nodes = root.xml_element.findall(xpath, namespace) + + logger.debug("Returning {} nodes ({})".format(len(nodes), nodes)) + + return [_Element(node) for node in nodes]
+ + +
+[docs] + def get_value( + self, item, attribute=None, resolved=True, subgroup=None + ): # pylint: disable=unused-argument + """ + get_value is expected to be defined by the derived classes, if you get here + the value was not found in the class. + """ + logger.debug("Get Value for " + item) + return None
+ + +
+[docs] + def get_values( + self, vid, attribute=None, resolved=True, subgroup=None + ): # pylint: disable=unused-argument + logger.debug("Get Values for " + vid) + return []
+ + +
+[docs] + def set_value( + self, vid, value, subgroup=None, ignore_type=True + ): # pylint: disable=unused-argument + """ + ignore_type is not used in this flavor + """ + valnodes = self.get_children(vid) + for node in valnodes: + self.set_text(node, value) + + return value if valnodes else None
+ + +
+
[docs] + def get_resolved_value(self, raw_value, allow_unresolved_envvars=False): + """ + A value in the xml file may contain references to other xml + variables or to environment variables. These are referred to in + the perl style with $name and $ENV{name}. + + >>> obj = GenericXML() + >>> os.environ["FOO"] = "BAR" + >>> os.environ["BAZ"] = "BARF" + >>> obj.get_resolved_value("one $ENV{FOO} two $ENV{BAZ} three") + 'one BAR two BARF three' + >>> obj.get_resolved_value("2 + 3 - 1") + '4' + >>> obj.get_resolved_value("0001-01-01") + '0001-01-01' + >>> obj.get_resolved_value("$SHELL{echo hi}") == 'hi' + True + """ + logger.debug("raw_value {}".format(raw_value)) + reference_re = re.compile(r"\${?(\w+)}?") + env_ref_re = re.compile(r"\$ENV\{(\w+)\}") + shell_ref_re = re.compile(r"\$SHELL\{([^}]+)\}") + math_re = re.compile(r"\s[+-/*]\s") + item_data = raw_value + + if item_data is None: + return None + + if not isinstance(item_data, str): + return item_data + + for m in env_ref_re.finditer(item_data): + logger.debug("look for {} in env".format(item_data)) + env_var = m.groups()[0] + env_var_exists = env_var in os.environ + if not allow_unresolved_envvars: + expect(env_var_exists, "Undefined env var '{}'".format(env_var)) + if env_var_exists: + item_data = item_data.replace(m.group(), os.environ[env_var]) + + for s in shell_ref_re.finditer(item_data): + logger.debug("execute {} in shell".format(item_data)) + shell_cmd = s.groups()[0] + item_data = item_data.replace(s.group(), run_cmd_no_fail(shell_cmd)) + + for m in reference_re.finditer(item_data): + var = m.groups()[0] + logger.debug("find: {}".format(var)) + # The overridden versions of this method do not simply return None + # so the pylint should not be flagging this + ref = self.get_value(var) # pylint: disable=assignment-from-none + + if ref is not None: + logger.debug("resolve: " + str(ref)) + item_data = item_data.replace( + m.group(), self.get_resolved_value(str(ref)) + ) + elif var == "CIMEROOT": + cimeroot = get_cime_root() + item_data = item_data.replace(m.group(), cimeroot) + elif var == "SRCROOT": + srcroot = get_src_root() + item_data = item_data.replace(m.group(), srcroot) + elif var == "USER": + item_data = item_data.replace(m.group(), getpass.getuser()) + + if math_re.search(item_data): + try: + tmp = eval(item_data) + except Exception: + tmp = item_data + item_data = str(tmp) + + return item_data
+ + +
+[docs] + def validate_xml_file(self, filename, schema): + """ + validate an XML file against a provided schema file using xmllint + """ + expect(os.path.isfile(filename), "xml file not found {}".format(filename)) + expect(os.path.isfile(schema), "schema file not found {}".format(schema)) + xmllint = find_executable("xmllint") + + expect( + xmllint and os.path.isfile(xmllint), + " xmllint not found in PATH, xmllint is required for cime. PATH={}".format( + os.environ["PATH"] + ), + ) + + logger.debug("Checking file {} against schema {}".format(filename, schema)) + run_cmd_no_fail( + "{} --xinclude --noout --schema {} {}".format(xmllint, schema, filename) + )
+ + +
+[docs] + def get_raw_record(self, root=None): + logger.debug("writing file {}".format(self.filename)) + if root is None: + root = self.root + try: + xmlstr = ET.tostring(root.xml_element) + except ET.ParseError as e: + ET.dump(root.xml_element) + expect( + False, + "Could not write file {}, xml formatting error '{}'".format( + self.filename, e + ), + ) + return xmlstr
+ + +
+[docs] + def get_id(self): + xmlid = self.get(self.root, "id") + if xmlid is not None: + return xmlid + return self.name(self.root)
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/grids.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/grids.html new file mode 100644 index 00000000000..acad037c6a2 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/grids.html @@ -0,0 +1,983 @@ + + + + + + CIME.XML.grids — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.grids

+"""
+Common interface to XML files which follow the grids format.
+This is not an abstract class - but inherits from the abstract class GenericXML
+"""
+
+from collections import OrderedDict
+from CIME.XML.standard_module_setup import *
+from CIME.XML.files import Files
+from CIME.XML.generic_xml import GenericXML
+
+logger = logging.getLogger(__name__)
+
+# Separator character for multiple grids within a single component (currently just used
+# for GLC when there are multiple ice sheet grids). It is important that this character
+# NOT appear in any file names - or anywhere in the path of directories holding input
+# data.
+GRID_SEP = ":"
+
+
+
+[docs] +class Grids(GenericXML): + def __init__(self, infile=None, files=None, comp_interface=None): + if files is None: + files = Files(comp_interface=comp_interface) + if infile is None: + infile = files.get_value("GRIDS_SPEC_FILE") + logger.debug(" Grid specification file is {}".format(infile)) + schema = files.get_schema("GRIDS_SPEC_FILE") + expect( + os.path.isfile(infile) and os.access(infile, os.R_OK), + f" grid file not found {infile}", + ) + try: + GenericXML.__init__(self, infile, schema) + except: + # Getting false failures on izumi, change this to a warning + logger.warning("Schema validity test fails for {}".format(infile)) + + self._version = self.get_version() + self._comp_gridnames = self._get_grid_names() + + def _get_grid_names(self): + grids = self.get_child("grids") + model_grid_defaults = self.get_child("model_grid_defaults", root=grids) + nodes = self.get_children("grid", root=model_grid_defaults) + gridnames = [] + for node in nodes: + gn = self.get(node, "name") + if gn not in gridnames: + gridnames.append(gn) + if "mask" not in gridnames: + gridnames.append("mask") + + return gridnames + +
+[docs] + def get_grid_info(self, name, compset, driver): + """ + Find the matching grid node + + Returns a dictionary containing relevant grid variables: domains, gridmaps, etc. + """ + gridinfo = {} + atmnlev = None + lndnlev = None + + # mechanism to specify atm levels + atmlevregex = re.compile(r"([^_]+)z(\d+)(.*)$") + levmatch = re.match(atmlevregex, name) + if levmatch: + atmnlev = levmatch.group(2) + name = levmatch.group(1) + levmatch.group(3) + + # mechanism to specify lnd levels + lndlevregex = re.compile(r"(.*_)([^_]+)z(\d+)(_[^m].*)$") + levmatch = re.match(lndlevregex, name) + if levmatch: + lndnlev = levmatch.group(3) + name = levmatch.group(1) + levmatch.group(2) + levmatch.group(4) + + # determine component_grids dictionary and grid longname + lname = self._read_config_grids(name, compset, atmnlev, lndnlev) + gridinfo["GRID"] = lname + component_grids = _ComponentGrids(lname) + + # determine domains given component_grids + domains = self._get_domains(component_grids, atmlevregex, lndlevregex, driver) + + gridinfo.update(domains) + + # determine gridmaps given component_grids + gridmaps = self._get_gridmaps(component_grids, driver, compset) + gridinfo.update(gridmaps) + + component_grids.check_num_elements(gridinfo) + + return gridinfo
+ + + def _read_config_grids(self, name, compset, atmnlev, lndnlev): + """ + read config_grids.xml with version 2.0 schema + + Returns a grid long name given the alias ('name' argument) + """ + model_grid = {} + for comp_gridname in self._comp_gridnames: + model_grid[comp_gridname] = None + + # (1) set array of component grid defaults that match current compset + grids_node = self.get_child("grids") + grid_defaults_node = self.get_child("model_grid_defaults", root=grids_node) + for grid_node in self.get_children("grid", root=grid_defaults_node): + name_attrib = self.get(grid_node, "name") + compset_attrib = self.get(grid_node, "compset") + compset_match = re.search(compset_attrib, compset) + if compset_match is not None: + model_grid[name_attrib] = self.text(grid_node) + + # (2)loop over all of the "model grid" nodes and determine is there an alias match with the + # input grid name - if there is an alias match determine if the "compset" and "not_compset" + # regular expression attributes match the match the input compset + + model_gridnodes = self.get_children("model_grid", root=grids_node) + model_gridnode = None + foundalias = False + for node in model_gridnodes: + alias = self.get(node, "alias") + if alias == name: + foundalias = True + foundcompset = False + compset_attrib = self.get(node, "compset") + not_compset_attrib = self.get(node, "not_compset") + if compset_attrib and not_compset_attrib: + compset_match = re.search(compset_attrib, compset) + not_compset_match = re.search(not_compset_attrib, compset) + if compset_match is not None and not_compset_match is None: + foundcompset = True + model_gridnode = node + logger.debug( + "Found match for {} with compset_match {} and not_compset_match {}".format( + alias, compset_attrib, not_compset_attrib + ) + ) + break + elif compset_attrib: + compset_match = re.search(compset_attrib, compset) + if compset_match is not None: + foundcompset = True + model_gridnode = node + logger.debug( + "Found match for {} with 
compset_match {}".format( + alias, compset_attrib + ) + ) + break + elif not_compset_attrib: + not_compset_match = re.search(not_compset_attrib, compset) + if not_compset_match is None: + foundcompset = True + model_gridnode = node + logger.debug( + "Found match for {} with not_compset_match {}".format( + alias, not_compset_attrib + ) + ) + break + else: + foundcompset = True + model_gridnode = node + logger.debug("Found match for {}".format(alias)) + break + expect(foundalias, "no alias {} defined".format(name)) + # if no match is found in config_grids.xml - exit + expect( + foundcompset, "grid alias {} not valid for compset {}".format(name, compset) + ) + + # for the match - find all of the component grid settings + grid_nodes = self.get_children("grid", root=model_gridnode) + for grid_node in grid_nodes: + name = self.get(grid_node, "name") + value = self.text(grid_node) + if model_grid[name] != "null": + model_grid[name] = value + mask_node = self.get_optional_child("mask", root=model_gridnode) + if mask_node is not None: + model_grid["mask"] = self.text(mask_node) + else: + model_grid["mask"] = model_grid["ocnice"] + + # determine component grids and associated required domains and gridmaps + # TODO: this should be in XML, not here + prefix = { + "atm": "a%", + "lnd": "l%", + "ocnice": "oi%", + "rof": "r%", + "wav": "w%", + "glc": "g%", + "mask": "m%", + "iac": "z%", + } + lname = "" + for component_gridname in self._comp_gridnames: + if lname: + lname = lname + "_" + prefix[component_gridname] + else: + lname = prefix[component_gridname] + if model_grid[component_gridname] is not None: + lname += model_grid[component_gridname] + if component_gridname == "atm" and atmnlev is not None: + if not ("a{:n}ull" in lname): + lname += "z" + atmnlev + + elif component_gridname == "lnd" and lndnlev is not None: + if not ("l{:n}ull" in lname): + lname += "z" + lndnlev + + else: + lname += "null" + return lname + + def _get_domains(self, component_grids, atmlevregex, 
lndlevregex, driver): + """determine domains dictionary for config_grids.xml v2 schema""" + domains = {} + mask_name = component_grids.get_comp_gridname("mask") + + for comp_name in component_grids.get_compnames(include_mask=True): + for grid_name in component_grids.get_comp_gridlist(comp_name): + # Determine grid name with no nlev suffix if there is one + grid_name_nonlev = grid_name + levmatch = re.match(atmlevregex, grid_name) + if levmatch: + grid_name_nonlev = levmatch.group(1) + levmatch.group(3) + levmatch = re.match(lndlevregex, grid_name) + if levmatch: + grid_name_nonlev = ( + levmatch.group(1) + levmatch.group(2) + levmatch.group(4) + ) + self._get_domains_for_one_grid( + domains=domains, + comp_name=comp_name.upper(), + grid_name=grid_name, + grid_name_nonlev=grid_name_nonlev, + mask_name=mask_name, + driver=driver, + ) + + if driver == "nuopc": + # Obtain the root node for the domain entry that sets the mask + if domains["MASK_GRID"] != "null": + mask_domain_node = self.get_optional_child( + "domain", + attributes={"name": domains["MASK_GRID"]}, + root=self.get_child("domains"), + ) + # Now obtain the mesh for the mask for the domain node for that component grid + mesh_node = self.get_child("mesh", root=mask_domain_node) + domains["MASK_MESH"] = self.text(mesh_node) + + return domains + + def _get_domains_for_one_grid( + self, domains, comp_name, grid_name, grid_name_nonlev, mask_name, driver + ): + """Get domain information for the given grid, adding elements to the domains dictionary + + Args: + - domains: dictionary of values, modified in place + - comp_name: uppercase abbreviated name of component (e.g., "ATM") + - grid_name: name of this grid + - grid_name_nonlev: same as grid_name but with any level information stripped out + - mask_name: the mask being used in this case + - driver: the name of the driver being used in this case + """ + domain_node = self.get_optional_child( + "domain", + attributes={"name": grid_name_nonlev}, + 
root=self.get_child("domains"), + ) + if not domain_node: + domain_root = self.get_optional_child("domains", {"driver": driver}) + if domain_root: + domain_node = self.get_optional_child( + "domain", attributes={"name": grid_name_nonlev}, root=domain_root + ) + if domain_node: + # determine xml variable name + if not "PTS_LAT" in domains: + domains["PTS_LAT"] = "-999.99" + if not "PTS_LON" in domains: + domains["PTS_LON"] = "-999.99" + if not comp_name == "MASK": + if self.get_element_text("nx", root=domain_node): + # If there are multiple grids for this component, then the component + # _NX and _NY values won't end up being used, so we simply set them to 1 + _add_grid_info( + domains, + comp_name + "_NX", + int(self.get_element_text("nx", root=domain_node)), + value_for_multiple=1, + ) + _add_grid_info( + domains, + comp_name + "_NY", + int(self.get_element_text("ny", root=domain_node)), + value_for_multiple=1, + ) + elif self.get_element_text("lon", root=domain_node): + # No need to call _add_grid_info here because, for multiple grids, the + # end result will be the same as the hard-coded 1 used here + domains[comp_name + "_NX"] = 1 + domains[comp_name + "_NY"] = 1 + domains["PTS_LAT"] = self.get_element_text("lat", root=domain_node) + domains["PTS_LON"] = self.get_element_text("lon", root=domain_node) + else: + # No need to call _add_grid_info here because, for multiple grids, the + # end result will be the same as the hard-coded 1 used here + domains[comp_name + "_NX"] = 1 + domains[comp_name + "_NY"] = 1 + + if driver == "mct" or driver == "moab": + # mct + file_nodes = self.get_children("file", root=domain_node) + domain_file = "" + for file_node in file_nodes: + grid_attrib = self.get(file_node, "grid") + mask_attrib = self.get(file_node, "mask") + if grid_attrib is not None and mask_attrib is not None: + grid_match = re.search(comp_name.lower(), grid_attrib) + mask_match = False + if mask_name is not None: + mask_match = mask_name == mask_attrib + if 
grid_match is not None and mask_match: + domain_file = self.text(file_node) + elif grid_attrib is not None: + grid_match = re.search(comp_name.lower(), grid_attrib) + if grid_match is not None: + domain_file = self.text(file_node) + elif mask_attrib is not None: + mask_match = mask_name == mask_attrib + if mask_match: + domain_file = self.text(file_node) + if domain_file: + _add_grid_info( + domains, + comp_name + "_DOMAIN_FILE", + os.path.basename(domain_file), + ) + path = os.path.dirname(domain_file) + if len(path) > 0: + _add_grid_info(domains, comp_name + "_DOMAIN_PATH", path) + + if driver == "nuopc": + if not comp_name == "MASK": + mesh_nodes = self.get_children("mesh", root=domain_node) + mesh_file = "" + for mesh_node in mesh_nodes: + mesh_file = self.text(mesh_node) + if mesh_file: + _add_grid_info(domains, comp_name + "_DOMAIN_MESH", mesh_file) + if comp_name == "LND" or comp_name == "ATM": + # Note: ONLY want to define PTS_DOMAINFILE for land and ATM + file_node = self.get_optional_child("file", root=domain_node) + if file_node is not None and self.text(file_node) != "unset": + domains["PTS_DOMAINFILE"] = self.text(file_node) + # set up dictionary of domain files for every component + _add_grid_info(domains, comp_name + "_GRID", grid_name) + + def _get_gridmaps(self, component_grids, driver, compset): + """Set all mapping files for config_grids.xml v2 schema + + If a component (e.g., GLC) has multiple grids, then each mapping file variable for + that component will be a colon-delimited list with the appropriate number of + elements. + + If a given gridmap is required but not given explicitly, then its value will be + either "unset" or "idmap". Even in the case of a component with multiple grids + (e.g., GLC), there will only be a single "unset" or "idmap" value. (We do not + currently handle the possibility that some grids will have an "idmap" value while + others have an explicit mapping file. 
So it is currently an error for "idmap" to + appear in a mapping file variable for a component with multiple grids; this will + be checked elsewhere.) + + """ + gridmaps = {} + + # (1) determine values of gridmaps for target grid + # + # Exclude the ice component from the list of compnames because it is assumed to be + # on the same grid as ocn, so doesn't have any gridmaps of its own + compnames = component_grids.get_compnames( + include_mask=False, exclude_comps=["ice"] + ) + for idx, compname in enumerate(compnames): + for other_compname in compnames[idx + 1 :]: + for gridvalue in component_grids.get_comp_gridlist(compname): + for other_gridvalue in component_grids.get_comp_gridlist( + other_compname + ): + self._get_gridmaps_for_one_grid_pair( + gridmaps=gridmaps, + driver=driver, + compname=compname, + other_compname=other_compname, + gridvalue=gridvalue, + other_gridvalue=other_gridvalue, + ) + + # (2) set all possibly required gridmaps to 'idmap' for mct and 'unset/idmap' for + # nuopc, if they aren't already set + required_gridmaps_node = self.get_child("required_gridmaps") + tmp_gridmap_nodes = self.get_children( + "required_gridmap", root=required_gridmaps_node + ) + required_gridmap_nodes = [] + for node in tmp_gridmap_nodes: + compset_att = self.get(node, "compset") + not_compset_att = self.get(node, "not_compset") + if ( + compset_att + and not compset_att in compset + or not_compset_att + and not_compset_att in compset + ): + continue + required_gridmap_nodes.append(node) + mapname = self.text(node) + if mapname not in gridmaps: + gridmaps[mapname] = _get_unset_gridmap_value( + mapname, component_grids, driver + ) + + # (3) check that all necessary maps are not set to idmap + # + # NOTE(wjs, 2021-05-18) This could probably be combined with the above loop, but + # I'm avoiding making that change now due to fear of breaking this complex logic + # that isn't covered by unit tests. 
+ atm_gridvalue = component_grids.get_comp_gridname("atm") + for node in required_gridmap_nodes: + comp1_name = _strip_grid_from_name(self.get(node, "grid1")) + comp2_name = _strip_grid_from_name(self.get(node, "grid2")) + grid1_value = component_grids.get_comp_gridname(comp1_name) + grid2_value = component_grids.get_comp_gridname(comp2_name) + if grid1_value is not None and grid2_value is not None: + if ( + grid1_value != grid2_value + and grid1_value != "null" + and grid2_value != "null" + ): + map_ = gridmaps[self.text(node)] + if map_ == "idmap": + if comp1_name == "ocn" and grid1_value == atm_gridvalue: + logger.debug( + "ocn_grid == atm_grid so this is not an idmap error" + ) + else: + if driver == "nuopc": + gridmaps[self.text(node)] = "unset" + else: + logger.warning( + "Warning: missing non-idmap {} for {}, {} and {} {} ".format( + self.text(node), + comp1_name, + grid1_value, + comp2_name, + grid2_value, + ) + ) + + return gridmaps + + def _get_gridmaps_for_one_grid_pair( + self, gridmaps, driver, compname, other_compname, gridvalue, other_gridvalue + ): + """Get gridmap information for one pair of grids, adding elements to the gridmaps dictionary + + Args: + - gridmaps: dictionary of values, modified in place + - driver: the name of the driver being used in this case + - compname: abbreviated name of component (e.g., "atm") + - other_compname: abbreviated name of other component (e.g., "ocn") + - gridvalue: name of grid for compname + - other_gridvalue: name of grid for other_compname + """ + gridmaps_roots = self.get_children("gridmaps") + gridmap_nodes = [] + for root in gridmaps_roots: + gmdriver = self.get(root, "driver") + if gmdriver is None or gmdriver == driver: + gridname = compname + "_grid" + other_gridname = other_compname + "_grid" + gridmap_nodes.extend( + self.get_children( + "gridmap", + root=root, + attributes={ + gridname: gridvalue, + other_gridname: other_gridvalue, + }, + ) + ) + + # We first create a dictionary of gridmaps just for 
this pair of grids, then later + # add these grids to the main gridmaps dict using _add_grid_info. The reason for + # doing this in two steps, using the intermediate these_gridmaps variable, is: If + # there are multiple definitions of a given gridmap for a given grid pair, we just + # want to use one of them, rather than adding them all to the final gridmaps dict. + # (This may not occur in practice, but the logic allowed for this possibility + # before extending it to handle multiple grids for a given component, so we are + # leaving this possibility in place.) + these_gridmaps = {} + for gridmap_node in gridmap_nodes: + expect( + len(self.attrib(gridmap_node)) == 2, + " Bad attribute count in gridmap node %s" % self.attrib(gridmap_node), + ) + map_nodes = self.get_children("map", root=gridmap_node) + for map_node in map_nodes: + name = self.get(map_node, "name") + value = self.text(map_node) + if name is not None and value is not None: + these_gridmaps[name] = value + logger.debug(" gridmap name,value are {}: {}".format(name, value)) + + for name, value in these_gridmaps.items(): + _add_grid_info(gridmaps, name, value) + +
+[docs] + def print_values(self, long_output=None): + # write out help message + helptext = self.get_element_text("help") + logger.info("{} ".format(helptext)) + + logger.info( + "{:5s}-------------------------------------------------------------".format( + "" + ) + ) + logger.info("{:10s} default component grids:\n".format("")) + logger.info(" component compset value ") + logger.info( + "{:5s}-------------------------------------------------------------".format( + "" + ) + ) + default_nodes = self.get_children( + "model_grid_defaults", root=self.get_child("grids") + ) + for default_node in default_nodes: + grid_nodes = self.get_children("grid", root=default_node) + for grid_node in grid_nodes: + name = self.get(grid_node, "name") + compset = self.get(grid_node, "compset") + value = self.text(grid_node) + logger.info(" {:6s} {:15s} {:10s}".format(name, compset, value)) + logger.info( + "{:5s}-------------------------------------------------------------".format( + "" + ) + ) + + domains = {} + if long_output is not None: + domain_nodes = self.get_children("domain", root=self.get_child("domains")) + for domain_node in domain_nodes: + name = self.get(domain_node, "name") + if name == "null": + continue + desc = self.text(self.get_child("desc", root=domain_node)) + files = "" + file_nodes = self.get_children("file", root=domain_node) + for file_node in file_nodes: + filename = self.text(file_node) + mask_attrib = self.get(file_node, "mask") + grid_attrib = self.get(file_node, "grid") + files += "\n " + filename + if mask_attrib or grid_attrib: + files += " (only for" + if mask_attrib: + files += " mask: " + mask_attrib + if grid_attrib: + files += " grid match: " + grid_attrib + if mask_attrib or grid_attrib: + files += ")" + domains[name] = "\n {} with domain file(s): {} ".format( + desc, files + ) + + model_grid_nodes = self.get_children("model_grid", root=self.get_child("grids")) + for model_grid_node in model_grid_nodes: + alias = self.get(model_grid_node, "alias") 
+ compset = self.get(model_grid_node, "compset") + not_compset = self.get(model_grid_node, "not_compset") + restriction = "" + if compset: + restriction += "only for compsets that are {} ".format(compset) + if not_compset: + restriction += "only for compsets that are not {} ".format(not_compset) + if restriction: + logger.info("\n alias: {} ({})".format(alias, restriction)) + else: + logger.info("\n alias: {}".format(alias)) + grid_nodes = self.get_children("grid", root=model_grid_node) + grids = "" + gridnames = [] + for grid_node in grid_nodes: + gridnames.append(self.text(grid_node)) + grids += self.get(grid_node, "name") + ":" + self.text(grid_node) + " " + logger.info(" non-default grids are: {}".format(grids)) + mask_nodes = self.get_children("mask", root=model_grid_node) + for mask_node in mask_nodes: + logger.info(" mask is: {}".format(self.text(mask_node))) + if long_output is not None: + gridnames = set(gridnames) + for gridname in gridnames: + if gridname != "null": + logger.info(" {}".format(domains[gridname]))
+
+ + + +# ------------------------------------------------------------------------ +# Helper class: _ComponentGrids +# ------------------------------------------------------------------------ + + +class _ComponentGrids(object): + """This class stores the grid names for each component and allows retrieval in a variety + of formats + + """ + + # Mappings from component names to the single characters used in the grid long name. + # Ordering is potentially important here, because it will determine the order in the + # list returned by get_compnames, which will in turn impact ordering of components in + # iterations. + # + # TODO: this should be in XML, not here + _COMP_NAMES = OrderedDict( + [ + ("atm", "a"), + ("lnd", "l"), + ("ocn", "o"), + ("ice", "i"), + ("rof", "r"), + ("glc", "g"), + ("wav", "w"), + ("iac", "z"), + ("mask", "m"), + ] + ) + + def __init__(self, grid_longname): + self._comp_gridnames = self._get_component_grids_from_longname(grid_longname) + + def _get_component_grids_from_longname(self, name): + """Return a dictionary mapping each compname to its gridname""" + grid_re = re.compile(r"[_]{0,1}[a-z]{1,2}%") + grids = grid_re.split(name)[1:] + prefixes = re.findall("[a-z]+%", name) + component_grids = {} + i = 0 + while i < len(grids): + # In the following, [:-1] strips the trailing '%' + prefix = prefixes[i][:-1] + grid = grids[i] + component_grids[prefix] = grid + i += 1 + component_grids["i"] = component_grids["oi"] + component_grids["o"] = component_grids["oi"] + del component_grids["oi"] + + result = {} + for compname, prefix in self._COMP_NAMES.items(): + result[compname] = component_grids[prefix] + return result + + def get_compnames(self, include_mask=True, exclude_comps=None): + """Return a list of all component names (lower case) + + This can be used for iterating through the grid names + + If include_mask is True (the default), then 'mask' is included in the list of + returned component names. 
+ + If exclude_comps is given, then it should be a list of component names to exclude + from the returned list. For example, if it is ['ice', 'rof'], then 'ice' and 'rof' + are NOT included in the returned list. + + """ + if exclude_comps is None: + all_exclude_comps = [] + else: + all_exclude_comps = exclude_comps + if not include_mask: + all_exclude_comps.append("mask") + result = [k for k in self._COMP_NAMES if k not in all_exclude_comps] + return result + + def get_comp_gridname(self, compname): + """Return the grid name for the given component name""" + return self._comp_gridnames[compname] + + def get_comp_gridlist(self, compname): + """Return a list of individual grids for the given component name + + Usually this list has only a single grid (so the return value will be a + single-element list like ["0.9x1.25"]). However, the glc component (glc) can have + multiple grids, separated by GRID_SEP. In this situation, the return value for + GLC will have multiple elements. + + """ + gridname = self.get_comp_gridname(compname) + return gridname.split(GRID_SEP) + + def get_comp_numgrids(self, compname): + """Return the number of grids for the given component name + + Usually this is one, but the glc component can have multiple grids. + """ + return len(self.get_comp_gridlist(compname)) + + def get_gridmap_total_nmaps(self, gridmap_name): + """Given a gridmap_name like ATM2OCN_FMAPNAME, return the total number of maps needed between the two components + + In most cases, this will be 1, but if either or both components has multiple grids, + then this will be the product of the number of grids for each component. 
+ + """ + comp1_name, comp2_name = _get_compnames_from_mapname(gridmap_name) + comp1_ngrids = self.get_comp_numgrids(comp1_name) + comp2_ngrids = self.get_comp_numgrids(comp2_name) + total_nmaps = comp1_ngrids * comp2_ngrids + return total_nmaps + + def check_num_elements(self, gridinfo): + """Check each member of gridinfo to make sure that it has the correct number of elements + + gridinfo is a dict mapping variable names to their values + + """ + for compname in self.get_compnames(include_mask=False): + for name, value in gridinfo.items(): + if not isinstance(value, str): + # Non-string values only hold a single element, regardless of how many + # grids there are for a component. This is enforced in _add_grid_info + # by requiring value_for_multiple to be provided for non-string + # values. For now, it is *only* those non-string values that only + # carry a single element regardless of the number of grids. If, in the + # future, other variables are added with this property, then this + # logic would need to be extended to skip those variables as well. + # (This could be done by hard-coding some suffixes to skip here. A + # better alternative could be to do away with the value_for_multiple + # argument in _add_grid_info, instead setting a module-level + # dictionary mapping suffixes to their value_for_multiple, and + # referencing that dictionary in both _add_grid_info and here. For + # example: _VALUE_FOR_MULTIPLE = {'_NX': 1, '_NY': 1, '_FOO': 'bar'}.) 
+ continue + name_lower = name.lower() + if name_lower.startswith(compname): + if name_lower.startswith(compname + "_"): + expected_num_elements = self.get_comp_numgrids(compname) + elif name_lower.startswith(compname + "2"): + expected_num_elements = self.get_gridmap_total_nmaps(name) + else: + # We don't know what to expect if the character after compname is + # neither "_" nor "2" + continue + if value.lower() == "unset": + # It's okay for there to be a single "unset" value even for a + # component with multiple grids + continue + num_elements = len(value.split(GRID_SEP)) + expect( + num_elements == expected_num_elements, + "Unexpected number of colon-delimited elements in {}: {} (expected {} elements)".format( + name, value, expected_num_elements + ), + ) + + +# ------------------------------------------------------------------------ +# Some helper functions +# ------------------------------------------------------------------------ + + +def _get_compnames_from_mapname(mapname): + """Given a mapname like ATM2OCN_FMAPNAME, return the two component names + + The returned component names are lowercase. 
So, for example, if mapname is + ATM2OCN_FMAPNAME, then this function returns a tuple ('atm', 'ocn') + + """ + comp1_name = mapname[0:3].lower() + comp2_name = mapname[4:7].lower() + return comp1_name, comp2_name + + +def _strip_grid_from_name(name): + """Given some string 'name', strip trailing '_grid' from name and return result + + Raises an exception if 'name' doesn't end with '_grid' + """ + expect(name.endswith("_grid"), "{} does not end with _grid".format(name)) + return name[: -len("_grid")] + + +def _add_grid_info(info_dict, key, value, value_for_multiple=None): + """Add a value to info_dict, handling the possibility of multiple grids for a component + + In the basic case, where key is not yet present in info_dict, this is equivalent to + setting: + info_dict[key] = value + + However, if the given key is already present, then instead of overriding the old + value, we instead concatenate, separated by GRID_SEP. This is used in case there are + multiple grids for a given component. An exception to this behavior is: If + value_for_multiple is specified (not None) then, if we find an existing value, then we + instead replace the value with the value given by value_for_multiple. + + value_for_multiple must be specified if value is not a string + + """ + if not isinstance(value, str): + expect( + value_for_multiple is not None, + "_add_grid_info: value_for_multiple must be specified if value is not a string", + ) + if key in info_dict: + if value_for_multiple is not None: + info_dict[key] = value_for_multiple + else: + info_dict[key] += GRID_SEP + value + else: + info_dict[key] = value + + +def _get_unset_gridmap_value(mapname, component_grids, driver): + """Return the appropriate setting for a given gridmap that has not been explicitly set + + This will be 'unset' or 'idmap' depending on various parameters. 
+ """ + if driver == "nuopc": + comp1_name, comp2_name = _get_compnames_from_mapname(mapname) + grid1 = component_grids.get_comp_gridname(comp1_name) + grid2 = component_grids.get_comp_gridname(comp2_name) + if grid1 == grid2: + if grid1 != "null" and grid2 != "null": + gridmap = "idmap" + else: + gridmap = "unset" + else: + gridmap = "unset" + else: + gridmap = "idmap" + + return gridmap +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/headers.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/headers.html new file mode 100644 index 00000000000..c8d32b78ed1 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/headers.html @@ -0,0 +1,156 @@ + + + + + + CIME.XML.headers — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.headers

+"""
+Interface to the config_headers.xml file.  This class inherits from GenericXML.py
+"""
+from CIME.XML.standard_module_setup import *
+
+from CIME.XML.generic_xml import GenericXML
+from CIME.XML.files import Files
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class Headers(GenericXML): + def __init__(self, infile=None): + """ + initialize an object + + >>> files = Files() + >>> files.get_value('CASEFILE_HEADERS',resolved=False) + '$CIMEROOT/CIME/data/config/config_headers.xml' + """ + if infile is None: + files = Files() + infile = files.get_value("CASEFILE_HEADERS", resolved=True) + super(Headers, self).__init__(infile) + +
+[docs] + def get_header_node(self, fname): + fnode = self.get_child("file", attributes={"name": fname}) + headernode = self.get_child("header", root=fnode) + return headernode
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/inputdata.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/inputdata.html new file mode 100644 index 00000000000..310b80eff96 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/inputdata.html @@ -0,0 +1,202 @@ + + + + + + CIME.XML.inputdata — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.inputdata

+"""
+Interface to the config_inputdata.xml file.  This class inherits from GenericXML.py
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.XML.generic_xml import GenericXML
+from CIME.XML.files import Files
+from CIME.utils import expect
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs]
+class Inputdata(GenericXML):
+    def __init__(self, infile=None, files=None):
+        """
+        initialize an object from the input data specification file
+        """
+        if files is None:
+            files = Files()
+        if infile is None:
+            infile = files.get_value("INPUTDATA_SPEC_FILE")
+        schema = files.get_schema("INPUTDATA_SPEC_FILE")
+        logger.debug("DEBUG: infile is {}".format(infile))
+        GenericXML.__init__(self, infile, schema=schema)
+
+        self._servernode = None
+
+[docs] + def get_next_server(self, attributes=None): + protocol = None + address = None + user = "" + passwd = "" + chksum_file = None + ic_filepath = None + servernodes = self.get_children("server", attributes=attributes) + + # inventory is a CSV list of available data files and the valid date for each + # expected format is pathtofile,YYYY-MM-DD HH:MM:SS + # currently only used for NEON tower data + inventory = None + if not attributes: + servernodes = [x for x in servernodes if not self.attrib(x)] + + if servernodes: + if self._servernode is None: + self._servernode = servernodes[0] + else: + prevserver = self._servernode + for i, node in enumerate(servernodes): + if self._servernode == node and len(servernodes) > i + 1: + self._servernode = servernodes[i + 1] + break + if prevserver is not None and self._servernode == prevserver: + self._servernode = None + + if self._servernode: + protocol = self.text(self.get_child("protocol", root=self._servernode)) + address = self.text(self.get_child("address", root=self._servernode)) + unode = self.get_optional_child("user", root=self._servernode) + if unode: + user = self.text(unode) + invnode = self.get_optional_child("inventory", root=self._servernode) + if invnode: + inventory = self.text(invnode) + + pnode = self.get_optional_child("password", root=self._servernode) + if pnode: + passwd = self.text(pnode) + csnode = self.get_optional_child("checksum", root=self._servernode) + if csnode: + chksum_file = self.text(csnode) + icnode = self.get_optional_child("ic_filepath", root=self._servernode) + if icnode: + ic_filepath = self.text(icnode) + + return protocol, address, user, passwd, chksum_file, ic_filepath, inventory
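The rotation through candidate server nodes can be sketched as a plain list operation (a simplified stand-in for the XML node bookkeeping above):

```python
def advance_server(servers, current):
    """Walk an ordered list of servers the way get_next_server does:
    start with the first entry, then hand out each successor in turn,
    and return None once the list is exhausted."""
    if not servers:
        return None
    if current is None:
        return servers[0]
    # find the current server and return the one after it, if any
    for i, server in enumerate(servers[:-1]):
        if server == current:
            return servers[i + 1]
    return None
```

Each call with the previously returned value yields the next fallback server, mirroring how downloads retry against successive server entries.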
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/machines.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/machines.html new file mode 100644 index 00000000000..3a6c733181e --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/machines.html @@ -0,0 +1,607 @@ + + + + + + CIME.XML.machines — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.machines

+"""
+Interface to the config_machines.xml file.  This class inherits from GenericXML.py
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.XML.generic_xml import GenericXML
+from CIME.XML.files import Files
+from CIME.utils import convert_to_unknown_type, get_cime_config
+
+import socket
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class Machines(GenericXML): + def __init__(self, infile=None, files=None, machine=None, extra_machines_dir=None): + """ + initialize an object + if a filename is provided it will be used, + otherwise if a files object is provided it will be used + otherwise create a files object from default values + + If extra_machines_dir is provided, it should be a string giving a path to an + additional directory that will be searched for a config_machines.xml file; if + found, the contents of this file will be appended to the standard + config_machines.xml. An empty string is treated the same as None. + """ + + self.machine_node = None + self.machine = None + self.machines_dir = None + self.custom_settings = {} + self.extra_machines_dir = extra_machines_dir + + schema = None + checked_files = [] + if files is None: + files = Files() + if infile is None: + infile = files.get_value("MACHINES_SPEC_FILE") + schema = files.get_schema("MACHINES_SPEC_FILE") + logger.debug("Verifying using schema {}".format(schema)) + + self.machines_dir = os.path.dirname(infile) + if os.path.exists(infile): + checked_files.append(infile) + else: + expect(False, f"file not found {infile}") + + GenericXML.__init__(self, infile, schema) + + # Append the contents of $HOME/.cime/config_machines.xml if it exists. + # + # Also append the contents of a config_machines.xml file in the directory given by + # extra_machines_dir, if present. + # + # This could cause problems if node matches are repeated when only one is expected. 
+ local_infile = os.path.join( + os.environ.get("HOME"), ".cime", "config_machines.xml" + ) + logger.debug("Infile: {}".format(local_infile)) + + if os.path.exists(local_infile): + GenericXML.read(self, local_infile, schema) + checked_files.append(local_infile) + + if extra_machines_dir: + local_infile = os.path.join(extra_machines_dir, "config_machines.xml") + logger.debug("Infile: {}".format(local_infile)) + if os.path.exists(local_infile): + GenericXML.read(self, local_infile, schema) + checked_files.append(local_infile) + + if machine is None: + if "CIME_MACHINE" in os.environ: + machine = os.environ["CIME_MACHINE"] + else: + cime_config = get_cime_config() + if cime_config.has_option("main", "machine"): + machine = cime_config.get("main", "machine") + if machine is None: + machine = self.probe_machine_name() + + expect( + machine is not None, + f"Could not initialize machine object from {', '.join(checked_files)}. This machine is not available for the target CIME_MODEL.", + ) + self.set_machine(machine) + +
+[docs] + def get_child(self, name=None, attributes=None, root=None, err_msg=None): + if root is None: + root = self.machine_node + return super(Machines, self).get_child(name, attributes, root, err_msg)
+ + +
+[docs] + def get_machines_dir(self): + """ + Return the directory of the machines file + """ + return self.machines_dir
+ + +
+[docs] + def get_extra_machines_dir(self): + return self.extra_machines_dir
+ + +
+[docs] + def get_machine_name(self): + """ + Return the name of the machine + """ + return self.machine
+ + +
+[docs] + def get_node_names(self): + """ + Return the names of all the child nodes for the target machine + """ + nodes = self.get_children(root=self.machine_node) + node_names = [] + for node in nodes: + node_names.append(self.name(node)) + return node_names
+ + +
+[docs] + def get_first_child_nodes(self, nodename): + """ + Return the names of all the child nodes for the target machine + """ + nodes = self.get_children(nodename, root=self.machine_node) + return nodes
+ + +
+[docs] + def list_available_machines(self): + """ + Return a list of machines defined for a given CIME_MODEL + """ + machines = [] + nodes = self.get_children("machine") + for node in nodes: + mach = self.get(node, "MACH") + machines.append(mach) + return machines
+ + +
+[docs] + def probe_machine_name(self, warn=True): + """ + Find a matching regular expression for hostname + in the NODENAME_REGEX field in the file. First match wins. + """ + + names_not_found = [] + + nametomatch = socket.getfqdn() + machine = self._probe_machine_name_one_guess(nametomatch) + + if machine is None: + names_not_found.append(nametomatch) + + nametomatch = socket.gethostname() + machine = self._probe_machine_name_one_guess(nametomatch) + + if machine is None: + names_not_found.append(nametomatch) + + names_not_found_quoted = ["'" + name + "'" for name in names_not_found] + names_not_found_str = " or ".join(names_not_found_quoted) + if warn: + logger.debug( + "Could not find machine match for {}".format( + names_not_found_str + ) + ) + + return machine
+ + + def _probe_machine_name_one_guess(self, nametomatch): + """ + Find a matching regular expression for nametomatch in the NODENAME_REGEX + field in the file. First match wins. Returns None if no match is found. + """ + + machine = None + nodes = self.get_children("machine") + + for node in nodes: + machtocheck = self.get(node, "MACH") + logger.debug("machine is " + machtocheck) + regex_str_node = self.get_optional_child("NODENAME_REGEX", root=node) + regex_str = ( + machtocheck if regex_str_node is None else self.text(regex_str_node) + ) + + if regex_str is not None: + logger.debug("machine regex string is " + regex_str) + # an environment variable can be used + if regex_str.startswith("$ENV"): + machine_value = self.get_resolved_value( + regex_str, allow_unresolved_envvars=True + ) + if not machine_value.startswith("$ENV"): + try: + match, this_machine = machine_value.split(":") + except ValueError: + expect( + False, + "Bad formation of NODENAME_REGEX. Expected envvar:value, found {}".format( + regex_str + ), + ) + if match == this_machine: + machine = machtocheck + break + else: + regex = re.compile(regex_str) + if regex.match(nametomatch): + logger.debug( + "Found machine: {} matches {}".format( + machtocheck, nametomatch + ) + ) + machine = machtocheck + break + + return machine + +
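The first-match-wins probing against NODENAME_REGEX can be illustrated with an ordinary dict of machine name to pattern (illustrative data structure, not CIME's XML; the `$ENV` form is omitted):

```python
import re

def probe_machine(hostname, nodename_regexes):
    """Return the first machine whose NODENAME_REGEX pattern matches the
    given hostname, or None; mirrors the first-match-wins rule above."""
    for machine, pattern in nodename_regexes.items():
        if re.match(pattern, hostname):
            return machine
    return None
```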
+[docs] + def set_machine(self, machine): + """ + Sets the machine block in the Machines object + + >>> machobj = Machines(machine="melvin") + >>> machobj.get_machine_name() + 'melvin' + >>> machobj.set_machine("trump") # doctest: +IGNORE_EXCEPTION_DETAIL + Traceback (most recent call last): + ... + CIMEError: ERROR: No machine trump found + """ + if machine == "Query": + self.machine = machine + elif self.machine != machine or self.machine_node is None: + self.machine_node = super(Machines, self).get_child( + "machine", + {"MACH": machine}, + err_msg="No machine {} found".format(machine), + ) + self.machine = machine + + return machine
+ + + # pylint: disable=arguments-differ +
+[docs] + def get_value(self, name, attributes=None, resolved=True, subgroup=None): + """ + Get Value of fields in the config_machines.xml file + """ + if self.machine_node is None: + logger.debug("Machine object has no machine defined") + return None + + expect(subgroup is None, "This class does not support subgroups") + value = None + + if name in self.custom_settings: + return self.custom_settings[name] + + # COMPILER and MPILIB are special, if called without arguments they get the default value from the + # COMPILERS and MPILIBS lists in the file. + if name == "COMPILER": + value = self.get_default_compiler() + elif name == "MPILIB": + value = self.get_default_MPIlib(attributes) + else: + node = self.get_optional_child( + name, root=self.machine_node, attributes=attributes + ) + if node is not None: + value = self.text(node) + + if resolved: + if value is not None: + value = self.get_resolved_value(value) + elif name in os.environ: + value = os.environ[name] + + value = convert_to_unknown_type(value) + + return value
+ + +
+[docs] + def get_field_from_list(self, listname, reqval=None, attributes=None): + """ + Some of the fields have lists of valid values in the xml, parse these + lists and return the first value if reqval is not provided and reqval + if it is a valid setting for the machine + """ + expect(self.machine_node is not None, "Machine object has no machine defined") + supported_values = self.get_value(listname, attributes=attributes) + # if no match with attributes, try without + if supported_values is None: + supported_values = self.get_value(listname, attributes=None) + + expect( + supported_values is not None, + "No list found for " + listname + " on machine " + self.machine, + ) + supported_values = supported_values.split(",") # pylint: disable=no-member + + if reqval is None or reqval == "UNSET": + return supported_values[0] + + for val in supported_values: + if val == reqval: + return reqval + return None
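The selection rule applied to the comma-separated COMPILERS/MPILIBS lists distills to the following sketch (the real method also retries the XML lookup without attributes and errors out when no list exists):

```python
def field_from_list(supported_csv, reqval=None):
    """Pick from a comma-separated list of supported values: with no
    request (or 'UNSET') return the first entry, i.e. the default;
    otherwise return the requested value only if it is supported."""
    supported = supported_csv.split(",")
    if reqval is None or reqval == "UNSET":
        return supported[0]
    return reqval if reqval in supported else None
```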
+ + +
+[docs] + def get_default_compiler(self): + """ + Get the compiler to use from the list of COMPILERS + """ + cime_config = get_cime_config() + if cime_config.has_option("main", "COMPILER"): + value = cime_config.get("main", "COMPILER") + expect( + self.is_valid_compiler(value), + "User-selected compiler {} is not supported on machine {}".format( + value, self.machine + ), + ) + else: + value = self.get_field_from_list("COMPILERS") + return value
+ + +
+[docs] + def get_default_MPIlib(self, attributes=None): + """ + Get the MPILIB to use from the list of MPILIBS + """ + return self.get_field_from_list("MPILIBS", attributes=attributes)
+ + +
+[docs] + def is_valid_compiler(self, compiler): + """ + Check the compiler is valid for the current machine + """ + return self.get_field_from_list("COMPILERS", reqval=compiler) is not None
+ + +
+[docs] + def is_valid_MPIlib(self, mpilib, attributes=None): + """ + Check the MPILIB is valid for the current machine + """ + return ( + mpilib == "mpi-serial" + or self.get_field_from_list("MPILIBS", reqval=mpilib, attributes=attributes) + is not None + )
+ + +
+[docs] + def has_batch_system(self): + """ + Return if this machine has a batch system + """ + result = False + batch_system = self.get_optional_child("BATCH_SYSTEM", root=self.machine_node) + if batch_system is not None: + result = ( + self.text(batch_system) is not None + and self.text(batch_system) != "none" + ) + logger.debug("Machine {} has batch: {}".format(self.machine, result)) + return result
+ + +
+[docs] + def get_suffix(self, suffix_type): + node = self.get_optional_child("default_run_suffix") + if node is not None: + suffix_node = self.get_optional_child(suffix_type, root=node) + if suffix_node is not None: + return self.text(suffix_node) + + return None
+ + +
+[docs] + def set_value(self, vid, value, subgroup=None, ignore_type=True): + # A temporary cache only + self.custom_settings[vid] = value
+ + +
+[docs] + def print_values(self): + # write out machines + machines = self.get_children("machine") + logger.info("Machines") + for machine in machines: + name = self.get(machine, "MACH") + desc = self.get_child("DESC", root=machine) + os_ = self.get_child("OS", root=machine) + compilers = self.get_child("COMPILERS", root=machine) + max_tasks_per_node = self.get_child("MAX_TASKS_PER_NODE", root=machine) + max_mpitasks_per_node = self.get_child( + "MAX_MPITASKS_PER_NODE", root=machine + ) + max_gpus_per_node = self.get_child("MAX_GPUS_PER_NODE", root=machine) + + print(" {} : {} ".format(name, self.text(desc))) + print(" os ", self.text(os_)) + print(" compilers ", self.text(compilers)) + if max_mpitasks_per_node is not None: + print(" pes/node ", self.text(max_mpitasks_per_node)) + if max_tasks_per_node is not None: + print(" max_tasks/node ", self.text(max_tasks_per_node)) + if max_gpus_per_node is not None: + print(" max_gpus/node ", self.text(max_gpus_per_node))
+ + +
+[docs] + def return_values(self): + """return a dictionary of machine info + This routine is used by external tools in https://github.com/NCAR/CESM_xml2html + """ + machines = self.get_children("machine") + mach_dict = dict() + logger.debug("Machines return values") + for machine in machines: + name = self.get(machine, "MACH") + desc = self.get_child("DESC", root=machine) + mach_dict[(name, "description")] = self.text(desc) + os_ = self.get_child("OS", root=machine) + mach_dict[(name, "os")] = self.text(os_) + compilers = self.get_child("COMPILERS", root=machine) + mach_dict[(name, "compilers")] = self.text(compilers) + max_tasks_per_node = self.get_child("MAX_TASKS_PER_NODE", root=machine) + mach_dict[(name, "max_tasks_per_node")] = self.text(max_tasks_per_node) + max_mpitasks_per_node = self.get_child( + "MAX_MPITASKS_PER_NODE", root=machine + ) + mach_dict[(name, "max_mpitasks_per_node")] = self.text( + max_mpitasks_per_node + ) + max_gpus_per_node = self.get_child("MAX_GPUS_PER_NODE", root=machine) + mach_dict[(name, "max_gpus_per_node")] = self.text(max_gpus_per_node) + + return mach_dict
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/namelist_definition.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/namelist_definition.html new file mode 100644 index 00000000000..8a269438af7 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/namelist_definition.html @@ -0,0 +1,701 @@ + + + + + + CIME.XML.namelist_definition — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.namelist_definition

+"""Interface to `namelist_definition.xml`.
+
+This module's primary class is `NamelistDefinition`, inheriting from
+`EntryID`; a small `CaseInsensitiveDict` helper class is also defined
+here.
+
+# Warnings we typically ignore.
+# pylint:disable=invalid-name
+
+# Disable warnings due to using `standard_module_setup`
+# pylint:disable=wildcard-import,unused-wildcard-import
+
+import re
+import collections
+
+from CIME.namelist import (
+    fortran_namelist_base_value,
+    is_valid_fortran_namelist_literal,
+    character_literal_to_string,
+    expand_literal_list,
+    Namelist,
+    get_fortran_name_only,
+)
+
+from CIME.XML.standard_module_setup import *
+from CIME.XML.entry_id import EntryID
+from CIME.XML.files import Files
+
+logger = logging.getLogger(__name__)
+
+_array_size_re = re.compile(r"^(?P<type>[^(]+)\((?P<size>[^)]+)\)$")
+
+
+
+[docs] +class CaseInsensitiveDict(dict): + + """Basic case insensitive dict with strings only keys. + From https://stackoverflow.com/a/27890005""" + + proxy = {} + + def __init__(self, data): + dict.__init__(self) + self.proxy = dict((k.lower(), k) for k in data) + for k in data: + self[k] = data[k] + + def __contains__(self, k): + return k.lower() in self.proxy + + def __delitem__(self, k): + key = self.proxy[k.lower()] + super(CaseInsensitiveDict, self).__delitem__(key) + del self.proxy[k.lower()] + + def __getitem__(self, k): + key = self.proxy[k.lower()] + return super(CaseInsensitiveDict, self).__getitem__(key) + +
+[docs] + def get(self, k, default=None): + return self[k] if k in self else default
+ + + def __setitem__(self, k, v): + super(CaseInsensitiveDict, self).__setitem__(k, v) + self.proxy[k.lower()] = k
+ + + +
+[docs] +class NamelistDefinition(EntryID): + + """Class representing variable definitions for a namelist. + This class inherits from `EntryID`, and supports most inherited methods; + however, `set_value` is unsupported. + + Additional public methods: + - dict_to_namelist. + - is_valid_value + - validate + """ + + def __init__(self, infile, files=None): + """Construct a `NamelistDefinition` from an XML file.""" + + # if the file is invalid we may not be able to check the version + # but we need to do it this way until we remove the version 1 files + schema = None + if files is None: + files = Files() + schema = files.get_schema("NAMELIST_DEFINITION_FILE") + expect(os.path.isfile(infile), "File {} does not exist".format(infile)) + super(NamelistDefinition, self).__init__(infile, schema=schema) + + self._attributes = {} + self._entry_nodes = [] + self._entry_ids = [] + self._valid_values = {} + self._entry_types = {} + self._group_names = CaseInsensitiveDict({}) + self._nodes = {} + +
+[docs] + def set_node_values(self, name, node): + self._entry_nodes.append(node) + self._entry_ids.append(name) + self._nodes[name] = node + self._entry_types[name] = self._get_type(node) + self._valid_values[name] = self._get_valid_values(node) + self._group_names[name] = self.get_group_name(node)
+ + +
+[docs] + def set_nodes(self, skip_groups=None): + """ + populates the object data types for all nodes that are not part of the skip_groups array + returns nodes that do not have attributes of `skip_default_entry` or `per_stream_entry` + """ + default_nodes = [] + for node in self.get_children("entry"): + name = self.get(node, "id") + skip_default_entry = self.get(node, "skip_default_entry") == "true" + per_stream_entry = self.get(node, "per_stream_entry") == "true" + + if skip_groups: + group_name = self.get_group_name(node) + + if not group_name in skip_groups: + self.set_node_values(name, node) + + if not skip_default_entry and not per_stream_entry: + default_nodes.append(node) + else: + self.set_node_values(name, node) + + if not skip_default_entry and not per_stream_entry: + default_nodes.append(node) + + return default_nodes
+ + +
+[docs] + def get_group_name(self, node=None): + if self.get_version() == 1.0: + group = self.get(node, "group") + elif self.get_version() >= 2.0: + group = self.get_element_text("group", root=node) + return group
+
+    def _get_type(self, node):
+        if self.get_version() == 1.0:
+            type_info = self.get(node, "type")
+        elif self.get_version() >= 2.0:
+            type_info = self._get_type_info(node)
+        return type_info
+
+    def _get_valid_values(self, node):
+        # The "valid_values" attribute is not required, and an empty string has
+        # the same effect as not specifying it.
+        # Returns a list parsed from a comma-separated string in the xml
+        valid_values = ""
+        if self.get_version() == 1.0:
+            valid_values = self.get(node, "valid_values")
+        elif self.get_version() >= 2.0:
+            valid_values = self._get_node_element_info(node, "valid_values")
+        if valid_values == "":
+            valid_values = None
+        if valid_values is not None:
+            valid_values = valid_values.split(",")
+        return valid_values
+
+[docs] + def get_group(self, name): + return self._group_names[name]
+ + +
+[docs] + def rename_group(self, oldgroup, newgroup): + for var in self._group_names: + if self._group_names[var] == oldgroup: + self._group_names[var] = newgroup
+ + +
+[docs] + def add_attributes(self, attributes): + self._attributes = attributes
+ + +
+[docs] + def get_attributes(self): + """Return this object's attributes dictionary""" + return self._attributes
+ + +
+[docs] + def get_entry_nodes(self): + return self._entry_nodes
+ + +
+[docs] + def get_per_stream_entries(self): + entries = [] + nodes = self.get_children("entry") + for node in nodes: + per_stream_entry = self.get(node, "per_stream_entry") == "true" + if per_stream_entry: + entries.append(self.get(node, "id")) + return entries
+ + + # Currently we don't use this object to construct new files, and it's no + # good for that purpose anyway, so stop this function from being called. +
+[docs] + def set_value(self, vid, value, subgroup=None, ignore_type=True): + """This function is not implemented.""" + raise TypeError("NamelistDefinition does not support `set_value`.")
+ + + # In contrast to the entry_id version of this method, this version doesn't support the + # replacement_for_none argument, because it is hard-coded to ''. + # pylint: disable=arguments-differ +
+[docs] + def get_value_match(self, vid, attributes=None, exact_match=True, entry_node=None): + """Return the default value for the variable named `vid`. + + The return value is a list of strings corresponding to the + comma-separated list of entries for the value (length 1 for scalars). If + there is no default value in the file, this returns `None`. + """ + # Merge internal attributes with those passed in. + all_attributes = {} + if self._attributes is not None: + all_attributes.update(self._attributes) + if attributes is not None: + all_attributes.update(attributes) + + if entry_node is None: + entry_node = self._nodes[vid] + # NOTE(wjs, 2021-06-04) In the following call, replacement_for_none='' may not + # actually be needed, but I'm setting it to maintain some old logic, to be safe. + value = super(NamelistDefinition, self).get_value_match( + vid.lower(), + attributes=all_attributes, + exact_match=exact_match, + entry_node=entry_node, + replacement_for_none="", + ) + if value is not None: + value = self._split_defaults_text(value) + + return value
+ + + @staticmethod + def _split_defaults_text(string): + """Take a comma-separated list in a string, and split it into a list.""" + # Some trickiness here; we want to split items on commas, but not inside + # quote-delimited strings. Stripping whitespace is also useful. + value = [] + if len(string): + pos = 0 + delim = None + for i, char in enumerate(string): + if delim is None: + # If not inside a string... + if char in ('"', "'"): + # if we have a quote character, start a string. + delim = char + elif char == ",": + # if we have a comma, this is a new value. + value.append(string[pos:i].strip()) + pos = i + 1 + else: + # If inside a string, the only thing that can happen is the end + # of the string. + if char == delim: + delim = None + value.append(string[pos:].strip()) + return value + +
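The comma-splitting rule (commas inside quoted substrings do not split) can be reproduced standalone:

```python
def split_defaults(text):
    """Split a comma-separated defaults string into stripped items,
    treating single- or double-quoted substrings as atomic, as
    _split_defaults_text above does."""
    items, pos, delim = [], 0, None
    for i, char in enumerate(text):
        if delim is None:
            if char in ('"', "'"):
                delim = char          # entering a quoted string
            elif char == ",":
                items.append(text[pos:i].strip())
                pos = i + 1
        elif char == delim:
            delim = None              # leaving the quoted string
    if text:
        items.append(text[pos:].strip())
    return items
```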
+[docs] + def split_type_string(self, name): + """Split a 'type' attribute string into its component parts. + + The `name` argument is the variable name. + This is used for error reporting purposes. + + The return value is a tuple consisting of the type itself, a length + (which is an integer for character variables, otherwise `None`), and the + size of the array (which is 1 for scalar variables). + """ + type_string = self._entry_types[name] + + # 'char' is frequently used as an abbreviation of 'character'. + type_string = type_string.replace("char", "character") + + # Separate into a size and the rest of the type. + size_match = _array_size_re.search(type_string) + if size_match: + type_string = size_match.group("type") + size_string = size_match.group("size") + try: + size = int(size_string) + except ValueError: + expect( + False, + "In namelist definition, variable {} had the non-integer string {!r} specified as an array size.".format( + name, size_string + ), + ) + else: + size = 1 + + # Separate into a type and an optional length. + type_, star, length = type_string.partition("*") + if star == "*": + # Length allowed only for character variables. + expect( + type_ == "character", + "In namelist definition, length specified for non-character " + "variable {}.".format(name), + ) + # Check that the length is actually an integer, to make the error + # message a bit cleaner if the xml input is bad. + try: + max_len = int(length) + except ValueError: + expect( + False, + "In namelist definition, character variable {} had the non-integer string {!r} specified as a length.".format( + name, length + ), + ) + else: + max_len = None + return type_, max_len, size
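A condensed version of the parsing rules just described (abbreviation expansion, optional `*len` for characters, optional `(size)` array suffix), using the same regex; error handling via `expect` is omitted in this sketch:

```python
import re

# same pattern as the module-level _array_size_re above
_array_size_re = re.compile(r"^(?P<type>[^(]+)\((?P<size>[^)]+)\)$")

def parse_type_string(type_string):
    """Split e.g. 'char*256(10)' into ('character', 256, 10): expand the
    'char' abbreviation, peel off an array size, then an optional
    character length; scalars get size 1, non-characters length None."""
    type_string = type_string.replace("char", "character")
    size = 1
    match = _array_size_re.search(type_string)
    if match:
        type_string = match.group("type")
        size = int(match.group("size"))
    type_, star, length = type_string.partition("*")
    max_len = int(length) if star == "*" else None
    return type_, max_len, size
```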
+ + + @staticmethod + def _canonicalize_value(type_, value): + """Create 'canonical' version of a value for comparison purposes.""" + canonical_value = [fortran_namelist_base_value(scalar) for scalar in value] + canonical_value = [scalar for scalar in canonical_value if scalar != ""] + if type_ == "character": + canonical_value = [ + character_literal_to_string(scalar) for scalar in canonical_value + ] + elif type_ == "integer": + canonical_value = [int(scalar) for scalar in canonical_value] + return canonical_value + +
+[docs] + def is_valid_value(self, name, value): + """Determine whether a value is valid for the named variable. + + The `value` argument must be a list of strings formatted as they would + appear in the namelist (even for scalar variables, in which case the + length of the list is always 1). + """ + # Separate into a type, optional length, and optional size. + type_, max_len, size = self.split_type_string(name) + invalid = [] + + # Check value against type. + for scalar in value: + if not is_valid_fortran_namelist_literal(type_, scalar): + invalid.append(scalar) + if len(invalid) > 0: + logger.warning("Invalid values {}".format(invalid)) + return False + + # Now that we know that the strings as input are valid Fortran, do some + # canonicalization for further checks. + canonical_value = self._canonicalize_value(type_, value) + + # Check maximum length (if applicable). + if max_len is not None: + for scalar in canonical_value: + if len(scalar) > max_len: + return False + + # Check valid value constraints (if applicable). + valid_values = self._valid_values[name] + if valid_values is not None: + expect( + type_ in ("integer", "character"), + "Found valid_values attribute for variable {} with type {}, but valid_values only allowed for character and integer variables.".format( + name, type_ + ), + ) + if type_ == "integer": + compare_list = [int(vv) for vv in valid_values] + else: + compare_list = valid_values + for scalar in canonical_value: + if scalar not in compare_list: + invalid.append(scalar) + if len(invalid) > 0: + logger.warning("Invalid values {}".format(invalid)) + return False + + # Check size of input array. + if len(expand_literal_list(value)) > size: + expect( + False, + "Value index exceeds variable size for variable {}, allowed array length is {} value array size is {}".format( + name, size, len(expand_literal_list(value)) + ), + ) + return True
+ + + def _expect_variable_in_definition(self, name, variable_template): + """Used to get a better error message for an unexpected variable. + case insensitve match""" + + expect( + name in self._entry_ids, + (variable_template + " is not in the namelist definition.").format( + str(name) + ), + ) + + def _user_modifiable_in_variable_definition(self, name): + # Is name user modifiable? + node = self.get_optional_child("entry", attributes={"id": name}) + user_modifiable_only_by_xml = self.get(node, "modify_via_xml") + if user_modifiable_only_by_xml is not None: + expect( + False, + "Cannot change {} in user_nl file: set via xml variable {}".format( + name, user_modifiable_only_by_xml + ), + ) + user_cannot_modify = self.get(node, "cannot_modify_by_user_nl") + if user_cannot_modify is not None: + expect( + False, + "Cannot change {} in user_nl file: {}".format(name, user_cannot_modify), + ) + + def _generate_variable_template(self, filename): + # Improve error reporting when a file name is provided. + if filename is None: + variable_template = "Variable {!r}" + else: + # for the next step we want the name of the original user_nl file not the internal one + # We do this by extracting the component name from the filepath string + if "Buildconf" in filename and "namelist_infile" in filename: + msgfn = "user_nl_" + (filename.split(os.sep)[-2])[:-4] + else: + msgfn = filename + variable_template = "Variable {!r} from file " + repr(str(msgfn)) + return variable_template + +
+[docs] + def validate(self, namelist, filename=None): + """Validate a namelist object against this definition. + + The optional `filename` argument can be used to assist in error + reporting when the namelist comes from a specific, known file. + """ + variable_template = self._generate_variable_template(filename) + + # Iterate through variables. + for group_name in namelist.get_group_names(): + for variable_name in namelist.get_variable_names(group_name): + # Check that the variable is defined... + qualified_variable_name = get_fortran_name_only(variable_name) + self._expect_variable_in_definition( + qualified_variable_name, variable_template + ) + + # Check if can actually change this variable via filename change + if filename is not None: + self._user_modifiable_in_variable_definition( + qualified_variable_name + ) + + # and has the right group name... + var_group = self.get_group(qualified_variable_name) + expect( + var_group == group_name, + ( + variable_template + + " is in a group named {!r}, but should be in {!r}." + ).format(str(variable_name), str(group_name), str(var_group)), + ) + + # and has a valid value. + value = namelist.get_variable_value(group_name, variable_name) + expect( + self.is_valid_value(qualified_variable_name, value), + (variable_template + " has invalid value {!r}.").format( + str(variable_name), [str(scalar) for scalar in value] + ), + )
+ + +
+[docs] + def dict_to_namelist(self, dict_, filename=None): + """Converts a dictionary of name-value pairs to a `Namelist`. + + The input is assumed to be similar to the output of `parse` when + `groupless=True` is set. This function uses the namelist definition file + to look up the namelist group associated with each variable, and uses + this information to create a true `Namelist` object. + + The optional `filename` argument can be used to assist in error + reporting when the namelist comes from a specific, known file. + """ + # Improve error reporting when a file name is provided. + variable_template = self._generate_variable_template(filename) + groups = {} + for variable_name in dict_: + variable_lc = variable_name.lower() + qualified_varname = get_fortran_name_only(variable_lc) + self._expect_variable_in_definition(qualified_varname, variable_template) + group_name = self.get_group(qualified_varname) + expect( + group_name is not None, "No group found for var {}".format(variable_lc) + ) + if group_name not in groups: + groups[group_name] = collections.OrderedDict() + groups[group_name][variable_lc] = dict_[variable_name] + return Namelist(groups)
+ + +
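The grouping step in `dict_to_namelist` — look up each variable's group in the definition, then bucket the values group by group — can be sketched standalone. Here `group_of` is a hypothetical stand-in for the definition file's variable-to-group lookup, and the variable names are illustrative:

```python
import collections

# Hypothetical stand-in for the namelist definition's variable -> group lookup
group_of = {"stop_option": "time", "ntasks_atm": "pelayout"}

def dict_to_groups(flat):
    """Bucket a flat {name: value} dict into per-group ordered dicts."""
    groups = collections.OrderedDict()
    for name, value in flat.items():
        group = group_of[name.lower()]  # an unknown name would raise KeyError
        groups.setdefault(group, collections.OrderedDict())[name.lower()] = value
    return groups

grouped = dict_to_groups({"STOP_OPTION": "ndays", "ntasks_atm": 128})
# grouped == {"time": {"stop_option": "ndays"}, "pelayout": {"ntasks_atm": 128}}
```

As in the real method, names are lowercased before lookup, so `STOP_OPTION` and `stop_option` land in the same slot.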
+[docs] + def get_input_pathname(self, name): + node = self._nodes[name] + if self.get_version() == 1.0: + input_pathname = self.get(node, "input_pathname") + elif self.get_version() >= 2.0: + input_pathname = self._get_node_element_info(node, "input_pathname") + return input_pathname
+ + + # pylint: disable=arguments-differ +
+[docs] + def get_default_value(self, item, attribute=None): + """Return the default value for the variable named `item`. + + The return value is a list of strings corresponding to the + comma-separated list of entries for the value (length 1 for scalars). If + there is no default value in the file, this returns `None`. + """ + # Merge internal attributes with those passed in. + all_attributes = {} + if self._attributes is not None: + all_attributes.update(self._attributes) + if attribute is not None: + all_attributes.update(attribute) + + value = self.get_value_match(item.lower(), all_attributes, True) + return value
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/pes.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/pes.html new file mode 100644 index 00000000000..f7b845a76be --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/pes.html @@ -0,0 +1,360 @@ + + + + + + CIME.XML.pes — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.pes

+"""
+Interface to the config_pes.xml file.  This class inherits from GenericXML.py
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.XML.generic_xml import GenericXML
+from CIME.XML.files import Files
+from CIME.utils import expect
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class Pes(GenericXML): + def __init__(self, infile, files=None): + """ + initialize a files object given input pes specification file + """ + if files is None: + files = Files() + schema = files.get_schema("PES_SPEC_FILE") + logger.debug("DEBUG: infile is {}".format(infile)) + GenericXML.__init__(self, infile, schema=schema) + +
+[docs] + def find_pes_layout(self, grid, compset, machine, pesize_opts="M", mpilib=None): + opes_ntasks = {} + opes_nthrds = {} + opes_rootpe = {} + opes_pstrid = {} + oother_settings = {} + other_settings = {} + o_grid_nodes = [] + comments = None + # Get any override nodes + overrides = self.get_optional_child("overrides") + ocomments = None + if overrides is not None: + o_grid_nodes = self.get_children("grid", root=overrides) + ( + opes_ntasks, + opes_nthrds, + opes_rootpe, + opes_pstrid, + oother_settings, + ocomments, + ) = self._find_matches( + o_grid_nodes, grid, compset, machine, pesize_opts, True + ) + + # Get all the nodes + grid_nodes = self.get_children("grid") + if o_grid_nodes: + gn_set = set(grid_nodes) + ogn_set = set(o_grid_nodes) + gn_set.difference_update(ogn_set) + grid_nodes = list(gn_set) + + ( + pes_ntasks, + pes_nthrds, + pes_rootpe, + pes_pstrid, + other_settings, + comments, + ) = self._find_matches(grid_nodes, grid, compset, machine, pesize_opts, False) + pes_ntasks.update(opes_ntasks) + pes_nthrds.update(opes_nthrds) + pes_rootpe.update(opes_rootpe) + pes_pstrid.update(opes_pstrid) + other_settings.update(oother_settings) + if ocomments is not None: + comments = ocomments + + if mpilib == "mpi-serial": + for i in iter(pes_ntasks): + pes_ntasks[i] = 1 + for i in iter(pes_rootpe): + pes_rootpe[i] = 0 + for i in iter(pes_pstrid): + pes_pstrid[i] = 0 + + logger.info("Pes setting: grid is {} ".format(grid)) + logger.info("Pes setting: compset is {} ".format(compset)) + logger.info("Pes setting: tasks is {} ".format(pes_ntasks)) + logger.info("Pes setting: threads is {} ".format(pes_nthrds)) + logger.info("Pes setting: rootpe is {} ".format(pes_rootpe)) + logger.info("Pes setting: pstrid is {} ".format(pes_pstrid)) + logger.info("Pes other settings: {}".format(other_settings)) + if comments is not None: + logger.info("Pes comments: {}".format(comments)) + + return pes_ntasks, pes_nthrds, pes_rootpe, pes_pstrid, other_settings, comments
+ + + def _find_matches( + self, grid_nodes, grid, compset, machine, pesize_opts, override=False + ): + grid_choice = None + mach_choice = None + compset_choice = None + pesize_choice = None + max_points = -1 + pes_ntasks, pes_nthrds, pes_rootpe, pes_pstrid, other_settings = ( + {}, + {}, + {}, + {}, + {}, + ) + pe_select = None + comment = None + for grid_node in grid_nodes: + grid_match = self.get(grid_node, "name") + if grid_match == "any" or re.search(grid_match, grid): + mach_nodes = self.get_children("mach", root=grid_node) + for mach_node in mach_nodes: + mach_match = self.get(mach_node, "name") + if mach_match == "any" or re.search(mach_match, machine): + pes_nodes = self.get_children("pes", root=mach_node) + for pes_node in pes_nodes: + pesize_match = self.get(pes_node, "pesize") + compset_match = self.get(pes_node, "compset") + if ( + pesize_match == "any" + or ( + pesize_opts is not None + and pesize_match == pesize_opts + ) + ) and ( + compset_match == "any" + or re.search(compset_match, compset) + ): + + points = ( + int(grid_match != "any") * 3 + + int(mach_match != "any") * 7 + + int(compset_match != "any") * 2 + + int(pesize_match != "any") + ) + if override and points > 0: + for node in self.get_children(root=pes_node): + vid = self.name(node) + logger.info("vid is {}".format(vid)) + if "comment" in vid: + comment = self.text(node) + elif "ntasks" in vid: + for child in self.get_children(root=node): + pes_ntasks[ + self.name(child).upper() + ] = int(self.text(child)) + elif "nthrds" in vid: + for child in self.get_children(root=node): + pes_nthrds[ + self.name(child).upper() + ] = int(self.text(child)) + elif "rootpe" in vid: + for child in self.get_children(root=node): + pes_rootpe[ + self.name(child).upper() + ] = int(self.text(child)) + elif "pstrid" in vid: + for child in self.get_children(root=node): + pes_pstrid[ + self.name(child).upper() + ] = int(self.text(child)) + # if the value is already upper case its something else we are trying to 
set + elif vid == self.name(node): + other_settings[vid] = self.text(node) + + else: + if points > max_points: + pe_select = pes_node + max_points = points + mach_choice = mach_match + grid_choice = grid_match + compset_choice = compset_match + pesize_choice = pesize_match + elif points == max_points: + logger.warning( + "mach_choice {} mach_match {}".format( + mach_choice, mach_match + ) + ) + logger.warning( + "grid_choice {} grid_match {}".format( + grid_choice, grid_match + ) + ) + logger.warning( + "compset_choice {} compset_match {}".format( + compset_choice, compset_match + ) + ) + logger.warning( + "pesize_choice {} pesize_match {}".format( + pesize_choice, pesize_match + ) + ) + logger.warning("points = {:d}".format(points)) + expect( + False, + "More than one PE layout matches given PE specs", + ) + if not override: + for node in self.get_children(root=pe_select): + vid = self.name(node) + logger.debug("vid is {}".format(vid)) + if "comment" in vid: + comment = self.text(node) + elif "ntasks" in vid: + for child in self.get_children(root=node): + pes_ntasks[self.name(child).upper()] = int(self.text(child)) + elif "nthrds" in vid: + for child in self.get_children(root=node): + pes_nthrds[self.name(child).upper()] = int(self.text(child)) + elif "rootpe" in vid: + for child in self.get_children(root=node): + pes_rootpe[self.name(child).upper()] = int(self.text(child)) + elif "pstrid" in vid: + for child in self.get_children(root=node): + pes_pstrid[self.name(child).upper()] = int(self.text(child)) + # if the value is already upper case its something else we are trying to set + elif vid == self.name(node): + text = self.text(node).strip() + if len(text): + other_settings[vid] = self.text(node) + if grid_choice != "any" or logger.isEnabledFor(logging.DEBUG): + logger.info("Pes setting: grid match is {} ".format(grid_choice)) + if mach_choice != "any" or logger.isEnabledFor(logging.DEBUG): + logger.info("Pes setting: machine match is {} ".format(mach_choice)) + 
if compset_choice != "any" or logger.isEnabledFor(logging.DEBUG): + logger.info("Pes setting: compset_match is {} ".format(compset_choice)) + if pesize_choice != "any" or logger.isEnabledFor(logging.DEBUG): + logger.info("Pes setting: pesize match is {} ".format(pesize_choice)) + + return pes_ntasks, pes_nthrds, pes_rootpe, pes_pstrid, other_settings, comment
+ +
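The specificity ranking in `_find_matches` weights an explicit machine match above grid, compset, and pesize matches. A minimal sketch of that scoring rule (the candidate tuples below are made up for illustration):

```python
def match_points(grid_match, mach_match, compset_match, pesize_match):
    # "any" is a wildcard and scores nothing; a machine match dominates (7),
    # then grid (3), compset (2), and pesize (1).
    return (
        int(grid_match != "any") * 3
        + int(mach_match != "any") * 7
        + int(compset_match != "any") * 2
        + int(pesize_match != "any")
    )

# An exact machine match beats a grid+pesize match: 7 > 3 + 1
candidates = [("any", "frontier", "any", "any"), ("f19_g17", "any", "any", "M")]
best = max(candidates, key=lambda c: match_points(*c))
```

Ties between candidates with equal points are the error case the real code reports as "More than one PE layout matches given PE specs".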
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/pio.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/pio.html new file mode 100644 index 00000000000..b03914bc52e --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/pio.html @@ -0,0 +1,198 @@ + + + + + + CIME.XML.pio — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.pio

+"""
+Class for config_pio files.  This class inherits from EntryID.py
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.XML.entry_id import EntryID
+from CIME.XML.files import Files
+
+from collections import OrderedDict
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class PIO(EntryID): + def __init__(self, comp_classes, infile=None, files=None): + if infile is None: + if files is None: + files = Files() + infile = files.get_value("PIO_SPEC_FILE") + + EntryID.__init__(self, infile) + + self._components = list(comp_classes) + +
+[docs] + def check_if_comp_var(self, vid, attribute=None, node=None): + comp = None + new_vid = None + for comp in self._components: + if vid.endswith("_" + comp): + new_vid = vid.replace("_" + comp, "", 1) + elif vid.startswith(comp + "_"): + new_vid = vid.replace(comp + "_", "", 1) + elif "_" + comp + "_" in vid: + new_vid = vid.replace(comp + "_", "", 1) + + if new_vid is not None: + return new_vid, comp, True + + return vid, None, False
+ + +
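The name-matching rule in `check_if_comp_var` — try the component name as a suffix, then a prefix, then an infix — can be exercised in isolation. This sketch mirrors that logic outside the class; the variable ids are illustrative:

```python
def split_comp_var(vid, components):
    """Return (stripped_id, component, is_comp_var), following the
    suffix/prefix/infix checks used by check_if_comp_var."""
    for comp in components:
        new_vid = None
        if vid.endswith("_" + comp):
            new_vid = vid.replace("_" + comp, "", 1)
        elif vid.startswith(comp + "_"):
            new_vid = vid.replace(comp + "_", "", 1)
        elif "_" + comp + "_" in vid:
            new_vid = vid.replace(comp + "_", "", 1)
        if new_vid is not None:
            return new_vid, comp, True
    return vid, None, False

# "PIO_STRIDE_ATM" -> per-component variable "PIO_STRIDE" for ATM
# "PIO_DEBUG_LEVEL" -> not component-specific
```

The first component that matches wins, so the order of `components` matters when ids could match more than one component name.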
+[docs]
+    def get_defaults(
+        self, grid=None, compset=None, mach=None, compiler=None, mpilib=None
+    ):  # pylint: disable=unused-argument
+        # should we have an env_pio file
+        defaults = OrderedDict()
+        save_for_last = []
+
+        # Load args into attribute dict
+        attributes = {}
+        for attrib in ["grid", "compset", "mach", "compiler", "mpilib"]:
+            if locals()[attrib] is not None:
+                attributes[attrib] = locals()[attrib]
+
+        # Find defaults
+        for node in self.get_children("entry"):
+            value = self.get_default_value(node, attributes)
+            if value:
+                myid = self.get(node, "id")
+                iscompvar = self.check_if_comp_var(myid)[-1]
+                if iscompvar:
+                    save_for_last.append((myid, value))
+                else:
+                    defaults[myid] = value
+
+        # comp-specific vars must come last so they take precedence over general settings
+        for k, v in save_for_last:
+            defaults[k] = v
+
+        return defaults
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/stream.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/stream.html new file mode 100644 index 00000000000..5347d92b964 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/stream.html @@ -0,0 +1,176 @@ + + + + + + CIME.XML.stream — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.stream

+"""
+Interface to the streams.xml style files.  This class inherits from GenericXML.py
+
+Stream files predate CIME and so do not conform to the entry id format
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.XML.generic_xml import GenericXML
+from CIME.XML.files import Files
+from CIME.utils import expect
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class Stream(GenericXML): + def __init__(self, infile=None, files=None): + """ + initialize an object + """ + if files is None: + files = Files() + schema = None + GenericXML.__init__(self, infile, schema=schema) + +
+[docs] + def get_value(self, item, attribute=None, resolved=True, subgroup=None): + """ + Get Value of fields in a stream.xml file + """ + expect(subgroup is None, "This class does not support subgroups") + value = None + node = None + names = item.split("/") + node = None + for name in names: + node = self.scan_child(name, root=node) + if node is not None: + value = self.text(node).strip() + + if value is None: + # if all else fails + # pylint: disable=assignment-from-none + value = GenericXML.get_value(self, item, attribute, resolved, subgroup) + + if resolved: + if value is not None: + value = self.get_resolved_value(value) + elif item in os.environ: + value = os.environ[item] + + return value
+
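`Stream.get_value` walks a slash-separated element path (`item.split("/")`), descending one child per path component and keeping the text of the last node found. A standalone sketch of that traversal, using `xml.etree` and a made-up stream fragment:

```python
import xml.etree.ElementTree as ET

stream_xml = """
<file>
  <domainInfo>
    <filePath>/data/domains</filePath>
  </domainInfo>
</file>
"""

def lookup(root, item):
    # Descend one child element per slash-separated name, as get_value does
    node, value = root, None
    for name in item.split("/"):
        node = node.find(name) if node is not None else None
        if node is not None and node.text is not None:
            value = node.text.strip()
    return value

root = ET.fromstring(stream_xml)
print(lookup(root, "domainInfo/filePath"))  # /data/domains
```

The real method additionally falls back to `GenericXML.get_value` and to environment variables when the path lookup finds nothing.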
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/test_reporter.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/test_reporter.html new file mode 100644 index 00000000000..83117abdf12 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/test_reporter.html @@ -0,0 +1,217 @@ + + + + + + CIME.XML.test_reporter — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.test_reporter

+"""
+Interface to the testreporter xml.  This class inherits from GenericXML.py
+
+"""
+# pylint: disable=import-error
+import urllib.parse
+import urllib.request
+from CIME.XML.standard_module_setup import *
+from CIME.XML.generic_xml import GenericXML
+import ssl
+
+# pylint: disable=protected-access
+ssl._create_default_https_context = ssl._create_unverified_context
+
+
+
+[docs] +class TestReporter(GenericXML): + def __init__(self): + """ + initialize an object + """ + self.root = None + + GenericXML.__init__( + self, + root_name_override="testrecord", + read_only=False, + infile="TestRecord.xml", + ) + +
+[docs]
+    def setup_header(
+        self, tagname, machine, compiler, mpilib, testroot, testtype, baseline
+    ):
+        #
+        # Create the XML header that the testdb is expecting to receive
+        #
+        for name, text, attribs in [
+            ("tag_name", tagname, None),
+            ("mach", machine, None),
+            ("compiler", compiler, {"version": ""}),
+            ("mpilib", mpilib, {"version": ""}),
+            ("testroot", testroot, None),
+            ("testtype", testtype, None),
+            ("baselinetag", baseline, None),
+        ]:
+            self.make_child(name, attributes=attribs, text=text)
+ + +
+[docs] + def add_result(self, test_name, test_status): + # + # Add a test result to the XML structure. + # + tlelem = self.make_child("tests", {"testname": test_name}) + + for attrib_name, text in [ + ("casestatus", None), + ("comment", test_status["COMMENT"]), + ("compare", test_status["BASELINE"]), + ("memcomp", test_status["MEMCOMP"]), + ("memleak", test_status["MEMLEAK"]), + ("nlcomp", test_status["NLCOMP"]), + ("status", test_status["STATUS"]), + ("tputcomp", test_status["TPUTCOMP"]), + ]: + + self.make_child( + "category", attributes={"name": attrib_name}, text=text, root=tlelem + )
+ + +
+[docs] + def push2testdb(self): + # + # Post test result XML to CESM test database + # + xmlstr = self.get_raw_record() + username = input("Username:") + os.system("stty -echo") + password = input("Password:") + os.system("stty echo") + print() + params = {"username": username, "password": password, "testXML": xmlstr} + url = "https://csegweb.cgd.ucar.edu/testdb/cgi-bin/processXMLtest.cgi" + data = urllib.parse.urlencode(params) + data = data.encode("ascii") + req = urllib.request.Request(url, data) + result = urllib.request.urlopen(req) + print(result.read())
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/testlist.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/testlist.html new file mode 100644 index 00000000000..5c02a76a49d --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/testlist.html @@ -0,0 +1,269 @@ + + + + + + CIME.XML.testlist — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.testlist

+"""
+Interface to the testlist.xml file.  This class inherits from generic_xml.py
+It supports version 2.0 of the testlist.xml file
+
+In version 2 of the file options can be specified to further refine a test or
+set of tests. They can be specified either at the top level, in which case they
+apply to all machines/compilers for this test:
+
+<test ...>
+  <options>
+    <option name="wallclock">00:20</option>
+  </options>
+  ...
+</test>
+
+or at the level of a particular machine/compiler:
+
+<test ...>
+  <machines>
+    <machine ...>
+      <options>
+        <option name="wallclock">00:20</option>
+      </options>
+    </machine>
+  </machines>
+</test>
+
+Currently supported options are:
+
+- walltime: sets the wallclock limit in the queuing system
+
+- memleak_tolerance: specifies the relative memory growth expected for this test
+
+- comment: has no effect, but is written out when printing the test list
+
+- workflow: adds a workflow to the test
+"""
+from CIME.XML.standard_module_setup import *
+
+from CIME.XML.generic_xml import GenericXML
+from CIME.XML.files import Files
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class Testlist(GenericXML): + def __init__(self, infile, files=None): + """ + initialize an object + """ + schema = None + if files is None: + files = Files() + schema = files.get_schema("TESTS_SPEC_FILE") + GenericXML.__init__(self, infile, schema=schema) + expect( + self.get_version() >= 2.0, + "{} is an unsupported version of the testfile format and will be ignored".format( + infile + ), + ) + +
+[docs] + def get_tests( + self, + machine=None, + category=None, + compiler=None, + compset=None, + grid=None, + supported_only=False, + ): + tests = [] + attributes = {} + if compset is not None: + attributes["compset"] = compset + if grid is not None: + attributes["grid"] = grid + + testnodes = self.get_children("test", attributes=attributes) + + machatts = {} + if machine is not None: + machatts["name"] = machine + if category is not None: + machatts["category"] = category + if compiler is not None: + machatts["compiler"] = compiler + + for tnode in testnodes: + if ( + supported_only + and self.has(tnode, "supported") + and self.get(tnode, "supported") == "false" + ): + continue + + machnode = self.get_optional_child("machines", root=tnode) + machnodes = ( + None + if machnode is None + else self.get_children("machine", machatts, root=machnode) + ) + if machnodes: + this_test_node = {} + for key, value in self.attrib(tnode).items(): + if key == "name": + this_test_node["testname"] = value + else: + this_test_node[key] = value + + # Get options that apply to all machines/compilers for this test + options = self.get_children("options", root=tnode) + if len(options) > 0: + optionnodes = self.get_children("option", root=options[0]) + else: + optionnodes = [] + for mach in machnodes: + # this_test_node can include multiple tests + this_test = dict(this_test_node) + for key, value in self.attrib(mach).items(): + if key == "name": + this_test["machine"] = value + else: + this_test[key] = value + this_test["options"] = {} + + for onode in optionnodes: + this_test["options"][self.get(onode, "name")] = self.text(onode) + + # Now get options specific to this machine/compiler + options = self.get_optional_child("options", root=mach) + optionnodes = ( + [] + if options is None + else self.get_children("option", root=options) + ) + for onode in optionnodes: + this_test["options"][self.get(onode, "name")] = self.text(onode) + + tests.append(this_test) + + return tests
+
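Option precedence in `get_tests`: the test-wide `<options>` entries are written into the options dict first, then the machine-level entries are written into the same dict, so a machine-specific option silently replaces the test-wide one. The pattern in miniature, with hypothetical option values:

```python
test_wide = {"wallclock": "00:20", "comment": "slow everywhere"}
machine_level = {"wallclock": "01:00"}

options = {}
options.update(test_wide)      # defaults for every machine/compiler
options.update(machine_level)  # machine-specific values win

print(options["wallclock"])  # 01:00
```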
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/tests.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/tests.html new file mode 100644 index 00000000000..a656411e928 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/tests.html @@ -0,0 +1,221 @@ + + + + + + CIME.XML.tests — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.tests

+"""
+Interface to the config_tests.xml file.  This class inherits from GenericXML
+"""
+from CIME.XML.standard_module_setup import *
+
+from CIME.XML.generic_xml import GenericXML
+from CIME.XML.files import Files
+from CIME.utils import find_system_test
+from CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo
+from CIME.SystemTests.system_tests_compare_n import SystemTestsCompareN
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class Tests(GenericXML): + def __init__(self, infile=None, files=None): + """ + initialize an object interface to file config_tests.xml + """ + if infile is None: + if files is None: + files = Files() + infile = files.get_value("CONFIG_TESTS_FILE") + GenericXML.__init__(self, infile) + # append any component specific config_tests.xml files + for comp in files.get_components("CONFIG_TESTS_FILE"): + if comp is None: + continue + infile = files.get_value("CONFIG_TESTS_FILE", attribute={"component": comp}) + if os.path.isfile(infile): + self.read(infile) + +
+[docs] + def support_single_exe(self, case): + """Checks if case supports --single-exe. + + Raises: + Exception: If system test cannot be found. + Exception: If `case` does not support --single-exe. + """ + testname = case.get_value("TESTCASE") + + try: + test = find_system_test(testname, case)(case, dry_run=True) + except Exception as e: + raise e + else: + # valid if subclass is SystemTestsCommon or _separate_builds is false + valid = ( + not issubclass(type(test), SystemTestsCompareTwo) + and not issubclass(type(test), SystemTestsCompareN) + ) or not test._separate_builds + + if not valid: + case_base_id = case.get_value("CASEBASEID") + + raise Exception( + f"{case_base_id} does not support the '--single-exe' option as it requires separate builds" + )
+ + +
+[docs] + def get_test_node(self, testname): + logger.debug("Get settings for {}".format(testname)) + node = self.get_child("test", {"NAME": testname}) + logger.debug("Found {}".format(self.text(node))) + return node
+ + +
+[docs] + def print_values(self, skip_infrastructure_tests=True): + """ + Print each test type and its description. + + If skip_infrastructure_tests is True, then this does not write + information for tests with the attribute + INFRASTRUCTURE_TEST="TRUE". + """ + all_tests = [] + root = self.get_optional_child("testlist") + if root is not None: + all_tests = self.get_children("test", root=root) + for one_test in all_tests: + if skip_infrastructure_tests: + infrastructure_test = self.get(one_test, "INFRASTRUCTURE_TEST") + if ( + infrastructure_test is not None + and infrastructure_test.upper() == "TRUE" + ): + continue + name = self.get(one_test, "NAME") + desc = self.get_element_text("DESC", root=one_test) + logger.info("{}: {}".format(name, desc))
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/testspec.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/testspec.html new file mode 100644 index 00000000000..9dd9f0d347f --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/testspec.html @@ -0,0 +1,199 @@ + + + + + + CIME.XML.testspec — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.testspec

+"""
+Interface to the testspec.xml file.  This class inherits from generic_xml.py
+"""
+from CIME.XML.standard_module_setup import *
+
+from CIME.XML.generic_xml import GenericXML
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class TestSpec(GenericXML): + def __init__(self, infile): + """ + initialize an object + """ + GenericXML.__init__(self, infile) + self._testnodes = {} + self._testlist_node = None + if os.path.isfile(infile): + testnodes = self.get_children("test") + for node in testnodes: + self._testnodes[self.get(node, "name")] = node + +
+[docs] + def set_header( + self, testroot, machine, testid, baselinetag=None, baselineroot=None + ): + tlelem = self.make_child("testlist") + + for name, text in [ + ("testroot", testroot), + ("machine", machine), + ("testid", testid), + ("baselinetag", baselinetag), + ("baselineroot", baselineroot), + ]: + if text is not None: + self.make_child(name, root=tlelem, text=text) + + self._testlist_node = tlelem
+ + +
+[docs] + def add_test(self, compiler, mpilib, testname): + expect( + testname not in self._testnodes, + "Test {} already in testlist".format(testname), + ) + + telem = self.make_child( + "test", attributes={"name": testname}, root=self._testlist_node + ) + + for name, text in [("compiler", compiler), ("mpilib", mpilib)]: + self.make_child(name, root=telem, text=text) + + self._testnodes[testname] = telem
+ + +
+[docs] + def update_test_status(self, testname, phase, status): + expect( + testname in self._testnodes, + "Test {} not defined in testlist".format(testname), + ) + root = self._testnodes[testname] + pnode = self.get_optional_child("section", {"name": phase}, root=root) + if pnode is not None: + self.set(pnode, "status", status) + else: + self.make_child("section", {"name": phase, "status": status}, root=root)
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/workflow.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/workflow.html new file mode 100644 index 00000000000..a31ad501351 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/XML/workflow.html @@ -0,0 +1,212 @@ + + + + + + CIME.XML.workflow — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.XML.workflow

+"""
+Interface to the config_workflow.xml file.  This class inherits from GenericXML.py
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.XML.generic_xml import GenericXML
+from CIME.XML.files import Files
+from CIME.utils import expect
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs]
+class Workflow(GenericXML):
+    def __init__(self, infile=None, files=None):
+        """
+        initialize an object
+        """
+        if files is None:
+            files = Files()
+        if infile is None:
+            infile = files.get_value("WORKFLOW_SPEC_FILE")
+        expect(infile, "No workflow file defined in {}".format(files.filename))
+
+        schema = files.get_schema("WORKFLOW_SPEC_FILE")
+
+        GenericXML.__init__(self, infile, schema=schema)
+
+        # Append the contents of $HOME/.cime/config_workflow.xml if it exists
+        # This could cause problems if node matches are repeated when only one is expected
+        infile = os.path.join(os.environ.get("HOME"), ".cime", "config_workflow.xml")
+        if os.path.exists(infile):
+            GenericXML.read(self, infile)
+
+[docs] + def get_workflow_jobs(self, machine, workflowid="default"): + """ + Return a list of jobs with the first element the name of the script + and the second a dict of qualifiers for the job + """ + jobs = [] + bnodes = [] + findmore = True + prepend = False + while findmore: + bnode = self.get_optional_child( + "workflow_jobs", attributes={"id": workflowid} + ) + expect( + bnode, + "No workflow {} found in file {}".format(workflowid, self.filename), + ) + if prepend: + bnodes = [bnode] + bnodes + else: + bnodes.append(bnode) + prepend = False + workflow_attribs = self.attrib(bnode) + if "prepend" in workflow_attribs: + workflowid = workflow_attribs["prepend"] + prepend = True + elif "append" in workflow_attribs: + workflowid = workflow_attribs["append"] + else: + findmore = False + for bnode in bnodes: + for jnode in self.get_children(root=bnode): + if self.name(jnode) == "job": + name = self.get(jnode, "name") + jdict = {} + for child in self.get_children(root=jnode): + if self.name(child) == "runtime_parameters": + attrib = self.attrib(child) + if attrib and attrib == {"MACH": machine}: + for rtchild in self.get_children(root=child): + jdict[self.name(rtchild)] = self.text(rtchild) + elif not attrib: + for rtchild in self.get_children(root=child): + if self.name(rtchild) not in jdict: + jdict[self.name(rtchild)] = self.text(rtchild) + + else: + jdict[self.name(child)] = self.text(child) + + jobs.append((name, jdict)) + + return jobs
+
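The prepend/append resolution in `get_workflow_jobs` builds an ordered list of `workflow_jobs` nodes by chasing one `prepend` or `append` attribute at a time. The traversal, reduced to plain dicts (the workflow ids here are hypothetical):

```python
# id -> (prepend_id, append_id) links, as would be read from workflow_attribs
links = {
    "default": (None, "archive"),
    "archive": ("setup", None),
    "setup": (None, None),
}

def resolve_chain(workflowid):
    """Return workflow ids in the order their jobs would be collected."""
    order, prepend, findmore = [], False, True
    while findmore:
        if prepend:
            order.insert(0, workflowid)  # prepended chains go before what we have
        else:
            order.append(workflowid)
        prepend = False
        pre, app = links[workflowid]
        if pre is not None:
            workflowid, prepend = pre, True
        elif app is not None:
            workflowid = app
        else:
            findmore = False
    return order

print(resolve_chain("default"))  # ['setup', 'default', 'archive']
```

Note the chain has no cycle detection here, matching the sketch's assumptions; a self-referencing id would loop forever.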
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/aprun.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/aprun.html new file mode 100644 index 00000000000..80db4280c28 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/aprun.html @@ -0,0 +1,318 @@ + + + + + + CIME.aprun — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.aprun

+"""
+Aprun is far too complex to handle purely through XML. We need python
+code to compute and assemble aprun commands.
+"""
+
+from CIME.XML.standard_module_setup import *
+
+import math
+
+logger = logging.getLogger(__name__)
+
+###############################################################################
+def _get_aprun_cmd_for_case_impl(
+    ntasks,
+    nthreads,
+    rootpes,
+    pstrids,
+    max_tasks_per_node,
+    max_mpitasks_per_node,
+    pio_numtasks,
+    pio_async_interface,
+    compiler,
+    machine,
+    run_exe,
+    extra_args,
+):
+    ###############################################################################
+    """
+    No one really understands this code, but we can at least test it.
+
+    >>> ntasks = [512, 675, 168, 512, 128, 168, 168, 512, 1]
+    >>> nthreads = [2, 2, 2, 2, 4, 2, 2, 2, 1]
+    >>> rootpes = [0, 0, 512, 0, 680, 512, 512, 0, 0]
+    >>> pstrids = [1, 1, 1, 1, 1, 1, 1, 1, 1]
+    >>> max_tasks_per_node = 16
+    >>> max_mpitasks_per_node = 16
+    >>> pio_numtasks = -1
+    >>> pio_async_interface = False
+    >>> compiler = "pgi"
+    >>> machine = "titan"
+    >>> run_exe = "e3sm.exe"
+    >>> _get_aprun_cmd_for_case_impl(ntasks, nthreads, rootpes, pstrids, max_tasks_per_node, max_mpitasks_per_node, pio_numtasks, pio_async_interface, compiler, machine, run_exe, None)
+    ('  -S 4 -n 680 -N 8 -d 2  e3sm.exe : -S 2 -n 128 -N 4 -d 4  e3sm.exe ', 117, 808, 4, 4)
+    >>> compiler = "intel"
+    >>> _get_aprun_cmd_for_case_impl(ntasks, nthreads, rootpes, pstrids, max_tasks_per_node, max_mpitasks_per_node, pio_numtasks, pio_async_interface, compiler, machine, run_exe, None)
+    ('  -S 4 -cc numa_node -n 680 -N 8 -d 2  e3sm.exe : -S 2 -cc numa_node -n 128 -N 4 -d 4  e3sm.exe ', 117, 808, 4, 4)
+
+    >>> ntasks = [64, 64, 64, 64, 64, 64, 64, 64, 1]
+    >>> nthreads = [1, 1, 1, 1, 1, 1, 1, 1, 1]
+    >>> rootpes = [0, 0, 0, 0, 0, 0, 0, 0, 0]
+    >>> pstrids = [1, 1, 1, 1, 1, 1, 1, 1, 1]
+    >>> _get_aprun_cmd_for_case_impl(ntasks, nthreads, rootpes, pstrids, max_tasks_per_node, max_mpitasks_per_node, pio_numtasks, pio_async_interface, compiler, machine, run_exe, None)
+    ('  -S 8 -cc numa_node -n 64 -N 16 -d 1  e3sm.exe ', 4, 64, 16, 1)
+    """
+    if extra_args is None:
+        extra_args = {}
+
+    max_tasks_per_node = 1 if max_tasks_per_node < 1 else max_tasks_per_node
+
+    total_tasks = 0
+    for ntask, rootpe, pstrid in zip(ntasks, rootpes, pstrids):
+        tt = rootpe + (ntask - 1) * pstrid + 1
+        total_tasks = max(tt, total_tasks)
+
+    # Check if we need to add pio's tasks to the total task count
+    if pio_async_interface:
+        total_tasks += pio_numtasks if pio_numtasks > 0 else max_mpitasks_per_node
+
+    # Compute max threads for each MPI task
+    maxt = [0] * total_tasks
+    for ntask, nthrd, rootpe, pstrid in zip(ntasks, nthreads, rootpes, pstrids):
+        c2 = 0
+        while c2 < ntask:
+            s = rootpe + c2 * pstrid
+            if nthrd > maxt[s]:
+                maxt[s] = nthrd
+
+            c2 += 1
+
+    # make sure all maxt values are at least 1
+    for c1 in range(0, total_tasks):
+        if maxt[c1] < 1:
+            maxt[c1] = 1
+
+    global_flags = " ".join(
+        [x for x, y in extra_args.items() if y["position"] == "global"]
+    )
+
+    per_flags = " ".join([x for x, y in extra_args.items() if y["position"] == "per"])
+
+    # Compute task and thread settings for batch commands
+    (
+        tasks_per_node,
+        min_tasks_per_node,
+        task_count,
+        thread_count,
+        max_thread_count,
+        total_node_count,
+        total_task_count,
+        aprun_args,
+    ) = (0, max_mpitasks_per_node, 1, maxt[0], maxt[0], 0, 0, f" {global_flags}")
+    c1list = list(range(1, total_tasks))
+    c1list.append(None)
+    for c1 in c1list:
+        if c1 is None or maxt[c1] != thread_count:
+            tasks_per_node = min(
+                max_mpitasks_per_node, int(max_tasks_per_node / thread_count)
+            )
+
+            tasks_per_node = min(task_count, tasks_per_node)
+
+            # Compute for every subset
+            task_per_numa = int(math.ceil(tasks_per_node / 2.0))
+            # Option for Titan
+            if machine == "titan" and tasks_per_node > 1:
+                aprun_args += " -S {:d}".format(task_per_numa)
+                if compiler == "intel":
+                    aprun_args += " -cc numa_node"
+
+            aprun_args += " -n {:d} -N {:d} -d {:d} {} {} {}".format(
+                task_count,
+                tasks_per_node,
+                thread_count,
+                per_flags,
+                run_exe,
+                "" if c1 is None else ":",
+            )
+
+            node_count = int(math.ceil(float(task_count) / tasks_per_node))
+            total_node_count += node_count
+            total_task_count += task_count
+
+            if tasks_per_node < min_tasks_per_node:
+                min_tasks_per_node = tasks_per_node
+
+            if c1 is not None:
+                thread_count = maxt[c1]
+                max_thread_count = max(max_thread_count, maxt[c1])
+                task_count = 1
+
+        else:
+            task_count += 1
+
+    return (
+        aprun_args,
+        total_node_count,
+        total_task_count,
+        min_tasks_per_node,
+        max_thread_count,
+    )
+
+
+###############################################################################
+
+[docs] +def get_aprun_cmd_for_case(case, run_exe, overrides=None, extra_args=None): + ############################################################################### + """ + Given a case, construct and return the aprun command and optimized node count + """ + models = case.get_values("COMP_CLASSES") + ntasks, nthreads, rootpes, pstrids = [], [], [], [] + for model in models: + model = "CPL" if model == "DRV" else model + for the_list, item_name in zip( + [ntasks, nthreads, rootpes, pstrids], + ["NTASKS", "NTHRDS", "ROOTPE", "PSTRID"], + ): + the_list.append(case.get_value("_".join([item_name, model]))) + max_tasks_per_node = case.get_value("MAX_TASKS_PER_NODE") + if overrides: + overrides = { + x: y if isinstance(y, int) or y is None else int(y) + for x, y in overrides.items() + } + if "max_tasks_per_node" in overrides: + max_tasks_per_node = overrides["max_tasks_per_node"] + if "total_tasks" in overrides: + ntasks = [overrides["total_tasks"] if x > 1 else x for x in ntasks] + if "thread_count" in overrides: + nthreads = [overrides["thread_count"] if x > 1 else x for x in nthreads] + + return _get_aprun_cmd_for_case_impl( + ntasks, + nthreads, + rootpes, + pstrids, + max_tasks_per_node, + case.get_value("MAX_MPITASKS_PER_NODE"), + case.get_value("PIO_NUMTASKS"), + case.get_value("PIO_ASYNC_INTERFACE"), + case.get_value("COMPILER"), + case.get_value("MACH"), + run_exe, + extra_args, + )
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/baselines/performance.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/baselines/performance.html new file mode 100644 index 00000000000..acb7a0c09b0 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/baselines/performance.html @@ -0,0 +1,767 @@ + + + + + + CIME.baselines.performance — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.baselines.performance

+import os
+import glob
+import re
+import gzip
+import logging
+from CIME.config import Config
+from CIME.utils import expect, get_src_root, get_current_commit, get_timestamp
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +def perf_compare_throughput_baseline(case, baseline_dir=None): + """ + Compares model throughput. + + Parameters + ---------- + case : CIME.case.case.Case + Current case object. + baseline_dir : str + Overrides the baseline directory. + + Returns + ------- + below_tolerance : bool + Whether the comparison was below the tolerance. + comment : str + Provides explanation from comparison. + """ + if baseline_dir is None: + baseline_dir = case.get_baseline_dir() + + config = load_coupler_customization(case) + + baseline_file = os.path.join(baseline_dir, "cpl-tput.log") + + baseline = read_baseline_file(baseline_file) + + tolerance = case.get_value("TEST_TPUT_TOLERANCE") + + if tolerance is None: + tolerance = 0.1 + + expect( + tolerance > 0.0, + "Bad value for throughput tolerance in test", + ) + + try: + below_tolerance, comment = config.perf_compare_throughput_baseline( + case, baseline, tolerance + ) + except AttributeError: + below_tolerance, comment = _perf_compare_throughput_baseline( + case, baseline, tolerance + ) + + return below_tolerance, comment
+ + + +
+[docs] +def perf_compare_memory_baseline(case, baseline_dir=None): + """ + Compares model highwater memory usage. + + Parameters + ---------- + case : CIME.case.case.Case + Current case object. + baseline_dir : str + Overrides the baseline directory. + + Returns + ------- + below_tolerance : bool + Whether the comparison was below the tolerance. + comment : str + Provides explanation from comparison. + """ + if baseline_dir is None: + baseline_dir = case.get_baseline_dir() + + config = load_coupler_customization(case) + + baseline_file = os.path.join(baseline_dir, "cpl-mem.log") + + baseline = read_baseline_file(baseline_file) + + tolerance = case.get_value("TEST_MEMLEAK_TOLERANCE") + + if tolerance is None: + tolerance = 0.1 + + try: + below_tolerance, comments = config.perf_compare_memory_baseline( + case, baseline, tolerance + ) + except AttributeError: + below_tolerance, comments = _perf_compare_memory_baseline( + case, baseline, tolerance + ) + + return below_tolerance, comments
+ + + +
+[docs] +def perf_write_baseline(case, basegen_dir, throughput=True, memory=True): + """ + Writes the baseline performance files. + + Parameters + ---------- + case : CIME.case.case.Case + Current case object. + basegen_dir : str + Path to baseline directory. + throughput : bool + If true, write throughput baseline. + memory : bool + If true, write memory baseline. + """ + config = load_coupler_customization(case) + + if throughput: + try: + tput, mode = perf_get_throughput(case, config) + except RuntimeError as e: + logger.debug("Could not get throughput: {0!s}".format(e)) + else: + baseline_file = os.path.join(basegen_dir, "cpl-tput.log") + + write_baseline_file(baseline_file, tput, mode) + + logger.info("Updated throughput baseline to {!s}".format(tput)) + + if memory: + try: + mem, mode = perf_get_memory(case, config) + except RuntimeError as e: + logger.info("Could not get memory usage: {0!s}".format(e)) + else: + baseline_file = os.path.join(basegen_dir, "cpl-mem.log") + + write_baseline_file(baseline_file, mem, mode) + + logger.info("Updated memory usage baseline to {!s}".format(mem))
+ + + +
+[docs] +def load_coupler_customization(case): + """ + Loads customizations from the coupler `cime_config` directory. + + Parameters + ---------- + case : CIME.case.case.Case + Current case object. + + Returns + ------- + CIME.config.Config + Runtime configuration. + """ + comp_root_dir_cpl = case.get_value("COMP_ROOT_DIR_CPL") + + cpl_customize = os.path.join(comp_root_dir_cpl, "cime_config", "customize") + + return Config.load(cpl_customize)
+ + + +
+[docs] +def perf_get_throughput(case, config): + """ + Gets the model throughput. + + First attempts to use a coupler defined method to retrieve the + model's throughput. If this is not defined then the default + method of parsing the coupler log is used. + + Parameters + ---------- + case : CIME.case.case.Case + Current case object. + + Returns + ------- + str or None + Model throughput. + """ + try: + tput, mode = config.perf_get_throughput(case) + except AttributeError: + tput, mode = _perf_get_throughput(case) + + return tput, mode
+ + + +
+[docs] +def perf_get_memory(case, config): + """ + Gets the model memory usage. + + First attempts to use a coupler defined method to retrieve the + model's memory usage. If this is not defined then the default + method of parsing the coupler log is used. + + Parameters + ---------- + case : CIME.case.case.Case + Current case object. + + Returns + ------- + str or None + Model memory usage. + """ + try: + mem, mode = config.perf_get_memory(case) + except AttributeError: + mem, mode = _perf_get_memory(case) + + return mem, mode
+ + + +
+[docs] +def write_baseline_file(baseline_file, value, mode="a"): + """ + Writes value to `baseline_file`. + + Parameters + ---------- + baseline_file : str + Path to the baseline file. + value : str + Value to write. + mode : str + Mode to open file with. + """ + with open(baseline_file, mode) as fd: + fd.write(value)
+ + + +def _perf_get_memory(case, cpllog=None): + """ + Default function to retrieve memory usage from the coupler log. + + If the usage is not available from the log then `None` is returned. + + Parameters + ---------- + case : CIME.case.case.Case + Current case object. + cpllog : str + Overrides the default coupler log. + + Returns + ------- + str or None + Model memory usage or `None`. + + Raises + ------ + RuntimeError + If not enough sample were found. + """ + memlist = perf_get_memory_list(case, cpllog) + + if memlist is None: + raise RuntimeError("Could not get default memory usage") from None + + value = _format_baseline(memlist[-1][1]) + + return value, "a" + + +
+[docs] +def perf_get_memory_list(case, cpllog): + if cpllog is None: + cpllog = get_latest_cpl_logs(case) + else: + cpllog = [ + cpllog, + ] + + try: + memlist = get_cpl_mem_usage(cpllog[0]) + except (FileNotFoundError, IndexError): + memlist = None + + logger.debug("Could not parse memory usage from coupler log") + else: + if len(memlist) <= 3: + raise RuntimeError( + f"Found {len(memlist)} memory usage samples, need at least 4" + ) + + return memlist
+ + + +def _perf_get_throughput(case): + """ + Default function to retrieve throughput from the coupler log. + + If the throughput is not available from the log then `None` is returned. + + Parameters + ---------- + case : CIME.case.case.Case + Current case object. + + Returns + ------- + str or None + Model throughput or `None`. + """ + cpllog = get_latest_cpl_logs(case) + + try: + tput = get_cpl_throughput(cpllog[0]) + except (FileNotFoundError, IndexError): + tput = None + + logger.debug("Could not parse throughput from coupler log") + + if tput is None: + raise RuntimeError("Could not get default throughput") from None + + value = _format_baseline(tput) + + return value, "a" + + +
+[docs] +def get_latest_cpl_logs(case): + """ + find and return the latest cpl log file in the run directory + """ + coupler_log_path = case.get_value("RUNDIR") + + cpllog_name = "drv" if case.get_value("COMP_INTERFACE") == "nuopc" else "cpl" + + cpllogs = glob.glob(os.path.join(coupler_log_path, "{}*.log.*".format(cpllog_name))) + + lastcpllogs = [] + + if cpllogs: + lastcpllogs.append(max(cpllogs, key=os.path.getctime)) + + basename = os.path.basename(lastcpllogs[0]) + + suffix = basename.split(".", 1)[1] + + for log in cpllogs: + if log in lastcpllogs: + continue + + if log.endswith(suffix): + lastcpllogs.append(log) + + return lastcpllogs
+ + + +
+[docs] +def get_cpl_mem_usage(cpllog): + """ + Read memory usage from coupler log. + + Parameters + ---------- + cpllog : str + Path to the coupler log. + + Returns + ------- + list + Memory usage (data, highwater) as recorded by the coupler or empty list. + """ + memlist = [] + + meminfo = re.compile(r".*model date =\s+(\w+).*memory =\s+(\d+\.?\d+).*highwater") + + if cpllog is not None and os.path.isfile(cpllog): + if ".gz" == cpllog[-3:]: + fopen = gzip.open + else: + fopen = open + + with fopen(cpllog, "rb") as f: + for line in f: + m = meminfo.match(line.decode("utf-8")) + + if m: + memlist.append((float(m.group(1)), float(m.group(2)))) + + # Remove the last mem record, it's sometimes artificially high + if len(memlist) > 0: + memlist.pop() + + return memlist
+ + + +
+[docs] +def get_cpl_throughput(cpllog): + """ + Reads throughput from the coupler log. + + Parameters + ---------- + cpllog : str + Path to the coupler log. + + Returns + ------- + float or None + Throughput as recorded by the coupler or None. + """ + if cpllog is not None and os.path.isfile(cpllog): + with gzip.open(cpllog, "rb") as f: + cpltext = f.read().decode("utf-8") + + m = re.search(r"# simulated years / cmp-day =\s+(\d+\.\d+)\s", cpltext) + + if m: + return float(m.group(1)) + return None
+ + + +
+[docs] +def read_baseline_file(baseline_file): + """ + Reads value from `baseline_file`. + + Strips comments and returns the raw content to be decoded. + + Parameters + ---------- + baseline_file : str + Path to the baseline file. + + Returns + ------- + str + Value stored in baseline file without comments. + """ + with open(baseline_file) as fd: + lines = [x.strip() for x in fd.readlines() if not x.startswith("#") and x != ""] + + return "\n".join(lines)
+ + + +def _perf_compare_throughput_baseline(case, baseline, tolerance): + """ + Default throughput baseline comparison. + + Compares the throughput from the coupler to the baseline value. + + Parameters + ---------- + case : CIME.case.case.Case + Current case object. + baseline : list + Lines contained in the baseline file. + tolerance : float + Allowed tolerance for comparison. + + Returns + ------- + below_tolerance : bool + Whether the comparison was below the tolerance. + comment : str + provides explanation from comparison. + """ + current, _ = _perf_get_throughput(case) + + try: + current = float(_parse_baseline(current)) + except (ValueError, TypeError): + comment = "Could not compare throughput to baseline, as baseline had no value." + + return None, comment + + try: + # default baseline is stored as single float + baseline = float(_parse_baseline(baseline)) + except (ValueError, TypeError): + comment = "Could not compare throughput to baseline, as baseline had no value." + + return None, comment + + # comparing ypd so bigger is better + diff = (baseline - current) / baseline + + below_tolerance = None + + if diff is not None: + below_tolerance = diff < tolerance + + info = "Throughput changed by {:.2f}%: baseline={:.3f} sypd, tolerance={:d}%, current={:.3f} sypd".format( + diff * 100, baseline, int(tolerance * 100), current + ) + if below_tolerance: + comment = "TPUTCOMP: " + info + else: + comment = "Error: TPUTCOMP: " + info + + return below_tolerance, comment + + +def _perf_compare_memory_baseline(case, baseline, tolerance): + """ + Default memory usage baseline comparison. + + Compares the highwater memory usage from the coupler to the baseline value. + + Parameters + ---------- + case : CIME.case.case.Case + Current case object. + baseline : list + Lines contained in the baseline file. + tolerance : float + Allowed tolerance for comparison. + + Returns + ------- + below_tolerance : bool + Whether the comparison was below the tolerance. 
+ comment : str + provides explanation from comparison. + """ + try: + current, _ = _perf_get_memory(case) + except RuntimeError as e: + return None, str(e) + + try: + current = float(_parse_baseline(current)) + except (ValueError, TypeError): + comment = "Could not compare memory usage to baseline, as the current value could not be parsed." + + return None, comment + + try: + # default baseline is stored as single float + baseline = float(_parse_baseline(baseline)) + except (ValueError, TypeError): + baseline = 0.0 + + try: + diff = (current - baseline) / baseline + except ZeroDivisionError: + diff = 0.0 + + # Should we check if tolerance is above 0 + below_tolerance = None + comment = "" + + if diff is not None: + below_tolerance = diff < tolerance + + info = "Memory usage highwater changed by {:.2f}%: baseline={:.3f} MB, tolerance={:d}%, current={:.3f} MB".format( + diff * 100, baseline, int(tolerance * 100), current + ) + if below_tolerance: + comment = "MEMCOMP: " + info + else: + comment = "Error: MEMCOMP: " + info + + return below_tolerance, comment + + +def _format_baseline(value): + """ + Encodes value with default baseline format. + + Default format: + sha: <commit sha> date: <date of bless> <value> + + Parameters + ---------- + value : str + Baseline value to encode. + + Returns + ------- + value : str + Baseline entry. + """ + commit_hash = get_current_commit(repo=get_src_root()) + + timestamp = get_timestamp(timestamp_format="%Y-%m-%d_%H:%M:%S") + + return f"sha:{commit_hash} date:{timestamp} {value}\n" + + +def _parse_baseline(data): + """ + Parses default baseline format. + + Default format: + sha: <commit sha> date: <date of bless> <value> + + Parameters + ---------- + data : str + Containing contents of baseline file. + + Returns + ------- + value : str + Value of the latest blessed baseline. 
+ """ + lines = data.split("\n") + lines = [x for x in lines if x != ""] + + try: + value = lines[-1].strip().split(" ")[-1] + except IndexError: + value = None + + return value +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/bless_test_results.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/bless_test_results.html new file mode 100644 index 00000000000..bd857be8a75 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/bless_test_results.html @@ -0,0 +1,629 @@ + + + + + + CIME.bless_test_results — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.bless_test_results

+import CIME.compare_namelists, CIME.simple_compare
+from CIME.test_scheduler import NAMELIST_PHASE
+from CIME.utils import (
+    run_cmd,
+    get_scripts_root,
+    EnvironmentContext,
+    parse_test_name,
+    match_any,
+)
+from CIME.config import Config
+from CIME.test_status import *
+from CIME.hist_utils import generate_baseline, compare_baseline
+from CIME.case import Case
+from CIME.test_utils import get_test_status_files
+from CIME.baselines.performance import (
+    perf_compare_throughput_baseline,
+    perf_compare_memory_baseline,
+    perf_write_baseline,
+)
+import os, time
+
+logger = logging.getLogger(__name__)
+
+
+def _bless_throughput(
+    case,
+    test_name,
+    baseline_root,
+    baseline_name,
+    report_only,
+    force,
+):
+    success = True
+    reason = None
+    below_threshold = False
+
+    baseline_dir = os.path.join(
+        baseline_root, baseline_name, case.get_value("CASEBASEID")
+    )
+
+    try:
+        below_threshold, comment = perf_compare_throughput_baseline(
+            case, baseline_dir=baseline_dir
+        )
+    except FileNotFoundError as e:
+        comment = f"Could not read throughput file: {e!s}"
+    except Exception as e:
+        comment = f"Error comparing throughput baseline: {e!s}"
+
+    if below_threshold:
+        logger.info("Throughput diff appears to have been already resolved.")
+    else:
+        logger.info(comment)
+
+        if not report_only and (
+            force or input("Update this diff (y/n)? ").upper() in ["Y", "YES"]
+        ):
+            try:
+                perf_write_baseline(case, baseline_dir, memory=False)
+            except Exception as e:
+                success = False
+
+                reason = f"Failed to write baseline throughput for {test_name!r}: {e!s}"
+
+    return success, reason
+
+
+def _bless_memory(
+    case,
+    test_name,
+    baseline_root,
+    baseline_name,
+    report_only,
+    force,
+):
+    success = True
+    reason = None
+    below_threshold = False
+
+    baseline_dir = os.path.join(
+        baseline_root, baseline_name, case.get_value("CASEBASEID")
+    )
+
+    try:
+        below_threshold, comment = perf_compare_memory_baseline(
+            case, baseline_dir=baseline_dir
+        )
+    except FileNotFoundError as e:
+        comment = f"Could not read memory usage file: {e!s}"
+    except Exception as e:
+        comment = f"Error comparing memory baseline: {e!s}"
+
+    if below_threshold:
+        logger.info("Memory usage diff appears to have been already resolved.")
+    else:
+        logger.info(comment)
+
+        if not report_only and (
+            force or input("Update this diff (y/n)? ").upper() in ["Y", "YES"]
+        ):
+            try:
+                perf_write_baseline(case, baseline_dir, throughput=False)
+            except Exception as e:
+                success = False
+
+                reason = f"Failed to write baseline memory usage for test {test_name!r}: {e!s}"
+
+    return success, reason
+
+
+###############################################################################
+
+[docs] +def bless_namelists( + test_name, + report_only, + force, + pes_file, + baseline_name, + baseline_root, + new_test_root=None, + new_test_id=None, +): + ############################################################################### + # Be aware that restart test will overwrite the original namelist files + # with versions of the files that should not be blessed. This forces us to + # re-run create_test. + + # Update namelist files + logger.info("Test '{}' had namelist diff".format(test_name)) + if not report_only and ( + force or input("Update namelists (y/n)? ").upper() in ["Y", "YES"] + ): + config = Config.instance() + + create_test_gen_args = ( + " -g {} ".format(baseline_name) + if config.create_test_flag_mode == "cesm" + else " -g -b {} ".format(baseline_name) + ) + + if new_test_root is not None: + create_test_gen_args += " --test-root={0} --output-root={0} ".format( + new_test_root + ) + if new_test_id is not None: + create_test_gen_args += " -t {}".format(new_test_id) + + if pes_file is not None: + create_test_gen_args += " --pesfile {}".format(pes_file) + + stat, out, _ = run_cmd( + "{}/create_test {} --namelists-only {} --baseline-root {} -o".format( + get_scripts_root(), test_name, create_test_gen_args, baseline_root + ), + combine_output=True, + ) + if stat != 0: + return False, "Namelist regen failed: '{}'".format(out) + else: + return True, None + else: + return True, None
+ + + +
+[docs] +def bless_history(test_name, case, baseline_name, baseline_root, report_only, force): + real_user = case.get_value("REALUSER") + with EnvironmentContext(USER=real_user): + + baseline_full_dir = os.path.join( + baseline_root, baseline_name, case.get_value("CASEBASEID") + ) + + cmp_result, cmp_comments = compare_baseline( + case, baseline_dir=baseline_full_dir, outfile_suffix=None + ) + if cmp_result: + logger.info("Diff appears to have been already resolved.") + return True, None + else: + logger.info(cmp_comments) + if not report_only and ( + force or input("Update this diff (y/n)? ").upper() in ["Y", "YES"] + ): + gen_result, gen_comments = generate_baseline( + case, baseline_dir=baseline_full_dir + ) + if not gen_result: + logger.warning( + "Hist file bless FAILED for test {}".format(test_name) + ) + return False, "Generate baseline failed: {}".format(gen_comments) + else: + logger.info(gen_comments) + return True, None + else: + return True, None
+ + + +
+[docs] +def bless_test_results( + baseline_name, + baseline_root, + test_root, + compiler, + test_id=None, + namelists_only=False, + hist_only=False, + report_only=False, + force=False, + pes_file=None, + bless_tests=None, + no_skip_pass=False, + new_test_root=None, + new_test_id=None, + exclude=None, + bless_tput=False, + bless_mem=False, + bless_perf=False, + **_, # Capture all for extra +): + bless_all = not (namelists_only | hist_only | bless_tput | bless_mem | bless_perf) + + test_status_files = get_test_status_files(test_root, compiler, test_id=test_id) + + # auto-adjust test-id if multiple rounds of tests were matched + timestamps = set() + for test_status_file in test_status_files: + timestamp = os.path.basename(os.path.dirname(test_status_file)).split(".")[-1] + timestamps.add(timestamp) + + if len(timestamps) > 1: + logger.warning( + "Multiple sets of tests were matched! Selected only most recent tests." + ) + + most_recent = sorted(timestamps)[-1] + logger.info("Matched test batch is {}".format(most_recent)) + + bless_tests_counts = [] + if bless_tests: + bless_tests_counts = dict([(bless_test, 0) for bless_test in bless_tests]) + + # compile excludes into single regex + if exclude is not None: + exclude = re.compile("|".join([f"({x})" for x in exclude])) + + broken_blesses = [] + for test_status_file in test_status_files: + if not most_recent in test_status_file: + logger.info("Skipping {}".format(test_status_file)) + continue + + test_dir = os.path.dirname(test_status_file) + ts = TestStatus(test_dir=test_dir) + test_name = ts.get_name() + testopts = parse_test_name(test_name)[1] + testopts = [] if testopts is None else testopts + build_only = "B" in testopts + # TODO test_name will never be None otherwise `parse_test_name` would raise an error + if test_name is None: + case_dir = os.path.basename(test_dir) + test_name = CIME.utils.normalize_case_id(case_dir) + if not bless_tests or match_any(test_name, bless_tests_counts): + broken_blesses.append( + 
( + "unknown", + "test had invalid TestStatus file: '{}'".format( + test_status_file + ), + ) + ) + continue + else: + continue + + # Must pass tests to continue + has_no_tests = bless_tests in [[], None] + match_test_name = match_any(test_name, bless_tests_counts) + excluded = exclude.match(test_name) if exclude else False + + if (not has_no_tests and not match_test_name) or excluded: + logger.debug("Skipping {!r}".format(test_name)) + + continue + + overall_result, phase = ts.get_overall_test_status( + ignore_namelists=True, + ignore_memleak=True, + check_throughput=False, + check_memory=False, + ) + + # See if we need to bless namelist + if namelists_only or bless_all: + if no_skip_pass: + nl_bless = True + else: + nl_bless = ts.get_status(NAMELIST_PHASE) != TEST_PASS_STATUS + else: + nl_bless = False + + hist_bless, tput_bless, mem_bless = [False] * 3 + + # Skip if test is build only i.e. testopts contains "B" + if not build_only: + bless_needed = is_bless_needed( + test_name, ts, broken_blesses, overall_result, no_skip_pass, phase + ) + + # See if we need to bless baselines + if hist_only or bless_all: + hist_bless = bless_needed + + if bless_tput or bless_perf: + tput_bless = bless_needed + + if not tput_bless: + tput_bless = ts.get_status(THROUGHPUT_PHASE) != TEST_PASS_STATUS + + if bless_mem or bless_perf: + mem_bless = bless_needed + + if not mem_bless: + mem_bless = ts.get_status(MEMCOMP_PHASE) != TEST_PASS_STATUS + + # Now, do the bless + if not nl_bless and not hist_bless and not tput_bless and not mem_bless: + logger.info( + "Nothing to bless for test: {}, overall status: {}".format( + test_name, overall_result + ) + ) + else: + logger.debug("Determined blesses for {!r}".format(test_name)) + logger.debug("nl_bless = {}".format(nl_bless)) + logger.debug("hist_bless = {}".format(hist_bless)) + logger.debug("tput_bless = {}".format(tput_bless)) + logger.debug("mem_bless = {}".format(mem_bless)) + + logger.info( + 
"###############################################################################" + ) + logger.info( + "Blessing results for test: {}, most recent result: {}".format( + test_name, overall_result + ) + ) + logger.info("Case dir: {}".format(test_dir)) + logger.info( + "###############################################################################" + ) + if not force: + time.sleep(2) + + with Case(test_dir) as case: + # Resolve baseline_name and baseline_root + if baseline_name is None: + baseline_name_resolved = case.get_value("BASELINE_NAME_CMP") + if not baseline_name_resolved: + cime_root = CIME.utils.get_cime_root() + baseline_name_resolved = CIME.utils.get_current_branch( + repo=cime_root + ) + else: + baseline_name_resolved = baseline_name + + if baseline_root is None: + baseline_root_resolved = case.get_value("BASELINE_ROOT") + else: + baseline_root_resolved = baseline_root + + if baseline_name_resolved is None: + broken_blesses.append( + (test_name, "Could not determine baseline name") + ) + continue + + if baseline_root_resolved is None: + broken_blesses.append( + (test_name, "Could not determine baseline root") + ) + continue + + # Bless namelists + if nl_bless: + success, reason = bless_namelists( + test_name, + report_only, + force, + pes_file, + baseline_name_resolved, + baseline_root_resolved, + new_test_root=new_test_root, + new_test_id=new_test_id, + ) + if not success: + broken_blesses.append((test_name, reason)) + + # Bless hist files + if hist_bless: + if "HOMME" in test_name: + success = False + reason = "HOMME tests cannot be blessed with bless_for_tests" + else: + success, reason = bless_history( + test_name, + case, + baseline_name_resolved, + baseline_root_resolved, + report_only, + force, + ) + + if not success: + broken_blesses.append((test_name, reason)) + + if tput_bless: + success, reason = _bless_throughput( + case, + test_name, + baseline_root_resolved, + baseline_name_resolved, + report_only, + force, + ) + + if not success: + 
broken_blesses.append((test_name, reason)) + + if mem_bless: + success, reason = _bless_memory( + case, + test_name, + baseline_root_resolved, + baseline_name_resolved, + report_only, + force, + ) + + if not success: + broken_blesses.append((test_name, reason)) + + # Emit a warning if items in bless_tests did not match anything + if bless_tests: + for bless_test, bless_count in bless_tests_counts.items(): + if bless_count == 0: + logger.warning( + """ +bless test arg '{}' did not match any tests in test_root {} with +compiler {} and test_id {}. It's possible that one of these arguments +had a mistake (likely compiler or testid).""".format( + bless_test, test_root, compiler, test_id + ) + ) + + # Make sure user knows that some tests were not blessed + success = True + for broken_bless, reason in broken_blesses: + logger.warning( + "FAILED TO BLESS TEST: {}, reason {}".format(broken_bless, reason) + ) + success = False + + return success
+ + + +
+[docs] +def is_bless_needed(test_name, ts, broken_blesses, overall_result, no_skip_pass, phase): + needed = False + + run_result = ts.get_status(RUN_PHASE) + + if run_result is None: + broken_blesses.append((test_name, "no run phase")) + logger.warning("Test '{}' did not make it to run phase".format(test_name)) + needed = False + elif run_result != TEST_PASS_STATUS: + broken_blesses.append((test_name, "run phase did not pass")) + logger.warning( + "Test '{}' run phase did not pass, not safe to bless, test status = {}".format( + test_name, ts.phase_statuses_dump() + ) + ) + needed = False + elif overall_result == TEST_FAIL_STATUS: + broken_blesses.append((test_name, "test did not pass")) + logger.warning( + "Test '{}' did not pass due to phase {}, not safe to bless, test status = {}".format( + test_name, phase, ts.phase_statuses_dump() + ) + ) + needed = False + + elif no_skip_pass: + needed = True + else: + needed = ts.get_status(BASELINE_PHASE) != TEST_PASS_STATUS + + return needed
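The bless-safety rules that `is_bless_needed` encodes can be reduced to a small standalone predicate. The sketch below is illustrative only: `bless_needed` and its plain-string statuses are hypothetical stand-ins for the `TestStatus` queries (`ts.get_status(...)`) and the `TEST_PASS_STATUS`/`TEST_FAIL_STATUS` constants used above.

```python
# Hypothetical, simplified model of the decision logic in is_bless_needed.
# This is NOT the CIME TestStatus API; statuses are plain strings here.
TEST_PASS = "PASS"
TEST_FAIL = "FAIL"

def bless_needed(run_status, overall, baseline_status, no_skip_pass=False):
    """Return True only when blessing a baseline is safe and useful."""
    if run_status != TEST_PASS:
        return False  # test never completed its run phase (or run failed)
    if overall == TEST_FAIL:
        return False  # some non-baseline phase failed; blessing is unsafe
    if no_skip_pass:
        return True   # caller asked to re-bless even passing comparisons
    return baseline_status != TEST_PASS  # bless only failed baseline compares
```

Only the baseline-comparison status decides the default case; a run-phase failure or an overall test failure always vetoes the bless.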
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/build.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/build.html new file mode 100644 index 00000000000..21483b12595 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/build.html @@ -0,0 +1,1490 @@ + + + + + + CIME.build — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.build

+"""
+functions for building CIME models
+"""
+import glob, shutil, time, threading, subprocess
+from pathlib import Path
+from CIME.XML.standard_module_setup import *
+from CIME.utils import (
+    get_model,
+    analyze_build_log,
+    stringify_bool,
+    run_and_log_case_status,
+    get_timestamp,
+    run_sub_or_cmd,
+    run_cmd,
+    get_batch_script_for_job,
+    gzip_existing_file,
+    safe_copy,
+    is_python_executable,
+    get_logging_options,
+    import_from_file,
+)
+from CIME.config import Config
+from CIME.locked_files import lock_file, unlock_file
+from CIME.XML.files import Files
+
+logger = logging.getLogger(__name__)
+
+config = Config.instance()
+
+_CMD_ARGS_FOR_BUILD = (
+    "CASEROOT",
+    "CASETOOLS",
+    "CIMEROOT",
+    "SRCROOT",
+    "COMP_INTERFACE",
+    "COMPILER",
+    "DEBUG",
+    "EXEROOT",
+    "RUNDIR",
+    "INCROOT",
+    "LIBROOT",
+    "MACH",
+    "MPILIB",
+    "NINST_VALUE",
+    "OS",
+    "PIO_VERSION",
+    "SHAREDLIBROOT",
+    "SMP_PRESENT",
+    "USE_ESMF_LIB",
+    "USE_MOAB",
+    "CAM_CONFIG_OPTS",
+    "COMP_ATM",
+    "COMP_ICE",
+    "COMP_GLC",
+    "COMP_LND",
+    "COMP_OCN",
+    "COMP_ROF",
+    "COMP_WAV",
+    "COMPARE_TO_NUOPC",
+    "HOMME_TARGET",
+    "OCN_SUBMODEL",
+    "CISM_USE_TRILINOS",
+    "USE_TRILINOS",
+    "USE_ALBANY",
+    "USE_PETSC",
+)
+
+
+
+[docs] +class CmakeTmpBuildDir(object): + """ + Use to create a temporary cmake build dir for the purposes of querying + Macros. + """ + + def __init__(self, macroloc=None, rootdir=None, tmpdir=None): + """ + macroloc: The dir containing the cmake macros, default is pwd. This can be a case or CMAKE_MACROS_DIR + rootdir: The dir containing the tmpdir, default is macroloc + tmpdir: The name of the tempdir, default is "cmaketmp" + """ + self._macroloc = os.getcwd() if macroloc is None else macroloc + self._rootdir = self._macroloc if rootdir is None else rootdir + self._tmpdir = "cmaketmp" if tmpdir is None else tmpdir + + self._entered = False + +
+[docs] + def get_full_tmpdir(self): + return os.path.join(self._rootdir, self._tmpdir)
+ + + def __enter__(self): + cmake_macros_dir = os.path.join(self._macroloc, "cmake_macros") + expect( + os.path.isdir(cmake_macros_dir), + "Cannot create cmake temp build dir, no {} macros found".format( + cmake_macros_dir + ), + ) + cmake_lists = os.path.join(cmake_macros_dir, "CMakeLists.txt") + full_tmp_dir = self.get_full_tmpdir() + Path(full_tmp_dir).mkdir(parents=False, exist_ok=True) + safe_copy(cmake_lists, full_tmp_dir) + + self._entered = True + + return self + + def __exit__(self, *args): + shutil.rmtree(self.get_full_tmpdir()) + self._entered = False + +
+[docs] + def get_makefile_vars(self, case=None, comp=None, cmake_args=None): + """ + Run cmake and process output to a list of variable settings + + case can be None if caller is providing their own cmake args + """ + expect( + self._entered, "Should only call get_makefile_vars within a with statement" + ) + if case is None: + expect( + cmake_args is not None, + "Need either a case or hardcoded cmake_args to generate makefile vars", + ) + + cmake_args = ( + get_standard_cmake_args(case, "DO_NOT_USE") + if cmake_args is None + else cmake_args + ) + dcomp = "-DCOMP_NAME={}".format(comp) if comp else "" + output = run_cmd_no_fail( + "cmake -DCONVERT_TO_MAKE=ON {dcomp} {cmake_args} .".format( + dcomp=dcomp, cmake_args=cmake_args + ), + combine_output=True, + from_dir=self.get_full_tmpdir(), + ) + + lines_to_keep = [] + for line in output.splitlines(): + if "CIME_SET_MAKEFILE_VAR" in line and "BUILD_INTERNAL_IGNORE" not in line: + lines_to_keep.append(line) + + output_to_keep = "\n".join(lines_to_keep) + "\n" + output_to_keep = ( + output_to_keep.replace("CIME_SET_MAKEFILE_VAR ", "") + .replace("CPPDEFS := ", "CPPDEFS := $(CPPDEFS) ") + .replace("SLIBS := ", "SLIBS := $(SLIBS) ") + + "\n" + ) + + return output_to_keep
+
+ + + +
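`CmakeTmpBuildDir` follows the standard create-on-`__enter__`, remove-on-`__exit__` context-manager pattern. `TmpBuildDir` below is a hypothetical, stripped-down sketch of just that pattern; it omits the `cmake_macros` existence check, the `CMakeLists.txt` copy, and the makefile-variable querying that the real class performs.

```python
# Illustrative sketch of the temp-build-dir lifecycle (hypothetical class,
# not part of the CIME API): the directory exists only inside the with-block.
import os
import shutil

class TmpBuildDir:
    def __init__(self, rootdir=None, tmpdir="cmaketmp"):
        self._rootdir = os.getcwd() if rootdir is None else rootdir
        self._tmpdir = tmpdir
        self._entered = False

    def get_full_tmpdir(self):
        return os.path.join(self._rootdir, self._tmpdir)

    def __enter__(self):
        # create the scratch dir on entry
        os.makedirs(self.get_full_tmpdir(), exist_ok=True)
        self._entered = True
        return self

    def __exit__(self, *args):
        # always clean up, even if the with-block raised
        shutil.rmtree(self.get_full_tmpdir())
        self._entered = False
```

The `_entered` flag mirrors the real class's guard that `get_makefile_vars` may only be called inside the `with` statement.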
+
[docs] +def generate_makefile_macro(case, caseroot): + """ + Generates a flat Makefile macro file based on the CMake cache system. + This macro is only used by certain sharedlibs since components use CMake. + Since indirection based on comp_name is allowed for sharedlibs, each sharedlib must generate + its own macro. + """ + with CmakeTmpBuildDir(macroloc=caseroot) as cmake_tmp: + + # Append CMakeLists.txt with compset specific stuff + comps = _get_compset_comps(case) + comps.extend( + [ + "mct", + "pio{}".format(case.get_value("PIO_VERSION")), + "gptl", + "csm_share", + "csm_share_cpl7", + "mpi-serial", + ] + ) + cmake_macro = os.path.join(caseroot, "Macros.cmake") + expect( + os.path.exists(cmake_macro), + "Cannot generate Makefile macro without {}".format(cmake_macro), + ) + + # run once with no COMP_NAME + no_comp_output = cmake_tmp.get_makefile_vars(case=case) + all_output = no_comp_output + no_comp_lines = no_comp_output.splitlines() + + for comp in comps: + comp_output = cmake_tmp.get_makefile_vars(case=case, comp=comp) + # The Tools/Makefile may have already added things to CPPDEFS and SLIBS + comp_lines = comp_output.splitlines() + first = True + for comp_line in comp_lines: + if comp_line not in no_comp_lines: + if first: + all_output += 'ifeq "$(COMP_NAME)" "{}"\n'.format(comp) + first = False + + all_output += " " + comp_line + "\n" + + if not first: + all_output += "endif\n" + + with open(os.path.join(caseroot, "Macros.make"), "w") as fd: + fd.write( + """ +# This file is auto-generated, do not edit. If you want to change +# sharedlib flags, you can edit the cmake_macros in this case. You +# can change flags for specific sharedlibs only by checking COMP_NAME. + +""" + ) + fd.write(all_output)
+ + + +
+[docs] +def get_standard_makefile_args(case, shared_lib=False): + make_args = "CIME_MODEL={} ".format(case.get_value("MODEL")) + make_args += " SMP={} ".format(stringify_bool(case.get_build_threaded())) + expect( + not (uses_kokkos(case) and not shared_lib), + "Kokkos is not supported for classic Makefile build system", + ) + for var in _CMD_ARGS_FOR_BUILD: + make_args += xml_to_make_variable(case, var) + + return make_args
+ + + +def _get_compset_comps(case): + comps = [] + driver = case.get_value("COMP_INTERFACE") + for comp_class in case.get_values("COMP_CLASSES"): + comp = case.get_value("COMP_{}".format(comp_class)) + if comp == "cpl": + comp = "driver" + if comp == "s{}".format(comp_class.lower()) and driver == "nuopc": + comp = "" + else: + comps.append(comp) + return comps + + +
+[docs] +def get_standard_cmake_args(case, sharedpath): + cmake_args = "-DCIME_MODEL={} ".format(case.get_value("MODEL")) + cmake_args += "-DSRC_ROOT={} ".format(case.get_value("SRCROOT")) + cmake_args += " -Dcompile_threaded={} ".format( + stringify_bool(case.get_build_threaded()) + ) + # check settings for GPU + gpu_type = case.get_value("GPU_TYPE") + gpu_offload = case.get_value("GPU_OFFLOAD") + if gpu_type != "none": + expect( + gpu_offload != "none", + "Both GPU_TYPE and GPU_OFFLOAD must be defined if either is", + ) + cmake_args += f" -DGPU_TYPE={gpu_type} -DGPU_OFFLOAD={gpu_offload}" + else: + expect( + gpu_offload == "none", + "Both GPU_TYPE and GPU_OFFLOAD must be defined if either is", + ) + + ocn_model = case.get_value("COMP_OCN") + atm_dycore = case.get_value("CAM_DYCORE") + if ocn_model == "mom" or (atm_dycore and atm_dycore == "fv3"): + cmake_args += " -DUSE_FMS=TRUE " + + cmake_args += " -DINSTALL_SHAREDPATH={} ".format( + os.path.join(case.get_value("EXEROOT"), sharedpath) + ) + + # if sharedlibs are common to entire suite, they cannot be customized + # per case/compset + if not config.common_sharedlibroot: + cmake_args += " -DUSE_KOKKOS={} ".format(stringify_bool(uses_kokkos(case))) + comps = _get_compset_comps(case) + cmake_args += " -DCOMP_NAMES='{}' ".format(";".join(comps)) + + for var in _CMD_ARGS_FOR_BUILD: + cmake_args += xml_to_make_variable(case, var, cmake=True) + + atm_model = case.get_value("COMP_ATM") + if atm_model == "scream": + cmake_args += xml_to_make_variable(case, "HOMME_TARGET", cmake=True) + + # Disable compiler checks + cmake_args += " -DCMAKE_C_COMPILER_WORKS=1 -DCMAKE_CXX_COMPILER_WORKS=1 -DCMAKE_Fortran_COMPILER_WORKS=1" + + return cmake_args
+ + + +
+
[docs] +def xml_to_make_variable(case, varname, cmake=False): + varvalue = case.get_value(varname) + if varvalue is None: + return "" + if isinstance(varvalue, bool): + varvalue = stringify_bool(varvalue) + elif isinstance(varvalue, str): + # ensure that paths passed to make do not end in / or contain // + varvalue = varvalue.replace("//", "/") + if varvalue.endswith("/"): + varvalue = varvalue[:-1] + if cmake or isinstance(varvalue, str): + return '{}{}="{}" '.format("-D" if cmake else "", varname, varvalue) + else: + return "{}={} ".format(varname, varvalue)
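The conversion that `xml_to_make_variable` performs can be shown with a self-contained helper. `to_flag` below is a hypothetical stand-in that takes the value directly instead of reading it from a case object, and it assumes `stringify_bool` renders booleans as `TRUE`/`FALSE`.

```python
# Hypothetical standalone version of the make/cmake flag formatting:
# booleans become TRUE/FALSE, paths are normalized, cmake flags get -D
# and a quoted value. Values here are made up for illustration.
def to_flag(varname, varvalue, cmake=False):
    if varvalue is None:
        return ""
    if isinstance(varvalue, bool):
        varvalue = "TRUE" if varvalue else "FALSE"  # assumed stringify_bool behavior
    elif isinstance(varvalue, str):
        # normalize paths: no // inside, no trailing /
        varvalue = varvalue.replace("//", "/")
        if varvalue.endswith("/"):
            varvalue = varvalue[:-1]
    if cmake or isinstance(varvalue, str):
        return '{}{}="{}" '.format("-D" if cmake else "", varname, varvalue)
    return "{}={} ".format(varname, varvalue)
```

Note that a converted boolean is a string by the time the quoting branch runs, so booleans are always emitted quoted; only non-string, non-cmake values (e.g. an integer) take the bare `NAME=value` form.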
+ + + +############################################################################### +
+[docs] +def uses_kokkos(case): + ############################################################################### + cam_target = case.get_value("CAM_TARGET") + # atm_comp = case.get_value("COMP_ATM") # scream does not use the shared kokkoslib for now + + return config.use_kokkos and cam_target in ( + "preqx_kokkos", + "theta-l", + "theta-l_kokkos", + )
+ + + +############################################################################### +def _build_model( + build_threaded, + exeroot, + incroot, + complist, + lid, + caseroot, + cimeroot, + compiler, + buildlist, + comp_interface, +): + ############################################################################### + logs = [] + thread_bad_results = [] + libroot = os.path.join(exeroot, "lib") + bldroot = None + for model, comp, nthrds, _, config_dir in complist: + if buildlist is not None and model.lower() not in buildlist: + continue + + # aquap has a dependency on atm so we will build it after the threaded loop + if comp == "aquap": + logger.debug("Skip aquap ocn build here") + continue + + # coupler handled separately + if model == "cpl": + continue + + # special case for clm + # clm 4_5 and newer is a shared (as in sharedlibs, shared by all tests) library + # (but not in E3SM) and should be built in build_libraries + if config.shared_clm_component and comp == "clm": + continue + else: + logger.info(" - Building {} Library ".format(model)) + + smp = nthrds > 1 or build_threaded + + file_build = os.path.join(exeroot, "{}.bldlog.{}".format(model, lid)) + bldroot = os.path.join(exeroot, model, "obj") + logger.debug("bldroot is {}".format(bldroot)) + logger.debug("libroot is {}".format(libroot)) + + # make sure bldroot and libroot exist + for build_dir in [bldroot, libroot]: + if not os.path.exists(build_dir): + os.makedirs(build_dir) + + # build the component library + # thread_bad_results captures error output from thread (expected to be empty) + # logs is a list of log files to be compressed and added to the case logs/bld directory + t = threading.Thread( + target=_build_model_thread, + args=( + config_dir, + model, + comp, + caseroot, + libroot, + bldroot, + incroot, + file_build, + thread_bad_results, + smp, + compiler, + ), + ) + t.start() + + logs.append(file_build) + + # Wait for threads to finish + while threading.active_count() > 1: + time.sleep(1) + + 
expect(not thread_bad_results, "\n".join(thread_bad_results)) + + # + # Now build the executable + # + + if not buildlist: + cime_model = get_model() + file_build = os.path.join(exeroot, "{}.bldlog.{}".format(cime_model, lid)) + + ufs_driver = os.environ.get("UFS_DRIVER") + if config.ufs_alternative_config and ufs_driver == "nems": + config_dir = os.path.join( + cimeroot, os.pardir, "src", "model", "NEMS", "cime", "cime_config" + ) + else: + files = Files(comp_interface=comp_interface) + if comp_interface == "nuopc": + config_dir = os.path.join( + os.path.dirname(files.get_value("BUILD_LIB_FILE", {"lib": "CMEPS"})) + ) + else: + config_dir = os.path.join( + files.get_value("COMP_ROOT_DIR_CPL"), "cime_config" + ) + + expect( + os.path.exists(config_dir), + "Config directory not found {}".format(config_dir), + ) + if "cpl" in complist: + bldroot = os.path.join(exeroot, "cpl", "obj") + if not os.path.isdir(bldroot): + os.makedirs(bldroot) + logger.info( + "Building {} from {}/buildexe with output to {} ".format( + cime_model, config_dir, file_build + ) + ) + with open(file_build, "w") as fd: + stat = run_cmd( + "{}/buildexe {} {} {} ".format(config_dir, caseroot, libroot, bldroot), + from_dir=bldroot, + arg_stdout=fd, + arg_stderr=subprocess.STDOUT, + )[0] + + analyze_build_log("{} exe".format(cime_model), file_build, compiler) + expect(stat == 0, "BUILD FAIL: buildexe failed, cat {}".format(file_build)) + + # Copy the just-built ${MODEL}.exe to ${MODEL}.exe.$LID + safe_copy( + "{}/{}.exe".format(exeroot, cime_model), + "{}/{}.exe.{}".format(exeroot, cime_model, lid), + ) + + logs.append(file_build) + + return logs + + +############################################################################### +def _build_model_cmake( + exeroot, + complist, + lid, + buildlist, + comp_interface, + sharedpath, + separate_builds, + ninja, + dry_run, + case, +): + ############################################################################### + cime_model = get_model() + bldroot = 
os.path.join(exeroot, "cmake-bld") + libroot = os.path.join(exeroot, "lib") + bldlog = os.path.join(exeroot, "{}.bldlog.{}".format(cime_model, lid)) + srcroot = case.get_value("SRCROOT") + gmake_j = case.get_value("GMAKE_J") + gmake = case.get_value("GMAKE") + + # make sure bldroot and libroot exist + for build_dir in [bldroot, libroot]: + if not os.path.exists(build_dir): + os.makedirs(build_dir) + + # Component-specific cmake args. Cmake requires all component inputs to be available + # regardless of requested build list. We do not want to re-invoke cmake + # if it has already been called. + do_timing = "/usr/bin/time -p " if os.path.exists("/usr/bin/time") else "" + if not os.path.exists(os.path.join(bldroot, "CMakeCache.txt")): + cmp_cmake_args = "" + all_models = [] + files = Files(comp_interface=comp_interface) + for model, _, _, _, config_dir in complist: + # Create the Filepath and CIME_cppdefs files + if model == "cpl": + config_dir = os.path.join( + files.get_value("COMP_ROOT_DIR_CPL"), "cime_config" + ) + + cmp_cmake_args += _create_build_metadata_for_component( + config_dir, libroot, bldroot, case + ) + all_models.append(model) + + # Call CMake + cmake_args = get_standard_cmake_args(case, sharedpath) + cmake_env = "" + ninja_path = os.path.join(srcroot, "externals/ninja/bin") + if ninja: + cmake_args += " -GNinja " + cmake_env += "PATH={}:$PATH ".format(ninja_path) + + # Glue all pieces together: + # - cmake environment + # - common (i.e. 
project-wide) cmake args + # - component-specific cmake args + # - path to src folder + cmake_cmd = "{} {}cmake {} {} {}/components".format( + cmake_env, do_timing, cmake_args, cmp_cmake_args, srcroot + ) + stat = 0 + if dry_run: + logger.info("CMake cmd:\ncd {} && {}\n\n".format(bldroot, cmake_cmd)) + else: + logger.info( + "Configuring full {} model with output to file {}".format( + cime_model, bldlog + ) + ) + logger.info( + " Calling cmake directly, see top of log file for specific call" + ) + with open(bldlog, "w") as fd: + fd.write("Configuring with cmake cmd:\n{}\n\n".format(cmake_cmd)) + + # Add logging before running + cmake_cmd = "({}) >> {} 2>&1".format(cmake_cmd, bldlog) + stat = run_cmd(cmake_cmd, from_dir=bldroot)[0] + expect( + stat == 0, + "BUILD FAIL: cmake config {} failed, cat {}".format(cime_model, bldlog), + ) + + # Set up buildlist + if not buildlist: + if separate_builds: + buildlist = all_models + else: + buildlist = ["cpl"] + + if "cpl" in buildlist: + buildlist.remove("cpl") + buildlist.append("cpl") # must come at end + + # Call Make + logs = [] + for model in buildlist: + t1 = time.time() + + make_cmd = "{}{} -j {}".format( + do_timing, + gmake if not ninja else "{} -v".format(os.path.join(ninja_path, "ninja")), + gmake_j, + ) + if model != "cpl": + make_cmd += " {}".format(model) + curr_log = os.path.join(exeroot, "{}.bldlog.{}".format(model, lid)) + model_name = model + else: + curr_log = bldlog + model_name = cime_model if buildlist == ["cpl"] else model + + if dry_run: + logger.info("Build cmd:\ncd {} && {}\n\n".format(bldroot, make_cmd)) + else: + logger.info( + "Building {} model with output to file {}".format(model_name, curr_log) + ) + logger.info(" Calling make, see top of log file for specific call") + with open(curr_log, "a") as fd: + fd.write("\n\nBuilding with cmd:\n{}\n\n".format(make_cmd)) + + # Add logging before running + make_cmd = "({}) >> {} 2>&1".format(make_cmd, curr_log) + stat = run_cmd(make_cmd, 
from_dir=bldroot)[0] + expect( + stat == 0, + "BUILD FAIL: build {} failed, cat {}".format(model_name, curr_log), + ) + + t2 = time.time() + if separate_builds: + logger.info(" {} built in {:f} seconds".format(model_name, (t2 - t1))) + + logs.append(curr_log) + + expect(not dry_run, "User requested dry-run only, terminating build") + + # Copy the just-built ${MODEL}.exe to ${MODEL}.exe.$LID + if "cpl" in buildlist: + safe_copy( + "{}/{}.exe".format(exeroot, cime_model), + "{}/{}.exe.{}".format(exeroot, cime_model, lid), + ) + + return logs + + +############################################################################### +def _build_checks( + case, + build_threaded, + comp_interface, + debug, + compiler, + mpilib, + complist, + ninst_build, + smp_value, + model_only, + buildlist, +): + ############################################################################### + """ + check if a build needs to be done and warn if a clean is warranted first + returns the relative sharedpath directory for shared libraries + """ + smp_build = case.get_value("SMP_BUILD") + build_status = case.get_value("BUILD_STATUS") + expect( + comp_interface in ("mct", "moab", "nuopc"), + "Only supporting mct, nuopc, or moab comp_interfaces at this time, found {}".format( + comp_interface + ), + ) + smpstr = "" + ninst_value = "" + for model, _, nthrds, ninst, _ in complist: + if nthrds > 1: + build_threaded = True + if build_threaded: + smpstr += "{}1".format(model[0]) + else: + smpstr += "{}0".format(model[0]) + ninst_value += "{}{:d}".format((model[0]), ninst) + + case.set_value("SMP_VALUE", smpstr) + case.set_value("NINST_VALUE", ninst_value) + + debugdir = "debug" if debug else "nodebug" + threaddir = "threads" if build_threaded else "nothreads" + sharedpath = os.path.join(compiler, mpilib, debugdir, threaddir, comp_interface) + + logger.debug( + "compiler={} mpilib={} debugdir={} threaddir={}".format( + compiler, mpilib, debugdir, threaddir + ) + ) + + expect( + ninst_build == ninst_value 
or ninst_build == "0", + """ +ERROR, NINST VALUES HAVE CHANGED + NINST_BUILD = {} + NINST_VALUE = {} + A manual clean of your obj directories is strongly recommended + You should execute the following: + ./case.build --clean + Then rerun the build script interactively + ---- OR ---- + You can override this error message at your own risk by executing: + ./xmlchange -file env_build.xml -id NINST_BUILD -val 0 + Then rerun the build script interactively +""".format( + ninst_build, ninst_value + ), + ) + + expect( + smp_build == smpstr or smp_build == "0", + """ +ERROR, SMP VALUES HAVE CHANGED + SMP_BUILD = {} + SMP_VALUE = {} + smpstr = {} + A manual clean of your obj directories is strongly recommended + You should execute the following: + ./case.build --clean + Then rerun the build script interactively + ---- OR ---- + You can override this error message at your own risk by executing: + ./xmlchange -file env_build.xml -id SMP_BUILD -val 0 + Then rerun the build script interactively +""".format( + smp_build, smp_value, smpstr + ), + ) + + expect( + build_status == 0, + """ +ERROR env_build HAS CHANGED + A manual clean of your obj directories is required + You should execute the following: + ./case.build --clean-all +""", + ) + + case.set_value("BUILD_COMPLETE", False) + + # User may have rm -rf their build directory + case.create_dirs() + + case.flush() + if not model_only and not buildlist: + logger.info("Generating component namelists as part of build") + case.create_namelists() + + return sharedpath + + +############################################################################### +def _build_libraries( + case, + exeroot, + sharedpath, + caseroot, + cimeroot, + libroot, + lid, + compiler, + buildlist, + comp_interface, + complist, +): + ############################################################################### + + shared_lib = os.path.join(exeroot, sharedpath, "lib") + shared_inc = os.path.join(exeroot, sharedpath, "include") + for shared_item in 
[shared_lib, shared_inc]: + if not os.path.exists(shared_item): + os.makedirs(shared_item) + + mpilib = case.get_value("MPILIB") + ufs_driver = os.environ.get("UFS_DRIVER") + cpl_in_complist = False + for l in complist: + if "cpl" in l: + cpl_in_complist = True + if ufs_driver: + logger.info("UFS_DRIVER is set to {}".format(ufs_driver)) + if ufs_driver and ufs_driver == "nems" and not cpl_in_complist: + libs = [] + elif case.get_value("MODEL") == "cesm" and comp_interface == "nuopc": + libs = ["gptl", "mct", "pio", "csm_share"] + elif case.get_value("MODEL") == "cesm": + libs = ["gptl", "mct", "pio", "csm_share", "csm_share_cpl7"] + elif case.get_value("MODEL") == "e3sm": + libs = ["gptl", "mct", "spio", "csm_share"] + else: + libs = ["gptl", "mct", "pio", "csm_share"] + + if mpilib == "mpi-serial": + libs.insert(0, mpilib) + + if uses_kokkos(case): + libs.append("kokkos") + + # Build shared code of CDEPS nuopc data models + build_script = {} + if comp_interface == "nuopc" and (not ufs_driver or ufs_driver != "nems"): + libs.append("CDEPS") + + ocn_model = case.get_value("COMP_OCN") + + atm_dycore = case.get_value("CAM_DYCORE") + if ocn_model == "mom" or (atm_dycore and atm_dycore == "fv3"): + libs.append("FMS") + + files = Files(comp_interface=comp_interface) + for lib in libs: + build_script[lib] = files.get_value("BUILD_LIB_FILE", {"lib": lib}) + + sharedlibroot = os.path.abspath(case.get_value("SHAREDLIBROOT")) + # Check if we need to build our own cprnc + if case.get_value("TEST"): + cprnc_loc = case.get_value("CCSM_CPRNC") + full_lib_path = os.path.join(sharedlibroot, compiler, "cprnc") + if not cprnc_loc or not os.path.exists(cprnc_loc): + case.set_value("CCSM_CPRNC", os.path.join(full_lib_path, "cprnc")) + if not os.path.isdir(full_lib_path): + os.makedirs(full_lib_path) + libs.insert(0, "cprnc") + + logs = [] + + # generate Makefile macro + generate_makefile_macro(case, caseroot) + + for lib in libs: + if buildlist is not None and lib not in buildlist: + 
continue + + if lib == "csm_share" or lib == "csm_share_cpl7": + # csm_share adds its own dir name + full_lib_path = os.path.join(sharedlibroot, sharedpath) + elif lib == "mpi-serial": + full_lib_path = os.path.join(sharedlibroot, sharedpath, "mct", lib) + elif lib == "cprnc": + full_lib_path = os.path.join(sharedlibroot, compiler, "cprnc") + else: + full_lib_path = os.path.join(sharedlibroot, sharedpath, lib) + + # pio build creates its own directory + if lib != "pio" and not os.path.isdir(full_lib_path): + os.makedirs(full_lib_path) + + file_build = os.path.join(exeroot, "{}.bldlog.{}".format(lib, lid)) + if lib in build_script.keys(): + my_file = build_script[lib] + else: + my_file = os.path.join( + cimeroot, "CIME", "build_scripts", "buildlib.{}".format(lib) + ) + expect( + os.path.exists(my_file), + "Build script {} for component {} not found.".format(my_file, lib), + ) + logger.info("Building {} with output to file {}".format(lib, file_build)) + + run_sub_or_cmd( + my_file, + [full_lib_path, os.path.join(exeroot, sharedpath), caseroot], + "buildlib", + [full_lib_path, os.path.join(exeroot, sharedpath), case], + logfile=file_build, + ) + + analyze_build_log(lib, file_build, compiler) + logs.append(file_build) + if lib == "pio": + bldlog = open(file_build, "r") + for line in bldlog: + if re.search("Current setting for", line): + logger.warning(line) + + # clm not a shared lib for E3SM + if config.shared_clm_component and (buildlist is None or "lnd" in buildlist): + comp_lnd = case.get_value("COMP_LND") + if comp_lnd == "clm": + logging.info(" - Building clm library ") + esmfdir = "esmf" if case.get_value("USE_ESMF_LIB") else "noesmf" + bldroot = os.path.join( + sharedlibroot, sharedpath, comp_interface, esmfdir, "clm", "obj" + ) + libroot = os.path.join(exeroot, sharedpath, comp_interface, esmfdir, "lib") + incroot = os.path.join( + exeroot, sharedpath, comp_interface, esmfdir, "include" + ) + file_build = os.path.join(exeroot, "lnd.bldlog.{}".format(lid)) + 
config_lnd_dir = os.path.dirname(case.get_value("CONFIG_LND_FILE")) + + for ndir in [bldroot, libroot, incroot]: + if not os.path.isdir(ndir): + os.makedirs(ndir) + + smp = "SMP" in os.environ and os.environ["SMP"] == "TRUE" + # thread_bad_results captures error output from thread (expected to be empty) + # logs is a list of log files to be compressed and added to the case logs/bld directory + thread_bad_results = [] + _build_model_thread( + config_lnd_dir, + "lnd", + comp_lnd, + caseroot, + libroot, + bldroot, + incroot, + file_build, + thread_bad_results, + smp, + compiler, + ) + logs.append(file_build) + expect(not thread_bad_results, "\n".join(thread_bad_results)) + + case.flush() # python sharedlib subs may have made XML modifications + return logs + + +############################################################################### +def _build_model_thread( + config_dir, + compclass, + compname, + caseroot, + libroot, + bldroot, + incroot, + file_build, + thread_bad_results, + smp, + compiler, +): + ############################################################################### + logger.info("Building {} with output to {}".format(compclass, file_build)) + t1 = time.time() + cmd = os.path.join(caseroot, "SourceMods", "src." 
+ compname, "buildlib") + if os.path.isfile(cmd): + logger.warning("WARNING: using local buildlib script for {}".format(compname)) + else: + cmd = os.path.join(config_dir, "buildlib") + expect(os.path.isfile(cmd), "Could not find buildlib for {}".format(compname)) + + compile_cmd = "COMP_CLASS={compclass} COMP_NAME={compname} {cmd} {caseroot} {libroot} {bldroot} ".format( + compclass=compclass, + compname=compname, + cmd=cmd, + caseroot=caseroot, + libroot=libroot, + bldroot=bldroot, + ) + if config.enable_smp: + compile_cmd = "SMP={} {}".format(stringify_bool(smp), compile_cmd) + + if is_python_executable(cmd): + logging_options = get_logging_options() + if logging_options != "": + compile_cmd = compile_cmd + logging_options + + with open(file_build, "w") as fd: + stat = run_cmd( + compile_cmd, from_dir=bldroot, arg_stdout=fd, arg_stderr=subprocess.STDOUT + )[0] + + if stat != 0: + thread_bad_results.append( + "BUILD FAIL: {}.buildlib failed, cat {}".format(compname, file_build) + ) + + analyze_build_log(compclass, file_build, compiler) + + for mod_file in glob.glob(os.path.join(bldroot, "*_[Cc][Oo][Mm][Pp]_*.mod")): + safe_copy(mod_file, incroot) + + t2 = time.time() + logger.info("{} built in {:f} seconds".format(compname, (t2 - t1))) + + +############################################################################### +def _create_build_metadata_for_component(config_dir, libroot, bldroot, case): + ############################################################################### + """ + Ensure that crucial Filepath and CIME_CPPDEFS files exist for this component. + In many cases, the bld/configure script will have already created these. 
+ """ + bc_path = os.path.join(config_dir, "buildlib_cmake") + expect(os.path.exists(bc_path), "Missing: {}".format(bc_path)) + buildlib = import_from_file( + "buildlib_cmake", os.path.join(config_dir, "buildlib_cmake") + ) + cmake_args = buildlib.buildlib(bldroot, libroot, case) + return "" if cmake_args is None else cmake_args + + +############################################################################### +def _clean_impl(case, cleanlist, clean_all, clean_depends): + ############################################################################### + exeroot = os.path.abspath(case.get_value("EXEROOT")) + case.load_env() + if clean_all: + # If cleanlist is empty just remove the bld directory + expect(exeroot is not None, "No EXEROOT defined in case") + if os.path.isdir(exeroot): + logging.info("cleaning directory {}".format(exeroot)) + shutil.rmtree(exeroot) + + # if clean_all is True also remove the sharedlibpath + sharedlibroot = os.path.abspath(case.get_value("SHAREDLIBROOT")) + expect(sharedlibroot is not None, "No SHAREDLIBROOT defined in case") + if sharedlibroot != exeroot and os.path.isdir(sharedlibroot): + logging.warning("cleaning directory {}".format(sharedlibroot)) + shutil.rmtree(sharedlibroot) + + else: + expect( + (cleanlist is not None and len(cleanlist) > 0) + or (clean_depends is not None and len(clean_depends)), + "Empty cleanlist not expected", + ) + gmake = case.get_value("GMAKE") + + cleanlist = [] if cleanlist is None else cleanlist + clean_depends = [] if clean_depends is None else clean_depends + things_to_clean = cleanlist + clean_depends + + cmake_comp_root = os.path.join(exeroot, "cmake-bld", "cmake") + casetools = case.get_value("CASETOOLS") + classic_cmd = "{} -f {} {}".format( + gmake, + os.path.join(casetools, "Makefile"), + get_standard_makefile_args(case, shared_lib=True), + ) + + for clean_item in things_to_clean: + logging.info("Cleaning {}".format(clean_item)) + cmake_path = os.path.join(cmake_comp_root, clean_item) + if 
os.path.exists(cmake_path): + # Item was created by cmake build system + clean_cmd = "cd {} && {} clean".format(cmake_path, gmake) + else: + # Item was created by classic build system + # do I need this? generate_makefile_macro(case, caseroot, clean_item) + + clean_cmd = "{} {}{}".format( + classic_cmd, + "clean" if clean_item in cleanlist else "clean_depends", + clean_item, + ) + + logger.info("calling {}".format(clean_cmd)) + run_cmd_no_fail(clean_cmd) + + # unlink Locked files directory + unlock_file("env_build.xml") + + # reset following values in xml files + case.set_value("SMP_BUILD", str(0)) + case.set_value("NINST_BUILD", str(0)) + case.set_value("BUILD_STATUS", str(0)) + case.set_value("BUILD_COMPLETE", "FALSE") + case.flush() + + +############################################################################### +def _case_build_impl( + caseroot, + case, + sharedlib_only, + model_only, + buildlist, + save_build_provenance, + separate_builds, + ninja, + dry_run, +): + ############################################################################### + + t1 = time.time() + + expect( + not (sharedlib_only and model_only), + "Contradiction: both sharedlib_only and model_only", + ) + expect( + not (dry_run and not model_only), + "Dry-run is only for model builds, please build sharedlibs first", + ) + logger.info("Building case in directory {}".format(caseroot)) + logger.info("sharedlib_only is {}".format(sharedlib_only)) + logger.info("model_only is {}".format(model_only)) + + expect(os.path.isdir(caseroot), "'{}' is not a valid directory".format(caseroot)) + os.chdir(caseroot) + + expect( + os.path.exists(get_batch_script_for_job(case.get_primary_job())), + "ERROR: must invoke case.setup script before calling build script ", + ) + + cimeroot = case.get_value("CIMEROOT") + + comp_classes = case.get_values("COMP_CLASSES") + + case.check_lockedfiles(skip="env_batch") + + # Retrieve relevant case data + # This environment variable gets set for cesm Make and + # needs 
to be unset before building again. + if "MODEL" in os.environ: + del os.environ["MODEL"] + build_threaded = case.get_build_threaded() + exeroot = os.path.abspath(case.get_value("EXEROOT")) + incroot = os.path.abspath(case.get_value("INCROOT")) + libroot = os.path.abspath(case.get_value("LIBROOT")) + multi_driver = case.get_value("MULTI_DRIVER") + complist = [] + ninst = 1 + comp_interface = case.get_value("COMP_INTERFACE") + for comp_class in comp_classes: + if comp_class == "CPL": + config_dir = None + if multi_driver: + ninst = case.get_value("NINST_MAX") + else: + config_dir = os.path.dirname( + case.get_value("CONFIG_{}_FILE".format(comp_class)) + ) + if multi_driver: + ninst = 1 + else: + ninst = case.get_value("NINST_{}".format(comp_class)) + + comp = case.get_value("COMP_{}".format(comp_class)) + if comp_interface == "nuopc" and comp in ( + "satm", + "slnd", + "sesp", + "sglc", + "srof", + "sice", + "socn", + "swav", + "siac", + ): + continue + thrds = case.get_value("NTHRDS_{}".format(comp_class)) + expect( + ninst is not None, + "Failed to get ninst for comp_class {}".format(comp_class), + ) + complist.append((comp_class.lower(), comp, thrds, ninst, config_dir)) + os.environ["COMP_{}".format(comp_class)] = comp + + compiler = case.get_value("COMPILER") + mpilib = case.get_value("MPILIB") + debug = case.get_value("DEBUG") + ninst_build = case.get_value("NINST_BUILD") + smp_value = case.get_value("SMP_VALUE") + clm_use_petsc = case.get_value("CLM_USE_PETSC") + mpaso_use_petsc = case.get_value("MPASO_USE_PETSC") + cism_use_trilinos = case.get_value("CISM_USE_TRILINOS") + mali_use_albany = case.get_value("MALI_USE_ALBANY") + mach = case.get_value("MACH") + + # Load some params into env + os.environ["BUILD_THREADED"] = stringify_bool(build_threaded) + cime_model = get_model() + + # TODO need some other method than a flag. 
+ if cime_model == "e3sm" and mach == "titan" and compiler == "pgiacc": + case.set_value("CAM_TARGET", "preqx_acc") + + # This is a timestamp for the build , not the same as the testid, + # and this case may not be a test anyway. For a production + # experiment there may be many builds of the same case. + lid = get_timestamp("%y%m%d-%H%M%S") + os.environ["LID"] = lid + + # Set the overall USE_PETSC variable to TRUE if any of the + # *_USE_PETSC variables are TRUE. + # For now, there is just the one CLM_USE_PETSC variable, but in + # the future there may be others -- so USE_PETSC will be true if + # ANY of those are true. + + use_petsc = bool(clm_use_petsc) or bool(mpaso_use_petsc) + case.set_value("USE_PETSC", use_petsc) + + # Set the overall USE_TRILINOS variable to TRUE if any of the + # *_USE_TRILINOS variables are TRUE. + # For now, there is just the one CISM_USE_TRILINOS variable, but in + # the future there may be others -- so USE_TRILINOS will be true if + # ANY of those are true. + + use_trilinos = False if cism_use_trilinos is None else cism_use_trilinos + case.set_value("USE_TRILINOS", use_trilinos) + + # Set the overall USE_ALBANY variable to TRUE if any of the + # *_USE_ALBANY variables are TRUE. + # For now, there is just the one MALI_USE_ALBANY variable, but in + # the future there may be others -- so USE_ALBANY will be true if + # ANY of those are true. 
+ + use_albany = stringify_bool(mali_use_albany) + case.set_value("USE_ALBANY", use_albany) + + # Load modules + case.load_env() + + sharedpath = _build_checks( + case, + build_threaded, + comp_interface, + debug, + compiler, + mpilib, + complist, + ninst_build, + smp_value, + model_only, + buildlist, + ) + + logs = [] + + if not model_only: + logs = _build_libraries( + case, + exeroot, + sharedpath, + caseroot, + cimeroot, + libroot, + lid, + compiler, + buildlist, + comp_interface, + complist, + ) + + if not sharedlib_only: + if config.build_model_use_cmake: + logs.extend( + _build_model_cmake( + exeroot, + complist, + lid, + buildlist, + comp_interface, + sharedpath, + separate_builds, + ninja, + dry_run, + case, + ) + ) + else: + os.environ["INSTALL_SHAREDPATH"] = os.path.join( + exeroot, sharedpath + ) # for MPAS makefile generators + logs.extend( + _build_model( + build_threaded, + exeroot, + incroot, + complist, + lid, + caseroot, + cimeroot, + compiler, + buildlist, + comp_interface, + ) + ) + + if not buildlist: + # in case component build scripts updated the xml files, update the case object + case.read_xml() + # Note, doing buildlists will never result in the system thinking the build is complete + + post_build( + case, + logs, + build_complete=not (buildlist or sharedlib_only), + save_build_provenance=save_build_provenance, + ) + + t2 = time.time() + + if not sharedlib_only: + logger.info("Total build time: {:f} seconds".format(t2 - t1)) + logger.info("MODEL BUILD HAS FINISHED SUCCESSFULLY") + + return True + + +############################################################################### +
+[docs] +def post_build(case, logs, build_complete=False, save_build_provenance=True): + ############################################################################### + for log in logs: + gzip_existing_file(log) + + if build_complete: + # must ensure there's an lid + lid = ( + os.environ["LID"] if "LID" in os.environ else get_timestamp("%y%m%d-%H%M%S") + ) + if save_build_provenance: + try: + Config.instance().save_build_provenance(case, lid=lid) + except AttributeError: + logger.debug("No handler for save_build_provenance was found") + # Set XML to indicate build complete + case.set_value("BUILD_COMPLETE", True) + case.set_value("BUILD_STATUS", 0) + if "SMP_VALUE" in os.environ: + case.set_value("SMP_BUILD", os.environ["SMP_VALUE"]) + + case.flush() + + lock_file("env_build.xml", caseroot=case.get_value("CASEROOT"))
+ + + +############################################################################### +
+[docs] +def case_build( + caseroot, + case, + sharedlib_only=False, + model_only=False, + buildlist=None, + save_build_provenance=True, + separate_builds=False, + ninja=False, + dry_run=False, +): + ############################################################################### + functor = lambda: _case_build_impl( + caseroot, + case, + sharedlib_only, + model_only, + buildlist, + save_build_provenance, + separate_builds, + ninja, + dry_run, + ) + cb = "case.build" + if sharedlib_only == True: + cb = cb + " (SHAREDLIB_BUILD)" + if model_only == True: + cb = cb + " (MODEL_BUILD)" + return run_and_log_case_status(functor, cb, caseroot=caseroot)
+ + + +############################################################################### +
+[docs] +def clean(case, cleanlist=None, clean_all=False, clean_depends=None): + ############################################################################### + functor = lambda: _clean_impl(case, cleanlist, clean_all, clean_depends) + return run_and_log_case_status( + functor, "build.clean", caseroot=case.get_value("CASEROOT") + )
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/buildlib.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/buildlib.html new file mode 100644 index 00000000000..6ba37f936a4 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/buildlib.html @@ -0,0 +1,257 @@ + + + + + + CIME.buildlib — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.buildlib

+"""
+common utilities for buildlib
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.case import Case
+from CIME.utils import (
+    parse_args_and_handle_standard_logging_options,
+    setup_standard_logging_options,
+    safe_copy,
+)
+from CIME.config import Config
+from CIME.build import get_standard_makefile_args
+from CIME.XML.files import Files
+
+import sys, os, argparse
+
+logger = logging.getLogger(__name__)
+
+###############################################################################
+
+[docs]
+def parse_input(argv):
+    ###############################################################################
+
+    parser = argparse.ArgumentParser()
+
+    setup_standard_logging_options(parser)
+
+    parser.add_argument("caseroot", default=os.getcwd(), help="Case directory")
+
+    parser.add_argument("libroot", help="root for creating the library")
+
+    parser.add_argument("bldroot", help="root for building the library")
+
+    args = parse_args_and_handle_standard_logging_options(argv, parser)
+
+    # Some compilers have trouble with long include paths; setting
+    # EXEROOT to the path relative to bldroot solves the problem.
+    # Doing it in the environment means we don't need to change all of
+    # the component buildlib scripts.
+    with Case(args.caseroot) as case:
+        os.environ["EXEROOT"] = os.path.relpath(case.get_value("EXEROOT"), args.bldroot)
+
+    return args.caseroot, args.libroot, args.bldroot
+ + + +############################################################################### +
+[docs]
+def build_cime_component_lib(case, compname, libroot, bldroot):
+    ###############################################################################
+
+    casebuild = case.get_value("CASEBUILD")
+    compclass = compname[1:]  # This is very hacky
+    comp_interface = case.get_value("COMP_INTERFACE")
+    confdir = os.path.join(casebuild, "{}conf".format(compname))
+
+    if not os.path.exists(confdir):
+        os.mkdir(confdir)
+
+    with open(os.path.join(confdir, "Filepath"), "w") as out:
+        out.write(
+            os.path.join(
+                case.get_value("CASEROOT"), "SourceMods", "src.{}\n".format(compname)
+            )
+            + "\n"
+        )
+        files = Files(comp_interface=comp_interface)
+        compdir = files.get_value(
+            "COMP_ROOT_DIR_" + compclass.upper(), {"component": compname}
+        )
+        if compname.startswith("d"):
+            out.write(os.path.join(compdir, "src") + "\n")
+            out.write(os.path.join(compdir) + "\n")
+        elif compname.startswith("x"):
+            out.write(os.path.join(compdir, "..", "xshare") + "\n")
+            out.write(os.path.join(compdir, "src") + "\n")
+        elif compname.startswith("s"):
+            out.write(os.path.join(compdir, "src") + "\n")
+
+    with open(os.path.join(confdir, "CIME_cppdefs"), "w") as out:
+        out.write("")
+
+    config = Config.instance()
+
+    # Build the component
+    if config.build_cime_component_lib:
+        safe_copy(os.path.join(confdir, "Filepath"), bldroot)
+        if os.path.exists(os.path.join(confdir, "CIME_cppdefs")):
+            safe_copy(os.path.join(confdir, "CIME_cppdefs"), bldroot)
+        elif os.path.exists(os.path.join(confdir, "CCSM_cppdefs")):
+            safe_copy(os.path.join(confdir, "CCSM_cppdefs"), bldroot)
+        run_gmake(case, compclass, compname, libroot, bldroot)
+ + + +############################################################################### +
+[docs] +def run_gmake(case, compclass, compname, libroot, bldroot, libname="", user_cppdefs=""): + ############################################################################### + gmake_args = get_standard_makefile_args(case) + + gmake_j = case.get_value("GMAKE_J") + gmake = case.get_value("GMAKE") + + complib = "" + if libname: + complib = os.path.join(libroot, "lib{}.a".format(libname)) + else: + complib = os.path.join(libroot, "lib{}.a".format(compclass)) + + makefile = os.path.join(case.get_value("CASETOOLS"), "Makefile") + + cmd = "{gmake} complib -j {gmake_j:d} COMP_CLASS={compclass} COMP_NAME={compname} COMPLIB={complib} {gmake_args} -f {makefile} -C {bldroot} ".format( + gmake=gmake, + gmake_j=gmake_j, + compclass=compclass, + compname=compname, + complib=complib, + gmake_args=gmake_args, + makefile=makefile, + bldroot=bldroot, + ) + if user_cppdefs: + cmd = cmd + "USER_CPPDEFS='{}'".format(user_cppdefs) + + stat, out, err = run_cmd(cmd, combine_output=True) + print(out) + if stat: + logger.info("buildlib stat={} err={}".format(stat, err)) + os.unlink(complib) + return stat
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/buildnml.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/buildnml.html new file mode 100644 index 00000000000..a62bcba6850 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/buildnml.html @@ -0,0 +1,291 @@ + + + + + + CIME.buildnml — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.buildnml

+"""
+common implementation for building namelist commands
+
+These are used by components/<model_type>/<component>/cime_config/buildnml
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.utils import (
+    expect,
+    parse_args_and_handle_standard_logging_options,
+    setup_standard_logging_options,
+)
+from CIME.utils import safe_copy
+import sys, os, argparse, glob
+
+logger = logging.getLogger(__name__)
+
+###############################################################################
+
+[docs] +def parse_input(argv): + ############################################################################### + + parser = argparse.ArgumentParser() + + setup_standard_logging_options(parser) + + parser.add_argument("caseroot", default=os.getcwd(), help="Case directory") + + args = parse_args_and_handle_standard_logging_options(argv, parser) + + return args.caseroot
+ + + +############################################################################### +# pylint: disable=unused-argument +
+[docs] +def build_xcpl_nml(case, caseroot, compname): + ############################################################################### + compclasses = case.get_values("COMP_CLASSES") + compclass = None + for compclass in compclasses: + if case.get_value("COMP_{}".format(compclass)) == compname: + break + expect( + compclass is not None, + "Could not identify compclass for compname {}".format(compname), + ) + rundir = case.get_value("RUNDIR") + comp_interface = case.get_value("COMP_INTERFACE") + + if comp_interface != "nuopc": + ninst = case.get_value("NINST_{}".format(compclass.upper())) + else: + ninst = case.get_value("NINST") + if not ninst: + ninst = 1 + + nx = case.get_value("{}_NX".format(compclass.upper())) + ny = case.get_value("{}_NY".format(compclass.upper())) + + if comp_interface != "nuopc": + if compname == "xrof": + flood_mode = case.get_value("XROF_FLOOD_MODE") + extras = [] + dtype = 1 + npes = 0 + length = 0 + if compname == "xatm": + if ny == 1: + dtype = 2 + extras = [ + ["24", "ncpl number of communications w/coupler per dat"], + ["0.0", "simul time proxy (secs): time between cpl comms"], + ] + elif compname == "xglc" or compname == "xice": + dtype = 2 + elif compname == "xlnd": + dtype = 11 + elif compname == "xocn": + dtype = 4 + elif compname == "xrof": + dtype = 11 + if flood_mode == "ACTIVE": + extras = [[".true.", "flood flag"]] + else: + extras = [[".false.", "flood flag"]] + + for i in range(1, ninst + 1): + # If only 1 file, name is 'compclass_in' + # otherwise files are 'compclass_in0001', 'compclass_in0002', etc + if ninst == 1: + filename = os.path.join(rundir, "{}_in".format(compname)) + else: + filename = os.path.join(rundir, "{}_in_{:04d}".format(compname, i)) + + with open(filename, "w") as infile: + infile.write("{:<20d} ! i-direction global dimension\n".format(nx)) + infile.write("{:<20d} ! j-direction global dimension\n".format(ny)) + if comp_interface != "nuopc": + infile.write( + "{:<20d} ! 
decomp_type 1=1d-by-lat, 2=1d-by-lon, 3=2d, 4=2d evensquare, 11=segmented\n".format( + dtype + ) + ) + infile.write("{:<20d} ! num of pes for i (type 3 only)\n".format(npes)) + infile.write( + "{:<20d} ! length of segments (type 4 only)\n".format(length) + ) + for extra in extras: + infile.write("{:<20s} ! {}\n".format(extra[0], extra[1]))
+ + + +############################################################################### +
+[docs] +def create_namelist_infile(case, user_nl_file, namelist_infile, infile_text=""): + ############################################################################### + lines_input = [] + if os.path.isfile(user_nl_file): + with open(user_nl_file, "r") as file_usernl: + lines_input = file_usernl.readlines() + else: + logger.warning( + "WARNING: No file {} found in case directory".format(user_nl_file) + ) + + lines_output = [] + lines_output.append("&comp_inparm \n") + if infile_text: + lines_output.append(infile_text) + logger.debug("file_infile {} ".format(infile_text)) + + for line in lines_input: + match1 = re.search(r"^[\&\/\!]", line) + match2 = re.search(r"\$([\w\_])+", line) + if match1 is None and match2 is not None: + line = case.get_resolved_value(line) + if match1 is None: + lines_output.append(line) + + lines_output.append("/ \n") + with open(namelist_infile, "w") as file_infile: + file_infile.write("\n".join(lines_output))
+ + + +
+[docs] +def copy_inputs_to_rundir(caseroot, compname, confdir, rundir, inst_string): + + if os.path.isdir(rundir): + filename = compname + "_in" + file_src = os.path.join(confdir, filename) + file_dest = os.path.join(rundir, filename) + if inst_string: + file_dest += inst_string + safe_copy(file_src, file_dest) + + for xmlfile in glob.glob(os.path.join(confdir, "*streams*.xml")): + casexml = os.path.join(caseroot, os.path.basename(xmlfile)) + if os.path.exists(casexml): + logger.info("Using {} for {} streams".format(casexml, compname)) + safe_copy(casexml, rundir) + else: + safe_copy(xmlfile, rundir)
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case.html new file mode 100644 index 00000000000..acfdf7e1d01 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case.html @@ -0,0 +1,2865 @@ + + + + + + CIME.case.case — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.case.case

+# -*- coding: utf-8 -*-
+"""
+Wrapper around all env XML for a case.
+
+All interaction with and between the module files in XML/ takes place
+through the Case module.
+"""
+from copy import deepcopy
+import sys
+import glob, os, shutil, math, time, hashlib, socket, getpass
+from CIME.XML.standard_module_setup import *
+
+# pylint: disable=import-error,redefined-builtin
+from CIME import utils
+from CIME.config import Config
+from CIME.utils import expect, get_cime_root, append_status
+from CIME.utils import convert_to_type, get_model, set_model
+from CIME.utils import get_project, get_charge_account, check_name
+from CIME.utils import get_current_commit, safe_copy, get_cime_default_driver
+from CIME.locked_files import LOCKED_DIR, lock_file
+from CIME.XML.machines import Machines
+from CIME.XML.pes import Pes
+from CIME.XML.files import Files
+from CIME.XML.testlist import Testlist
+from CIME.XML.component import Component
+from CIME.XML.compsets import Compsets
+from CIME.XML.grids import Grids
+from CIME.XML.batch import Batch
+from CIME.XML.workflow import Workflow
+from CIME.XML.pio import PIO
+from CIME.XML.archive import Archive
+from CIME.XML.env_test import EnvTest
+from CIME.XML.env_mach_specific import EnvMachSpecific
+from CIME.XML.env_case import EnvCase
+from CIME.XML.env_mach_pes import EnvMachPes
+from CIME.XML.env_build import EnvBuild
+from CIME.XML.env_run import EnvRun
+from CIME.XML.env_archive import EnvArchive
+from CIME.XML.env_batch import EnvBatch
+from CIME.XML.env_workflow import EnvWorkflow
+from CIME.XML.generic_xml import GenericXML
+from CIME.user_mod_support import apply_user_mods
+from CIME.aprun import get_aprun_cmd_for_case
+
+logger = logging.getLogger(__name__)
+
+config = Config.instance()
+
+
+
+[docs]
+class Case(object):
+    """
+    https://github.com/ESMCI/cime/wiki/Developers-Introduction
+    The Case class is the heart of the CIME Case Control system. All
+    interactions with a Case take place through this class. All of the
+    variables used to create and manipulate a case are defined in xml
+    files, and for every xml file there is a python class to interact
+    with that file.
+
+    XML files which are part of the CIME distribution and are meant to
+    be readonly with respect to a case are typically named
+    config_something.xml, and the corresponding python class is
+    Something and can be found in file CIME.XML.something.py. I'll
+    refer to these as the CIME config classes.
+
+    XML files which are part of a case, and thus are read/write to a
+    case, are typically named env_whatever.xml; the corresponding
+    python modules are CIME.XML.env_whatever.py and the classes are
+    EnvWhatever. I'll refer to these as the Case env classes.
+
+    The Case class holds an array of the Case env classes; in the
+    configure function and its supporting functions defined below,
+    the case object creates and manipulates the Case env classes
+    by reading and interpreting the CIME config classes.
+ + This class extends across multiple files, class members external to this file + are listed in the following imports + + """ + + from CIME.case.case_setup import case_setup + from CIME.case.case_clone import create_clone, _copy_user_modified_to_clone + from CIME.case.case_test import case_test + from CIME.case.case_submit import check_DA_settings, check_case, submit + from CIME.case.case_st_archive import ( + case_st_archive, + restore_from_archive, + archive_last_restarts, + test_st_archive, + test_env_archive, + ) + from CIME.case.case_run import case_run + from CIME.case.case_cmpgen_namelists import case_cmpgen_namelists + from CIME.case.check_lockedfiles import ( + check_lockedfile, + check_lockedfiles, + check_pelayouts_require_rebuild, + ) + from CIME.case.preview_namelists import create_dirs, create_namelists + from CIME.case.check_input_data import ( + check_all_input_data, + stage_refcase, + check_input_data, + ) + + def __init__(self, case_root=None, read_only=True, record=False, non_local=False): + + if case_root is None: + case_root = os.getcwd() + expect( + not os.path.isdir(case_root) + or os.path.isfile(os.path.join(case_root, "env_case.xml")), + "Directory {} does not appear to be a valid case directory".format( + case_root + ), + ) + + self._caseroot = case_root + logger.debug("Initializing Case.") + self._read_only_mode = True + self._force_read_only = read_only + self._primary_component = None + + self._env_entryid_files = [] + self._env_generic_files = [] + self._files = [] + self._comp_interface = None + self.gpu_enabled = False + self._non_local = non_local + self.read_xml() + + srcroot = self.get_value("SRCROOT") + + # Propagate `srcroot` to `GenericXML` to resolve $SRCROOT + if srcroot is not None: + utils.GLOBAL["SRCROOT"] = srcroot + + # srcroot may not be known yet, in the instance of creating + # a new case + customize_path = os.path.join(srcroot, "cime_config", "customize") + + config.load(customize_path) + + if record: + 
self.record_cmd() + + cimeroot = get_cime_root() + + # Insert tools path to support external code trying to import + # standard_script_setup + tools_path = os.path.join(cimeroot, "CIME", "Tools") + if tools_path not in sys.path: + sys.path.insert(0, tools_path) + + # Hold arbitary values. In create_newcase we may set values + # for xml files that haven't been created yet. We need a place + # to store them until we are ready to create the file. At file + # creation we get the values for those fields from this lookup + # table and then remove the entry. + self.lookups = {} + self.set_lookup_value("CIMEROOT", cimeroot) + self._cime_model = get_model() + self.set_lookup_value("MODEL", self._cime_model) + self._compsetname = None + self._gridname = None + self._pesfile = None + self._gridfile = None + self._components = [] + self._component_classes = [] + self._component_description = {} + self._is_env_loaded = False + self._loaded_envs = None + + # these are user_mods as defined in the compset + # Command Line user_mods are handled seperately + + # Derived attributes + self.thread_count = None + self.total_tasks = None + self.tasks_per_node = None + self.ngpus_per_node = 0 + self.num_nodes = None + self.spare_nodes = None + self.tasks_per_numa = None + self.cores_per_task = None + self.srun_binding = None + self.async_io = False + self.iotasks = 0 + + # check if case has been configured and if so initialize derived + if self.get_value("CASEROOT") is not None: + if not self._non_local: + mach = self.get_value("MACH") + extra_machdir = self.get_value("EXTRA_MACHDIR") + if extra_machdir: + machobj = Machines(machine=mach, extra_machines_dir=extra_machdir) + else: + machobj = Machines(machine=mach) + + # This check should only be done on systems with a common filesystem but separate login nodes (ncar) + if "NCAR_HOST" in os.environ: + probed_machine = machobj.probe_machine_name() + if probed_machine: + expect( + mach == probed_machine, + f"Current machine {probed_machine} 
does not match case machine {mach}.", + ) + + self.initialize_derived_attributes() + +
+[docs] + def get_baseline_dir(self): + baseline_root = self.get_value("BASELINE_ROOT") + + baseline_name = self.get_value("BASECMP_CASE") + + return os.path.join(baseline_root, baseline_name)
+ + +
+[docs] + def check_if_comp_var(self, vid): + for env_file in self._env_entryid_files: + new_vid, new_comp, iscompvar = env_file.check_if_comp_var(vid) + if iscompvar: + return new_vid, new_comp, iscompvar + + return vid, None, False
+ + +
+[docs] + def initialize_derived_attributes(self): + """ + These are derived variables which can be used in the config_* files + for variable substitution using the {{ var }} syntax + """ + set_model(self.get_value("MODEL")) + env_mach_pes = self.get_env("mach_pes") + env_mach_spec = self.get_env("mach_specific") + comp_classes = self.get_values("COMP_CLASSES") + max_mpitasks_per_node = self.get_value("MAX_MPITASKS_PER_NODE") + self.async_io = {} + asyncio = False + for comp in comp_classes: + self.async_io[comp] = self.get_value("PIO_ASYNC_INTERFACE", subgroup=comp) + if self.async_io[comp]: + asyncio = True + + self.iotasks = ( + self.get_value("PIO_ASYNCIO_NTASKS") + if self.get_value("PIO_ASYNCIO_NTASKS") + else 0 + ) + + self.thread_count = env_mach_pes.get_max_thread_count(comp_classes) + + mpi_attribs = { + "compiler": self.get_value("COMPILER"), + "mpilib": self.get_value("MPILIB"), + "threaded": self.get_build_threaded(), + } + + job = self.get_primary_job() + executable = env_mach_spec.get_mpirun(self, mpi_attribs, job, exe_only=True)[0] + if executable is not None and "aprun" in executable: + ( + _, + self.num_nodes, + self.total_tasks, + self.tasks_per_node, + self.thread_count, + ) = get_aprun_cmd_for_case(self, "e3sm.exe") + self.spare_nodes = env_mach_pes.get_spare_nodes(self.num_nodes) + self.num_nodes += self.spare_nodes + else: + self.total_tasks = env_mach_pes.get_total_tasks(comp_classes, asyncio) + self.tasks_per_node = env_mach_pes.get_tasks_per_node( + self.total_tasks, self.thread_count + ) + + self.num_nodes, self.spare_nodes = env_mach_pes.get_total_nodes( + self.total_tasks, self.thread_count + ) + self.num_nodes += self.spare_nodes + + logger.debug( + "total_tasks {} thread_count {}".format(self.total_tasks, self.thread_count) + ) + + max_gpus_per_node = self.get_value("MAX_GPUS_PER_NODE") + + if max_gpus_per_node: + self.ngpus_per_node = self.get_value("NGPUS_PER_NODE") + # update the maximum MPI tasks for a GPU node (could differ from 
a pure-CPU node) + if self.ngpus_per_node > 0: + max_mpitasks_per_node = self.get_value("MAX_CPUTASKS_PER_GPU_NODE") + + self.tasks_per_numa = int(math.ceil(self.tasks_per_node / 2.0)) + smt_factor = max( + 1, int(self.get_value("MAX_TASKS_PER_NODE") / max_mpitasks_per_node) + ) + + threads_per_node = self.tasks_per_node * self.thread_count + threads_per_core = ( + 1 if (threads_per_node <= max_mpitasks_per_node) else smt_factor + ) + self.cores_per_task = self.thread_count / threads_per_core + + os.environ["OMP_NUM_THREADS"] = str(self.thread_count) + + self.srun_binding = math.floor( + smt_factor * max_mpitasks_per_node / self.tasks_per_node + ) + self.srun_binding = max(1, int(self.srun_binding))
+ + + # Define __enter__ and __exit__ so that we can use this as a context manager + # and force a flush on exit. + def __enter__(self): + if not self._force_read_only: + self._read_only_mode = False + return self + + def __exit__(self, *_): + self.flush() + self._read_only_mode = True + return False + +
+[docs] + def read_xml(self): + for env_file in self._files: + expect( + not env_file.needsrewrite, + "Potential loss of unflushed changes in {}".format(env_file.filename), + ) + + self._env_entryid_files = [] + self._env_entryid_files.append( + EnvCase(self._caseroot, components=None, read_only=self._force_read_only) + ) + components = self._env_entryid_files[0].get_values("COMP_CLASSES") + self._env_entryid_files.append( + EnvRun( + self._caseroot, components=components, read_only=self._force_read_only + ) + ) + self._env_entryid_files.append( + EnvBuild( + self._caseroot, components=components, read_only=self._force_read_only + ) + ) + self._comp_interface = self._env_entryid_files[-1].get_value("COMP_INTERFACE") + + self._env_entryid_files.append( + EnvMachPes( + self._caseroot, + components=components, + read_only=self._force_read_only, + comp_interface=self._comp_interface, + ) + ) + self._env_entryid_files.append( + EnvBatch(self._caseroot, read_only=self._force_read_only) + ) + self._env_entryid_files.append( + EnvWorkflow(self._caseroot, read_only=self._force_read_only) + ) + + if os.path.isfile(os.path.join(self._caseroot, "env_test.xml")): + self._env_entryid_files.append( + EnvTest( + self._caseroot, + components=components, + read_only=self._force_read_only, + ) + ) + self._env_generic_files = [] + self._env_generic_files.append( + EnvMachSpecific( + self._caseroot, + read_only=self._force_read_only, + comp_interface=self._comp_interface, + ) + ) + self._env_generic_files.append( + EnvArchive(self._caseroot, read_only=self._force_read_only) + ) + self._files = self._env_entryid_files + self._env_generic_files
+ + +
+[docs] + def get_case_root(self): + """Returns the root directory for this case.""" + return self._caseroot
+ + +
+[docs] + def get_env(self, short_name, allow_missing=False): + full_name = "env_{}.xml".format(short_name) + for env_file in self._files: + if os.path.basename(env_file.filename) == full_name: + return env_file + if allow_missing: + return None + expect(False, "Could not find object for {} in case".format(full_name))
+ + +
+[docs] + def check_timestamps(self, short_name=None): + if short_name is not None: + env_file = self.get_env(short_name) + env_file.check_timestamp() + else: + for env_file in self._files: + env_file.check_timestamp()
+ + +
+[docs] + def copy(self, newcasename, newcaseroot, newcimeroot=None, newsrcroot=None): + newcase = deepcopy(self) + for env_file in newcase._files: # pylint: disable=protected-access + basename = os.path.basename(env_file.filename) + newfile = os.path.join(newcaseroot, basename) + env_file.change_file(newfile, copy=True) + + if newcimeroot is not None: + newcase.set_value("CIMEROOT", newcimeroot) + + if newsrcroot is not None: + newcase.set_value("SRCROOT", newsrcroot) + + newcase.set_value("CASE", newcasename) + newcase.set_value("CASEROOT", newcaseroot) + newcase.set_value("CONTINUE_RUN", "FALSE") + newcase.set_value("RESUBMIT", 0) + newcase.set_value("CASE_HASH", newcase.new_hash()) + + # Important, and subtle: Writability should NOT be copied because + # this allows the copy to be modified without needing a "with" statement + # which opens the door to tricky errors such as unflushed writes. + newcase._read_only_mode = True # pylint: disable=protected-access + + return newcase
+ + +
+[docs] + def flush(self, flushall=False): + if not os.path.isdir(self._caseroot): + # do not flush if caseroot wasnt created + return + + for env_file in self._files: + env_file.write(force_write=flushall)
+ + +
+[docs] + def get_values(self, item, attribute=None, resolved=True, subgroup=None): + for env_file in self._files: + # Wait and resolve in self rather than in env_file + results = env_file.get_values( + item, attribute, resolved=False, subgroup=subgroup + ) + if len(results) > 0: + new_results = [] + if resolved: + for result in results: + if isinstance(result, str): + result = self.get_resolved_value(result) + vtype = env_file.get_type_info(item) + if vtype is not None or vtype != "char": + result = convert_to_type(result, vtype, item) + + new_results.append(result) + + else: + new_results.append(result) + + else: + new_results = results + + return new_results + + # Return empty result + return []
+ + +
+[docs] + def get_value(self, item, attribute=None, resolved=True, subgroup=None): + if item == "GPU_ENABLED": + if not self.gpu_enabled: + if ( + self.get_value("GPU_TYPE") != "none" + and self.get_value("NGPUS_PER_NODE") > 0 + ): + self.gpu_enabled = True + return "true" if self.gpu_enabled else "false" + + result = None + for env_file in self._files: + # Wait and resolve in self rather than in env_file + result = env_file.get_value( + item, attribute, resolved=False, subgroup=subgroup + ) + + if result is not None: + if resolved and isinstance(result, str): + result = self.get_resolved_value(result) + vtype = env_file.get_type_info(item) + if vtype is not None and vtype != "char": + result = convert_to_type(result, vtype, item) + + return result + + # Return empty result + return result
+ + +
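The two lookup methods above share one pattern: each env_*.xml wrapper is queried in order, the first file that knows the variable wins, and non-"char" values are converted to their declared type. A minimal sketch of that layered lookup, using a plain stand-in class and illustrative variables rather than the real CIME env-file objects:

```python
# Sketch of Case.get_value's layered lookup: query each env file in order,
# return the first non-None hit, converting non-char types on the way out.
# EnvFile and the variables below are illustrative stand-ins, not CIME's API.

def convert_to_type(value, vtype):
    """Convert the string form stored in the XML to its declared type."""
    casts = {"integer": int, "real": float,
             "logical": lambda v: v.upper() == "TRUE"}
    return casts.get(vtype, str)(value)

class EnvFile:
    def __init__(self, values, types):
        self._values = values   # item -> raw string value
        self._types = types     # item -> declared type

    def get_value(self, item):
        return self._values.get(item)

    def get_type_info(self, item):
        return self._types.get(item)

def get_value(files, item):
    for env_file in files:
        result = env_file.get_value(item)
        if result is not None:
            vtype = env_file.get_type_info(item)
            if vtype is not None and vtype != "char":
                result = convert_to_type(result, vtype)
            return result
    return None

env_run = EnvFile({"STOP_N": "5", "CONTINUE_RUN": "FALSE"},
                  {"STOP_N": "integer", "CONTINUE_RUN": "logical"})
env_build = EnvFile({"COMPILER": "gnu"}, {"COMPILER": "char"})

assert get_value([env_run, env_build], "STOP_N") == 5
assert get_value([env_run, env_build], "CONTINUE_RUN") is False
assert get_value([env_run, env_build], "COMPILER") == "gnu"
```

The file order matters: a variable defined in an earlier file shadows any later definition, which is why `get_value` stops at the first non-None result.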
+[docs] + def get_record_fields(self, variable, field): + """get_record_fields gets individual requested field from an entry_id file + this routine is used only by xmlquery""" + # Empty result + result = [] + + for env_file in self._env_entryid_files: + # Wait and resolve in self rather than in env_file + logger.debug( + "(get_record_field) Searching in {}".format(env_file.__class__.__name__) + ) + if field == "varid": + roots = env_file.scan_children("entry") + else: + roots = env_file.get_nodes_by_id(variable) + + for root in roots: + if root is not None: + if field == "raw": + result.append(env_file.get_raw_record(root)) + elif field == "desc": + result.append(env_file.get_description(root)) + elif field == "varid": + result.append(env_file.get(root, "id")) + elif field == "group": + result.extend(env_file.get_groups(root)) + elif field == "valid_values": + # pylint: disable=protected-access + vv = env_file._get_valid_values(root) + if vv: + result.extend(vv) + elif field == "file": + result.append(env_file.filename) + + if not result: + for env_file in self._env_generic_files: + roots = env_file.scan_children(variable) + for root in roots: + if root is not None: + if field == "raw": + result.append(env_file.get_raw_record(root)) + elif field == "group": + result.extend(env_file.get_groups(root)) + elif field == "file": + result.append(env_file.filename) + + return list(set(result))
+ + +
+[docs] + def get_type_info(self, item): + result = None + for env_file in self._env_entryid_files: + result = env_file.get_type_info(item) + if result is not None: + return result + + return result
+ + +
+[docs] + def get_resolved_value(self, item, recurse=0, allow_unresolved_envvars=False): + num_unresolved = item.count("$") if item else 0 + recurse_limit = 10 + if num_unresolved > 0 and recurse < recurse_limit: + for env_file in self._env_entryid_files: + item = env_file.get_resolved_value( + item, allow_unresolved_envvars=allow_unresolved_envvars + ) + if "$" not in item: + return item + else: + item = self.get_resolved_value( + item, + recurse=recurse + 1, + allow_unresolved_envvars=allow_unresolved_envvars, + ) + + return item
+ + +
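The resolution logic above substitutes `$VAR` references and recurses on whatever remains, bailing out at a fixed recursion limit so unresolved or cyclic references cannot loop forever. A minimal sketch of that bounded substitution, with a plain dict standing in for the env_*.xml lookups (the variable names here are illustrative):

```python
import re

# Sketch of bounded, recursive "$VAR" resolution in the style of
# get_resolved_value: substitute what we can, recurse on the remainder,
# and stop once no "$" is left or the recursion limit (10 above) trips.

VALUES = {"CASEROOT": "/scratch/$USERNAME/mycase", "USERNAME": "alice"}

def resolve(item, recurse=0, limit=10):
    if item.count("$") == 0 or recurse >= limit:
        return item
    def sub(match):
        name = match.group(1)
        # leave unknown variables unresolved rather than erroring
        return VALUES.get(name, "$" + name)
    item = re.sub(r"\$(\w+)", sub, item)
    if "$" in item:
        item = resolve(item, recurse + 1, limit=limit)
    return item

# One level of nesting resolves across two passes:
assert resolve("$CASEROOT/run") == "/scratch/alice/mycase/run"
# Unknown variables survive intact; the limit guarantees termination:
assert resolve("$MISSING/data") == "$MISSING/data"
```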
+[docs] + def set_value( + self, + item, + value, + subgroup=None, + ignore_type=False, + allow_undefined=False, + return_file=False, + ): + """ + If a file has been defined, and the variable is in the file, + then that value will be set in the file object and the resolved value + is returned unless return_file is True, in which case (resolved_value, filename) + is returned where filename is the name of the modified file. + """ + expect( + not self._read_only_mode, + "Cannot modify case, read_only. " + "Case must be opened with read_only=False and can only be modified within a context manager", + ) + + if item == "CASEROOT": + self._caseroot = value + result = None + + for env_file in self._files: + result = env_file.set_value(item, value, subgroup, ignore_type) + if result is not None: + logger.debug("Will rewrite file {} {}".format(env_file.filename, item)) + return (result, env_file.filename) if return_file else result + + if len(self._files) == 1: + expect( + allow_undefined or result is not None, + "No variable {} found in file {}".format(item, self._files[0].filename), + ) + else: + expect( + allow_undefined or result is not None, + "No variable {} found in case".format(item), + )
+ + +
+[docs] + def set_valid_values(self, item, valid_values): + """ + Update or create a valid_values entry for item and populate it + """ + expect( + not self._read_only_mode, + "Cannot modify case, read_only. " + "Case must be opened with read_only=False and can only be modified within a context manager", + ) + + result = None + for env_file in self._env_entryid_files: + result = env_file.set_valid_values(item, valid_values) + if result is not None: + logger.debug("Will rewrite file {} {}".format(env_file.filename, item)) + return result
+ + +
+[docs] + def set_lookup_value(self, item, value): + if item in self.lookups and self.lookups[item] is not None: + logger.warning( + "Item {} already in lookups with value {}".format( + item, self.lookups[item] + ) + ) + else: + logger.debug("Setting in lookups: item {}, value {}".format(item, value)) + self.lookups[item] = value
+ + +
+[docs] + def clean_up_lookups(self, allow_undefined=False): + # put anything in the lookups table into existing env objects + for key, value in list(self.lookups.items()): + logger.debug("lookup key {} value {}".format(key, value)) + result = self.set_value(key, value, allow_undefined=allow_undefined) + if result is not None: + del self.lookups[key]
+ + + def _set_compset(self, compset_name, files): + """ + Loop through all the compset files and find the compset + specification file that matches the input 'compset_name'. + Note that the input compset name (i.e. compset_name) can be + either a longname or an alias. This will set various compset-related + info. + + Returns a tuple: (compset_alias, science_support) + (For a user-defined compset - i.e., a compset without an alias - these + return values will be None, [].) + """ + science_support = [] + compset_alias = None + components = files.get_components("COMPSETS_SPEC_FILE") + logger.debug( + " Possible components for COMPSETS_SPEC_FILE are {}".format(components) + ) + + self.set_lookup_value("COMP_INTERFACE", self._comp_interface) + if config.set_comp_root_dir_cpl: + if config.use_nems_comp_root_dir: + ufs_driver = os.environ.get("UFS_DRIVER") + attribute = None + if ufs_driver: + attribute = {"component": "nems"} + comp_root_dir_cpl = files.get_value( + "COMP_ROOT_DIR_CPL", attribute=attribute + ) + else: + comp_root_dir_cpl = files.get_value("COMP_ROOT_DIR_CPL") + + self.set_lookup_value("COMP_ROOT_DIR_CPL", comp_root_dir_cpl) + + # Loop through all of the files listed in COMPSETS_SPEC_FILE and find the file + # that has a match for either the alias or the longname in that order + for component in components: + + # Determine the compsets file for this component + compsets_filename = files.get_value( + "COMPSETS_SPEC_FILE", {"component": component} + ) + + # If the file exists, read it and see if there is a match for the compset alias or longname + if os.path.isfile(compsets_filename): + compsets = Compsets(compsets_filename) + match, compset_alias, science_support = compsets.get_compset_match( + name=compset_name + ) + if match is not None: + self._compsetname = match + logger.info("Compset longname is {}".format(match)) + logger.info( + "Compset specification file is {}".format(compsets_filename) + ) + break + + if 
compset_alias is None: + logger.info( + "Did not find an alias or longname compset match for {} ".format( + compset_name + ) + ) + self._compsetname = compset_name + + # Fill in compset name + self._compsetname, self._components = self.valid_compset( + self._compsetname, compset_alias, files + ) + + # if this is a valid compset longname there will be at least 7 components. + components = self.get_compset_components() + expect( + len(components) > 6, + "No compset alias {} found and this does not appear to be a compset longname.".format( + compset_name + ), + ) + + return compset_alias, science_support + +
+[docs] + def get_primary_component(self): + if self._primary_component is None: + self._primary_component = self._find_primary_component() + return self._primary_component
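The primary component is inferred from which component classes run prognostic models: a model name that is the data (D), dead (X), or stub (S) variant of its class counts as non-prognostic, and the combination of prognostic classes decides the answer. A simplified sketch of that heuristic (only a subset of the branches; the compsets below are illustrative):

```python
# Simplified primary-component heuristic: all-prognostic ATM/LND/OCN/ICE
# means "allactive", a lone prognostic class names the primary component,
# and all-stub/data falls back to the driver. This condenses the fuller
# branch ladder in this module (J/E/F/I/C/G/D/... compsets) to its core.

def primary_component(spec):
    """spec maps component class (e.g. 'ATM') to model name (e.g. 'CAM60')."""
    prog = {cls: name.upper() not in ("D" + cls, "X" + cls, "S" + cls)
            for cls, name in spec.items()}
    if prog["ATM"] and prog["LND"] and prog["OCN"] and prog["ICE"]:
        return "allactive"
    for cls in ("ATM", "LND", "OCN", "ICE"):
        if prog[cls]:
            return spec[cls]
    return "drv"

# Fully coupled: every class is prognostic.
assert primary_component({"ATM": "CAM60", "LND": "CLM50",
                          "OCN": "POP2", "ICE": "CICE"}) == "allactive"
# An "I"-style compset: only the land model is prognostic.
assert primary_component({"ATM": "DATM", "LND": "CLM50",
                          "OCN": "DOCN", "ICE": "SICE"}) == "CLM50"
# All data/stub/dead components: the driver is primary.
assert primary_component({"ATM": "SATM", "LND": "SLND",
                          "OCN": "XOCN", "ICE": "SICE"}) == "drv"
```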
+ + + def _find_primary_component(self): + """ + try to glean the primary component based on compset name + """ + progcomps = {} + spec = {} + primary_component = None + for comp in self._component_classes: + if comp == "CPL": + continue + spec[comp] = self.get_value("COMP_{}".format(comp)) + notprogcomps = ("D{}".format(comp), "X{}".format(comp), "S{}".format(comp)) + if spec[comp].upper() in notprogcomps: + progcomps[comp] = False + else: + progcomps[comp] = True + expect( + "ATM" in progcomps + and "LND" in progcomps + and "OCN" in progcomps + and "ICE" in progcomps, + " Not finding expected components in {}".format(self._component_classes), + ) + if ( + progcomps["ATM"] + and progcomps["LND"] + and progcomps["OCN"] + and progcomps["ICE"] + ): + primary_component = "allactive" + elif progcomps["LND"] and progcomps["OCN"] and progcomps["ICE"]: + # this is a "J" compset + primary_component = "allactive" + elif progcomps["ATM"] and progcomps["OCN"] and progcomps["ICE"]: + # this is a ufs s2s compset + primary_component = "allactive" + elif progcomps["ATM"]: + if "DOCN%SOM" in self._compsetname and progcomps["LND"]: + # This is an "E" compset + primary_component = "allactive" + else: + # This is an "F" or "Q" compset + primary_component = spec["ATM"] + elif progcomps["LND"]: + # This is an "I" compset + primary_component = spec["LND"] + elif progcomps["OCN"]: + # This is a "C" or "G" compset + primary_component = spec["OCN"] + elif progcomps["ICE"]: + # This is a "D" compset + primary_component = spec["ICE"] + elif "GLC" in progcomps and progcomps["GLC"]: + # This is a "TG" compset + primary_component = spec["GLC"] + elif progcomps["ROF"]: + # This is a "R" compset + primary_component = spec["ROF"] + elif progcomps["WAV"]: + # This is a "V" compset + primary_component = spec["WAV"] + else: + # This is "A", "X" or "S" + primary_component = "drv" + + return primary_component + + def _valid_compset_impl(self, compset_name, compset_alias, comp_classes, comp_hash): + 
"""Add stub models missing in <compset_name>, return full compset name. + <comp_classes> is a list of all supported component classes. + <comp_hash> is a dictionary where each key is a supported component + (e.g., datm) and the associated value is the index in <comp_classes> of + that component's class (e.g., 1 for atm). + >>> import os, shutil, tempfile + >>> workdir = tempfile.mkdtemp() + >>> caseroot = os.path.join(workdir, 'caseroot') # use non-existent caseroot to avoid error about not being a valid case directory in Case __init__ method + >>> Case(caseroot, read_only=False)._valid_compset_impl('2000_DATM%NYF_SLND_DICE%SSMI_DOCN%DOM_DROF%NYF_SGLC_SWAV', None, ['CPL', 'ATM', 'LND', 'ICE', 'OCN', 'ROF', 'GLC', 'WAV', 'ESP'], {'datm':1,'satm':1,'dlnd':2,'slnd':2,'dice':3,'sice':3,'docn':4,'socn':4,'drof':5,'srof':5,'sglc':6,'swav':7,'ww3':7,'sesp':8}) + ('2000_DATM%NYF_SLND_DICE%SSMI_DOCN%DOM_DROF%NYF_SGLC_SWAV_SESP', ['2000', 'DATM%NYF', 'SLND', 'DICE%SSMI', 'DOCN%DOM', 'DROF%NYF', 'SGLC', 'SWAV', 'SESP']) + >>> Case(caseroot, read_only=False)._valid_compset_impl('2000_DATM%NYF_SLND_DICE%SSMI_DOCN%DOM_DROF%NYF_SGLC_SWAV', None, ['CPL', 'ATM', 'LND', 'ICE', 'OCN', 'ROF', 'GLC', 'WAV', 'ESP'], {'datm':1,'satm':1,'dlnd':2,'slnd':2,'dice':3,'sice':3,'docn':4,'socn':4,'drof':5,'srof':5,'sglc':6,'swav':7,'ww3':7,'sesp':8}) + ('2000_DATM%NYF_SLND_DICE%SSMI_DOCN%DOM_DROF%NYF_SGLC_SWAV_SESP', ['2000', 'DATM%NYF', 'SLND', 'DICE%SSMI', 'DOCN%DOM', 'DROF%NYF', 'SGLC', 'SWAV', 'SESP']) + >>> Case(caseroot, read_only=False)._valid_compset_impl('atm:DATM%NYF_rof:DROF%NYF_scn:2000_ice:DICE%SSMI_ocn:DOCN%DOM', None, ['CPL', 'ATM', 'LND', 'ICE', 'OCN', 'ROF', 'GLC', 'WAV', 'ESP'], {'datm':1,'satm':1,'dlnd':2,'slnd':2,'dice':3,'sice':3,'docn':4,'socn':4,'drof':5,'srof':5,'sglc':6,'swav':7,'ww3':7,'sesp':8}) + ('2000_DATM%NYF_SLND_DICE%SSMI_DOCN%DOM_DROF%NYF_SGLC_SWAV_SESP', ['2000', 'DATM%NYF', 'SLND', 'DICE%SSMI', 'DOCN%DOM', 'DROF%NYF', 'SGLC', 'SWAV', 'SESP']) + >>> 
Case(caseroot, read_only=False)._valid_compset_impl('2000_DATM%NYF_DICE%SSMI_DOCN%DOM_DROF%NYF', None, ['CPL', 'ATM', 'LND', 'ICE', 'OCN', 'ROF', 'GLC', 'WAV', 'ESP'], {'datm':1,'satm':1,'dlnd':2,'slnd':2,'dice':3,'sice':3,'docn':4,'socn':4,'drof':5,'srof':5,'sglc':6,'swav':7,'ww3':7,'sesp':8}) + ('2000_DATM%NYF_SLND_DICE%SSMI_DOCN%DOM_DROF%NYF_SGLC_SWAV_SESP', ['2000', 'DATM%NYF', 'SLND', 'DICE%SSMI', 'DOCN%DOM', 'DROF%NYF', 'SGLC', 'SWAV', 'SESP']) + >>> Case(caseroot, read_only=False)._valid_compset_impl('2000_DICE%SSMI_DOCN%DOM_DATM%NYF_DROF%NYF', None, ['CPL', 'ATM', 'LND', 'ICE', 'OCN', 'ROF', 'GLC', 'WAV', 'ESP'], {'datm':1,'satm':1,'dlnd':2,'slnd':2,'dice':3,'sice':3,'docn':4,'socn':4,'drof':5,'srof':5,'sglc':6,'swav':7,'ww3':7,'sesp':8}) + ('2000_DATM%NYF_SLND_DICE%SSMI_DOCN%DOM_DROF%NYF_SGLC_SWAV_SESP', ['2000', 'DATM%NYF', 'SLND', 'DICE%SSMI', 'DOCN%DOM', 'DROF%NYF', 'SGLC', 'SWAV', 'SESP']) + >>> Case(caseroot, read_only=False)._valid_compset_impl('2000_DICE%SSMI_DOCN%DOM_DATM%NYF_DROF%NYF_TEST', None, ['CPL', 'ATM', 'LND', 'ICE', 'OCN', 'ROF', 'GLC', 'WAV', 'ESP'], {'datm':1,'satm':1,'dlnd':2,'slnd':2,'dice':3,'sice':3,'docn':4,'socn':4,'drof':5,'srof':5,'sglc':6,'swav':7,'ww3':7,'sesp':8}) + ('2000_DATM%NYF_SLND_DICE%SSMI_DOCN%DOM_DROF%NYF_SGLC_SWAV_SESP_TEST', ['2000', 'DATM%NYF', 'SLND', 'DICE%SSMI', 'DOCN%DOM', 'DROF%NYF', 'SGLC', 'SWAV', 'SESP']) + >>> Case(caseroot, read_only=False)._valid_compset_impl('1850_CAM60_CLM50%BGC-CROP_CICE_POP2%ECO%ABIO-DIC_MOSART_CISM2%NOEVOLVE_WW3_BGC%BDRD', None, ['CPL', 'ATM', 'LND', 'ICE', 'OCN', 'ROF', 'GLC', 'WAV', 'ESP'], {'datm':1,'satm':1, 'cam':1,'dlnd':2,'clm':2,'slnd':2,'cice':3,'dice':3,'sice':3,'pop':4,'docn':4,'socn':4,'mosart':5,'drof':5,'srof':5,'cism':6,'sglc':6,'ww':7,'swav':7,'ww3':7,'sesp':8}) + ('1850_CAM60_CLM50%BGC-CROP_CICE_POP2%ECO%ABIO-DIC_MOSART_CISM2%NOEVOLVE_WW3_SESP_BGC%BDRD', ['1850', 'CAM60', 'CLM50%BGC-CROP', 'CICE', 'POP2%ECO%ABIO-DIC', 'MOSART', 'CISM2%NOEVOLVE', 'WW3', 'SESP']) + 
>>> Case(caseroot, read_only=False)._valid_compset_impl('1850_CAM60_CLM50%BGC-CROP_CICE_POP2%ECO%ABIO-DIC_MOSART_CISM2%NOEVOLVE_WW3_BGC%BDRD_TEST', None, ['CPL', 'ATM', 'LND', 'ICE', 'OCN', 'ROF', 'GLC', 'WAV', 'IAC', 'ESP'], {'datm':1,'satm':1, 'cam':1,'dlnd':2,'clm':2,'slnd':2,'cice':3,'dice':3,'sice':3,'pop':4,'docn':4,'socn':4,'mosart':5,'drof':5,'srof':5,'cism':6,'sglc':6,'ww':7,'swav':7,'ww3':7,'sesp':8}) + ('1850_CAM60_CLM50%BGC-CROP_CICE_POP2%ECO%ABIO-DIC_MOSART_CISM2%NOEVOLVE_WW3_SIAC_SESP_BGC%BDRD_TEST', ['1850', 'CAM60', 'CLM50%BGC-CROP', 'CICE', 'POP2%ECO%ABIO-DIC', 'MOSART', 'CISM2%NOEVOLVE', 'WW3', 'SIAC', 'SESP']) + >>> Case(caseroot, read_only=False)._valid_compset_impl('1850_SATM_SLND_SICE_SOCN_SGLC_SWAV', 'S', ['CPL', 'ATM', 'LND', 'ICE', 'OCN', 'ROF', 'GLC', 'WAV', 'IAC', 'ESP'], {'datm':1,'satm':1, 'cam':1,'dlnd':2,'clm':2,'slnd':2,'cice':3,'dice':3,'sice':3,'pop':4,'docn':4,'socn':4,'mosart':5,'drof':5,'srof':5,'cism':6,'sglc':6,'ww':7,'swav':7,'ww3':7,'sesp':8}) + ('1850_SATM_SLND_SICE_SOCN_SROF_SGLC_SWAV_SIAC_SESP', ['1850', 'SATM', 'SLND', 'SICE', 'SOCN', 'SROF', 'SGLC', 'SWAV', 'SIAC', 'SESP']) + + >>> Case(caseroot, read_only=False)._valid_compset_impl('1850_SATM_SLND_SICE_SOCN_SGLC_SWAV', None, ['CPL', 'ATM', 'LND', 'ICE', 'OCN', 'ROF', 'GLC', 'WAV', 'IAC', 'ESP'], {'datm':1,'satm':1, 'cam':1,'dlnd':2,'clm':2,'slnd':2,'cice':3,'dice':3,'sice':3,'pop':4,'docn':4,'socn':4,'mosart':5,'drof':5,'srof':5,'cism':6,'sglc':6,'ww':7,'swav':7,'ww3':7,'sesp':8}) #doctest: +IGNORE_EXCEPTION_DETAIL + Traceback (most recent call last): + CIMEError: ERROR: Invalid compset name, 1850_SATM_SLND_SICE_SOCN_SGLC_SWAV, all stub components generated + >>> shutil.rmtree(workdir, ignore_errors=True) + """ + # Find the models declared in the compset + model_set = [None] * len(comp_classes) + components = compset_name.split("_") + noncomps = [] + allstubs = True + colonformat = ":" in compset_name + if colonformat: + # make sure that scn: is component[0] as 
expected + for i in range(1, len(components)): + if components[i].startswith("scn:"): + tmp = components[0] + components[0] = components[i] + components[i] = tmp + break + + model_set[0] = components[0][4:] + else: + model_set[0] = components[0] + + for model in components[1:]: + match = Case.__mod_match_re__.match(model.lower()) + expect(match is not None, "No model match for {}".format(model)) + mod_match = match.group(1) + # Check for noncomponent appends (BGC & TEST) + if mod_match in ("bgc", "test"): + noncomps.append(model) + elif ":" in mod_match: + comp_ind = comp_hash[mod_match[4:]] + model_set[comp_ind] = model + else: + expect(mod_match in comp_hash, "Unknown model type, {}".format(model)) + comp_ind = comp_hash[mod_match] + model_set[comp_ind] = model + + # Fill in missing components with stubs + for comp_ind in range(1, len(model_set)): + if model_set[comp_ind] is None: + comp_class = comp_classes[comp_ind] + stub = "S" + comp_class + logger.info("Automatically adding {} to compset".format(stub)) + model_set[comp_ind] = stub + elif ":" in model_set[comp_ind]: + model_set[comp_ind] = model_set[comp_ind][4:] + + if model_set[comp_ind][0] != "S": + allstubs = False + + expect( + (compset_alias is not None) or (not allstubs), + "Invalid compset name, {}, all stub components generated".format( + compset_name + ), + ) + # Return the completed compset + compsetname = "_".join(model_set) + for noncomp in noncomps: + compsetname = compsetname + "_" + noncomp + return compsetname, model_set + + # RE to match component type name without optional piece (stuff after %). + # Drop any trailing digits (e.g., the 60 in CAM60) to ensure match + # Note, this will also drop trailing digits such as in ww3 but since it + # is handled consistently, this should not affect functionality. + # Note: interstitial digits are included (e.g., in FV3GFS). + __mod_match_re__ = re.compile(r"([^%]*[^0-9%]+)") + +
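The behavior of `__mod_match_re__` is easier to see with a few concrete inputs: the pattern stops at the first `%`, then backtracks off any trailing digits, so `cam60`, `clm50%bgc-crop`, and plain `cam` all normalize to the same `comp_hash` key, while interstitial digits survive. A quick demonstration using the same pattern:

```python
import re

# The component-name normalizer used to build comp_hash keys: strip the
# optional "%..." modifier and trailing digits, keeping interstitial ones.
# Trailing digits in names like ww3 are also dropped (ww3 -> ww), which is
# why comp_hash carries both 'ww' and 'ww3' entries in the doctests above.

mod_match_re = re.compile(r"([^%]*[^0-9%]+)")

assert mod_match_re.match("cam60").group(1) == "cam"
assert mod_match_re.match("clm50%bgc-crop").group(1) == "clm"
assert mod_match_re.match("datm%nyf").group(1) == "datm"
assert mod_match_re.match("ww3").group(1) == "ww"
assert mod_match_re.match("fv3gfs").group(1) == "fv3gfs"  # interstitial digits kept
```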
+[docs] + def valid_compset(self, compset_name, compset_alias, files): + """Add stub models missing in <compset_name>, return full compset name. + <files> is used to collect set of all supported components. + """ + # First, create hash of model names + # A note about indexing. Relevant component classes start at 1 + # because we ignore CPL for finding model components. + # Model components would normally start at zero but since we are + # dealing with a compset, 0 is reserved for the time field + drv_config_file = files.get_value("CONFIG_CPL_FILE") + drv_comp = Component(drv_config_file, "CPL") + comp_classes = drv_comp.get_valid_model_components() + comp_hash = {} # Hash model name to component class index + for comp_ind in range(1, len(comp_classes)): + comp = comp_classes[comp_ind] + # Find list of models for component class + # List can be in different locations, check CONFIG_XXX_FILE + node_name = "CONFIG_{}_FILE".format(comp) + models = files.get_components(node_name) + if (models is None) or (None in models): + # Backup, check COMP_ROOT_DIR_XXX + node_name = "COMP_ROOT_DIR_" + comp + models = files.get_components(node_name) + + expect( + (models is not None) and (None not in models), + "Unable to find list of supported components", + ) + + for model in models: + mod_match = Case.__mod_match_re__.match(model.lower()).group(1) + comp_hash[mod_match] = comp_ind + + return self._valid_compset_impl( + compset_name, compset_alias, comp_classes, comp_hash + )
+ + + def _set_info_from_primary_component(self, files, pesfile=None): + """ + Sets file and directory paths that depend on the primary component of + this compset. + + Assumes that self._primary_component has already been set. + """ + component = self.get_primary_component() + + compset_spec_file = files.get_value( + "COMPSETS_SPEC_FILE", {"component": component}, resolved=False + ) + + self.set_lookup_value("COMPSETS_SPEC_FILE", compset_spec_file) + if pesfile is None: + self._pesfile = files.get_value("PES_SPEC_FILE", {"component": component}) + pesfile_unresolved = files.get_value( + "PES_SPEC_FILE", {"component": component}, resolved=False + ) + logger.info("Pes specification file is {}".format(self._pesfile)) + else: + self._pesfile = pesfile + pesfile_unresolved = pesfile + expect( + self._pesfile is not None, + "No pesfile found for component {}".format(component), + ) + + self.set_lookup_value("PES_SPEC_FILE", pesfile_unresolved) + + tests_filename = files.get_value( + "TESTS_SPEC_FILE", {"component": component}, resolved=False + ) + tests_mods_dir = files.get_value( + "TESTS_MODS_DIR", {"component": component}, resolved=False + ) + user_mods_dir = files.get_value( + "USER_MODS_DIR", {"component": component}, resolved=False + ) + self.set_lookup_value("TESTS_SPEC_FILE", tests_filename) + self.set_lookup_value("TESTS_MODS_DIR", tests_mods_dir) + self.set_lookup_value("USER_MODS_DIR", user_mods_dir) + +
+[docs] + def get_compset_components(self): + # If we are doing a create_clone, then self._compsetname is not set yet + components = [] + compset = self.get_value("COMPSET") + if compset is None: + compset = self._compsetname + expect(compset is not None, "compset is not set") + # the first element is always the date operator - skip it + elements = compset.split("_")[1:] # pylint: disable=maybe-no-member + for element in elements: + if ":" in element: + element = element[4:] + # ignore the possible BGC or TEST modifier + if element.startswith("BGC%") or element.startswith("TEST"): + continue + else: + element_component = element.split("%")[0].lower() + if ( + "ww" not in element_component + and "fv3" not in element_component + and "cice" not in element_component + ): + element_component = re.sub(r"[0-9]*", "", element_component) + components.append(element_component) + return components
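The parsing above can be reduced to a short standalone function: drop the leading time field, skip `BGC%`/`TEST` modifiers, strip the `%...` option and any digits except in the ww/fv3/cice families. A simplified sketch (the compset longname below is illustrative):

```python
import re

# Simplified version of get_compset_components: split a compset longname
# into lower-cased component names, dropping the time field, modifiers,
# "%..." options, and digits (kept only for ww/fv3/cice-family names).

def compset_components(compset):
    components = []
    for element in compset.split("_")[1:]:
        if element.startswith("BGC%") or element.startswith("TEST"):
            continue
        name = element.split("%")[0].lower()
        if not any(k in name for k in ("ww", "fv3", "cice")):
            name = re.sub(r"[0-9]*", "", name)
        components.append(name)
    return components

assert compset_components(
    "1850_CAM60_CLM50%BGC-CROP_CICE_POP2%ECO_MOSART_CISM2%NOEVOLVE_WW3_BGC%BDRD"
) == ["cam", "clm", "cice", "pop", "mosart", "cism", "ww3"]
```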
+ + + def __iter__(self): + for entryid_file in self._env_entryid_files: + for key, val in entryid_file: + if isinstance(val, str) and "$" in val: + yield key, self.get_resolved_value(val) + else: + yield key, val + +
+[docs] + def set_comp_classes(self, comp_classes): + self._component_classes = comp_classes + for env_file in self._env_entryid_files: + env_file.set_components(comp_classes)
+ + + def _get_component_config_data(self, files): + # attributes used for multi valued defaults + # attlist is a dictionary used to determine the value element that has the most matches + attlist = { + "compset": self._compsetname, + "grid": self._gridname, + "cime_model": self._cime_model, + } + + # Determine list of component classes that this coupler/driver knows how + # to deal with. This list follows the same order as compset longnames follow. + + # Add the group and elements for the config_files.xml + for env_file in self._env_entryid_files: + env_file.add_elements_by_group(files, attlist) + + drv_config_file = files.get_value("CONFIG_CPL_FILE") + drv_comp = Component(drv_config_file, "CPL") + for env_file in self._env_entryid_files: + env_file.add_elements_by_group(drv_comp, attributes=attlist) + + drv_config_file_model_specific = files.get_value( + "CONFIG_CPL_FILE_MODEL_SPECIFIC" + ) + expect( + os.path.isfile(drv_config_file_model_specific), + "No {} specific file found for driver {}".format( + get_model(), self._comp_interface + ), + ) + drv_comp_model_specific = Component(drv_config_file_model_specific, "CPL") + + self._component_description[ + "forcing" + ] = drv_comp_model_specific.get_forcing_description(self._compsetname) + logger.info( + "Compset forcing is {}".format(self._component_description["forcing"]) + ) + self._component_description["CPL"] = drv_comp_model_specific.get_description( + self._compsetname + ) + if len(self._component_description["CPL"]) > 0: + logger.info("Com forcing is {}".format(self._component_description["CPL"])) + for env_file in self._env_entryid_files: + env_file.add_elements_by_group(drv_comp_model_specific, attributes=attlist) + + self.clean_up_lookups(allow_undefined=True) + + # loop over all elements of both component_classes and components - and get config_component_file for + # for each component + self.set_comp_classes(drv_comp.get_valid_model_components()) + + # will need a change here for new cpl components + 
root_dir_node_name = "COMP_ROOT_DIR_CPL" + comp_root_dir = files.get_value( + root_dir_node_name, {"component": self._comp_interface}, resolved=False + ) + + if comp_root_dir is not None: + self.set_value(root_dir_node_name, comp_root_dir) + + for i in range(1, len(self._component_classes)): + comp_class = self._component_classes[i] + comp_name = self._components[i - 1] + if ":" in comp_name: + comp_name = comp_name[4:] + root_dir_node_name = "COMP_ROOT_DIR_" + comp_class + node_name = "CONFIG_" + comp_class + "_FILE" + compatt = {"component": comp_name} + comp_root_dir = files.get_value(root_dir_node_name, compatt, resolved=False) + if comp_root_dir is not None: + self.set_value(root_dir_node_name, comp_root_dir) + + # Add the group and elements for the config_files.xml + + comp_config_file = files.get_value(node_name, compatt, resolved=False) + expect( + comp_config_file is not None, + "No component {} found for class {}".format(comp_name, comp_class), + ) + self.set_value(node_name, comp_config_file) + comp_config_file = files.get_value(node_name, compatt) + + expect( + comp_config_file is not None and os.path.isfile(comp_config_file), + "Config file {} for component {} not found.".format( + comp_config_file, comp_name + ), + ) + compobj = Component(comp_config_file, comp_class) + # For files following version 3 schema this also checks the compsetname validity + + self._component_description[comp_class] = compobj.get_description( + self._compsetname + ) + expect( + self._component_description[comp_class] is not None, + "No description found in file {} for component {} in comp_class {}".format( + comp_config_file, comp_name, comp_class + ), + ) + logger.info( + "{} component is {}".format( + comp_class, self._component_description[comp_class] + ) + ) + for env_file in self._env_entryid_files: + env_file.add_elements_by_group(compobj, attributes=attlist) + self.clean_up_lookups(allow_undefined=self._comp_interface == "nuopc") + + def _setup_mach_pes(self, pecount, 
multi_driver, ninst, machine_name, mpilib): + # -------------------------------------------- + # pe layout + # -------------------------------------------- + mach_pes_obj = None + # self._pesfile may already be env_mach_pes.xml if so we can just return + gfile = GenericXML(infile=self._pesfile) + ftype = gfile.get_id() + expect( + ftype == "env_mach_pes.xml" or ftype == "config_pes", + " Do not recognize {} as a valid CIME pes file {}".format( + self._pesfile, ftype + ), + ) + if ftype == "env_mach_pes.xml": + new_mach_pes_obj = EnvMachPes( + infile=self._pesfile, + components=self._component_classes, + comp_interface=self._comp_interface, + ) + self.update_env(new_mach_pes_obj, "mach_pes", blow_away=True) + return new_mach_pes_obj.get_value("TOTALPES") + + pesobj = Pes(self._pesfile) + + match1 = re.match("(.+)x([0-9]+)", "" if pecount is None else pecount) + match2 = re.match("([0-9]+)", "" if pecount is None else pecount) + + pes_ntasks = {} + pes_nthrds = {} + pes_rootpe = {} + pes_pstrid = {} + other = {} + comment = None + force_tasks = None + force_thrds = None + if match1: + opti_tasks = match1.group(1) + if opti_tasks.isdigit(): + force_tasks = int(opti_tasks) + else: + pes_ntasks = pesobj.find_pes_layout( + self._gridname, + self._compsetname, + machine_name, + pesize_opts=opti_tasks, + mpilib=mpilib, + )[0] + force_thrds = int(match1.group(2)) + elif match2: + force_tasks = int(match2.group(1)) + pes_nthrds = pesobj.find_pes_layout( + self._gridname, self._compsetname, machine_name, mpilib=mpilib + )[1] + else: + ( + pes_ntasks, + pes_nthrds, + pes_rootpe, + pes_pstrid, + other, + comment, + ) = pesobj.find_pes_layout( + self._gridname, + self._compsetname, + machine_name, + pesize_opts=pecount, + mpilib=mpilib, + ) + + if match1 or match2: + for component_class in self._component_classes: + if force_tasks is not None: + string_ = "NTASKS_" + component_class + pes_ntasks[string_] = force_tasks + + if force_thrds is not None: + string_ = "NTHRDS_" + 
component_class + pes_nthrds[string_] = force_thrds + + # Always default to zero rootpe if user forced procs and or threads + string_ = "ROOTPE_" + component_class + pes_rootpe[string_] = 0 + + mach_pes_obj = self.get_env("mach_pes") + mach_pes_obj.add_comment(comment) + + if other is not None: + logger.info("setting additional fields from config_pes: {}".format(other)) + for key, value in list(other.items()): + self.set_value(key, value) + + totaltasks = [] + for comp_class in self._component_classes: + ntasks_str = "NTASKS_{}".format(comp_class) + nthrds_str = "NTHRDS_{}".format(comp_class) + rootpe_str = "ROOTPE_{}".format(comp_class) + pstrid_str = "PSTRID_{}".format(comp_class) + + ntasks = pes_ntasks[ntasks_str] if ntasks_str in pes_ntasks else 1 + nthrds = pes_nthrds[nthrds_str] if nthrds_str in pes_nthrds else 1 + rootpe = pes_rootpe[rootpe_str] if rootpe_str in pes_rootpe else 0 + pstrid = pes_pstrid[pstrid_str] if pstrid_str in pes_pstrid else 1 + + totaltasks.append((ntasks + rootpe) * nthrds) + mach_pes_obj.set_value(ntasks_str, ntasks) + mach_pes_obj.set_value(nthrds_str, nthrds) + mach_pes_obj.set_value(rootpe_str, rootpe) + mach_pes_obj.set_value(pstrid_str, pstrid) + + # Make sure that every component has been accounted for + # set, nthrds and ntasks to 1 otherwise. Also set the ninst values here. + for compclass in self._component_classes: + key = "NINST_{}".format(compclass) + if compclass == "CPL": + continue + mach_pes_obj.set_value(key, ninst) + + key = "NTASKS_{}".format(compclass) + if key not in pes_ntasks: + mach_pes_obj.set_value(key, 1) + + key = "NTHRDS_{}".format(compclass) + if key not in pes_nthrds: + mach_pes_obj.set_value(key, 1) + + if multi_driver: + mach_pes_obj.set_value("MULTI_DRIVER", True) + +
+[docs] + def configure( + self, + compset_name, + grid_name, + machine_name=None, + project=None, + pecount=None, + compiler=None, + mpilib=None, + pesfile=None, + gridfile=None, + multi_driver=False, + ninst=1, + test=False, + walltime=None, + queue=None, + output_root=None, + run_unsupported=False, + answer=None, + input_dir=None, + driver=None, + workflowid="default", + non_local=False, + extra_machines_dir=None, + case_group=None, + ngpus_per_node=0, + gpu_type=None, + gpu_offload=None, + ): + + expect( + check_name(compset_name, additional_chars="."), + "Invalid compset name {}".format(compset_name), + ) + + self._comp_interface = driver + # -------------------------------------------- + # compset, pesfile, and compset components + # -------------------------------------------- + files = Files(comp_interface=self._comp_interface) + + # -------------------------------------------- + # find and/or fill out compset name + # -------------------------------------------- + + compset_alias, science_support = self._set_compset(compset_name, files) + + self._components = self.get_compset_components() + + # -------------------------------------------- + # grid + # -------------------------------------------- + grids = Grids(gridfile, comp_interface=driver) + + gridinfo = grids.get_grid_info( + name=grid_name, compset=self._compsetname, driver=self._comp_interface + ) + self._gridname = gridinfo["GRID"] + for key, value in list(gridinfo.items()): + logger.debug("Set grid {} {}".format(key, value)) + self.set_lookup_value(key, value) + + # -------------------------------------------- + # component config data + # -------------------------------------------- + + self._get_component_config_data(files) + + # This needs to be called after self.set_comp_classes, which is called + # from self._get_component_config_data + self._primary_component = self.get_primary_component() + + self._set_info_from_primary_component(files, pesfile=pesfile) + + 
self.clean_up_lookups(allow_undefined=True) + + self.get_compset_var_settings(files) + + self.clean_up_lookups(allow_undefined=True) + + # -------------------------------------------- + # machine + # -------------------------------------------- + # set machine values in env_xxx files + if extra_machines_dir: + self.set_value("EXTRA_MACHDIR", extra_machines_dir) + machobj = Machines(machine=machine_name, extra_machines_dir=extra_machines_dir) + probed_machine = machobj.probe_machine_name() + machine_name = machobj.get_machine_name() + self.set_value("MACH", machine_name) + if probed_machine != machine_name and probed_machine is not None: + logger.warning( + "WARNING: User-selected machine '{}' does not match probed machine '{}'".format( + machine_name, probed_machine + ) + ) + else: + logger.info("Machine is {}".format(machine_name)) + + nodenames = machobj.get_node_names() + nodenames = [ + x + for x in nodenames + if "_system" not in x + and "_variables" not in x + and "mpirun" not in x + and "COMPILER" not in x + and "MPILIB" not in x + and "MAX_MPITASKS_PER_NODE" not in x + and "MAX_TASKS_PER_NODE" not in x + and "MAX_CPUTASKS_PER_GPU_NODE" not in x + and "MAX_GPUS_PER_NODE" not in x + ] + + for nodename in nodenames: + value = machobj.get_value(nodename, resolved=False) + if value: + type_str = self.get_type_info(nodename) + if type_str is not None: + logger.debug("machine nodename {} value {}".format(nodename, value)) + self.set_value(nodename, convert_to_type(value, type_str, nodename)) + + if compiler is None: + compiler = machobj.get_default_compiler() + else: + expect( + machobj.is_valid_compiler(compiler), + "compiler {} is not supported on machine {}".format( + compiler, machine_name + ), + ) + + self.set_value("COMPILER", compiler) + + if mpilib is None: + mpilib = machobj.get_default_MPIlib({"compiler": compiler}) + else: + expect( + machobj.is_valid_MPIlib(mpilib, {"compiler": compiler}), + "MPIlib {} is not supported on machine {}".format(mpilib, 
machine_name), + ) + self.set_value("MPILIB", mpilib) + for name in ( + "MAX_TASKS_PER_NODE", + "MAX_MPITASKS_PER_NODE", + "MAX_CPUTASKS_PER_GPU_NODE", + "MAX_GPUS_PER_NODE", + ): + dmax = machobj.get_value(name, {"compiler": compiler}) + if not dmax: + dmax = machobj.get_value(name) + if dmax: + self.set_value(name, dmax) + elif name == "MAX_CPUTASKS_PER_GPU_NODE": + logger.debug( + "Variable {} not defined for machine {} and compiler {}".format( + name, machine_name, compiler + ) + ) + elif name == "MAX_GPUS_PER_NODE": + logger.debug( + "Variable {} not defined for machine {} and compiler {}".format( + name, machine_name, compiler + ) + ) + else: + logger.warning( + "Variable {} not defined for machine {} and compiler {}".format( + name, machine_name, compiler + ) + ) + + machdir = machobj.get_machines_dir() + self.set_value("MACHDIR", machdir) + + # Create env_mach_specific settings from machine info. + env_mach_specific_obj = self.get_env("mach_specific") + env_mach_specific_obj.populate( + machobj, + attributes={ + "mpilib": mpilib, + "compiler": compiler, + "threaded": self.get_build_threaded(), + }, + ) + + self._setup_mach_pes(pecount, multi_driver, ninst, machine_name, mpilib) + + if multi_driver and int(ninst) > 1: + logger.info(" Driver/Coupler has %s instances" % ninst) + + # -------------------------------------------- + # archiving system + # -------------------------------------------- + env_archive = self.get_env("archive") + infile_node = files.get_child("entry", {"id": "ARCHIVE_SPEC_FILE"}) + infile = files.get_default_value(infile_node) + infile = self.get_resolved_value(infile) + logger.debug("archive defaults located in {}".format(infile)) + archive = Archive(infile=infile, files=files) + archive.setup(env_archive, self._components, files=files) + + self.set_value("COMPSET", self._compsetname) + + self._set_pio_xml() + logger.info(" Compset is: {} ".format(self._compsetname)) + logger.info(" Grid is: {} ".format(self._gridname)) + logger.info(" 
Components in compset are: {} ".format(self._components)) + + if not test and not run_unsupported and self._cime_model == "cesm": + if grid_name in science_support: + logger.info( + "\nThis is a CESM scientifically supported compset at this resolution.\n" + ) + else: + self._check_testlists(compset_alias, grid_name, files) + + self.set_value("REALUSER", os.environ["USER"]) + + # Set project id + if project is None: + project = get_project(machobj) + if project is not None: + self.set_value("PROJECT", project) + elif machobj.get_value("PROJECT_REQUIRED"): + expect(project is not None, "PROJECT_REQUIRED is true but no project found") + # Get charge_account id if it exists + charge_account = get_charge_account(machobj, project) + if charge_account is not None: + self.set_value("CHARGE_ACCOUNT", charge_account) + + # Resolve the CIME_OUTPUT_ROOT variable, other than this + # we don't want to resolve variables until we need them + if output_root is None: + output_root = self.get_value("CIME_OUTPUT_ROOT") + output_root = os.path.abspath(output_root) + self.set_value("CIME_OUTPUT_ROOT", output_root) + if non_local: + self.set_value( + "EXEROOT", os.path.join(output_root, self.get_value("CASE"), "bld") + ) + self.set_value( + "RUNDIR", os.path.join(output_root, self.get_value("CASE"), "run") + ) + self.set_value("NONLOCAL", True) + + # Overwriting an existing exeroot or rundir can cause problems + exeroot = self.get_value("EXEROOT") + rundir = self.get_value("RUNDIR") + for wdir in (exeroot, rundir): + logging.debug("wdir is {}".format(wdir)) + if os.path.exists(wdir): + expect( + not test, "Directory {} already exists, aborting test".format(wdir) + ) + if answer is None: + response = input( + "\nDirectory {} already exists, (r)eplace, (a)bort, or (u)se existing?".format( + wdir + ) + ) + else: + response = answer + + if response.startswith("r"): + shutil.rmtree(wdir) + else: + expect(response.startswith("u"), "Aborting by user request") + + # miscellaneous settings + if 
self.get_value("RUN_TYPE") == "hybrid": + self.set_value("GET_REFCASE", True) + + if case_group: + self.set_value("CASE_GROUP", case_group) + + # Turn on short term archiving as cesm default setting + model = get_model() + self.set_model_version(model) + if config.default_short_term_archiving and not test: + self.set_value("DOUT_S", True) + self.set_value("TIMER_LEVEL", 4) + + if test: + self.set_value("TEST", True) + + # ---------------------------------------------------------------------------------------------------------- + # Sanity check for a GPU run: + # 1. GPU_TYPE and GPU_OFFLOAD must both be defined to use GPUS + # 2. if ngpus_per_node argument is larger than the value of MAX_GPUS_PER_NODE, the NGPUS_PER_NODE + # XML variable in the env_mach_pes.xml file would be set to MAX_GPUS_PER_NODE automatically. + # 3. if ngpus-per-node argument is equal to 0, it will be updated to 1 automatically. + # ---------------------------------------------------------------------------------------------------------- + max_gpus_per_node = self.get_value("MAX_GPUS_PER_NODE") + if gpu_type and str(gpu_type).lower() != "none": + expect( + max_gpus_per_node, + f"GPUS are not defined for machine={machine_name} and compiler={compiler}", + ) + expect( + gpu_offload, + "Both gpu-type and gpu-offload must be defined if either is defined", + ) + expect( + compiler in ["nvhpc", "cray"], + f"Only nvhpc and cray compilers are expected for a GPU run; the user given compiler is {compiler}, ", + ) + valid_gpu_type = self.get_value("GPU_TYPE").split(",") + valid_gpu_type.remove("none") + expect( + gpu_type in valid_gpu_type, + f"Unsupported GPU type is given: {gpu_type} ; valid values are {valid_gpu_type}", + ) + valid_gpu_offload = self.get_value("GPU_OFFLOAD").split(",") + valid_gpu_offload.remove("none") + expect( + gpu_offload in valid_gpu_offload, + f"Unsupported GPU programming model is given: {gpu_offload} ; valid values are {valid_gpu_offload}", + ) + self.gpu_enabled = True + if 
ngpus_per_node >= 0: + self.set_value( + "NGPUS_PER_NODE", + max(1, ngpus_per_node) + if ngpus_per_node <= max_gpus_per_node + else max_gpus_per_node, + ) + elif gpu_offload and str(gpu_offload).lower() != "none": + expect( + False, + "Both gpu-type and gpu-offload must be defined if either is defined", + ) + elif ngpus_per_node != 0: + expect( + False, + f"ngpus_per_node is expected to be 0 for a pure CPU run ; {ngpus_per_node} is provided instead ;", + ) + + # Set these two GPU XML variables here to overwrite the default values + # Only set them for "cesm" model + if self._cime_model == "cesm": + self.set_value("GPU_TYPE", str(gpu_type).lower()) + self.set_value("GPU_OFFLOAD", str(gpu_offload).lower()) + + self.initialize_derived_attributes() + + # -------------------------------------------- + # batch system (must come after initialize_derived_attributes) + # -------------------------------------------- + env_batch = self.get_env("batch") + + batch_system_type = machobj.get_value("BATCH_SYSTEM") + + logger.info("Batch_system_type is {}".format(batch_system_type)) + batch = Batch( + batch_system=batch_system_type, + machine=machine_name, + files=files, + extra_machines_dir=extra_machines_dir, + ) + + workflow = Workflow(files=files) + + env_batch.set_batch_system(batch, batch_system_type=batch_system_type) + + bjobs = workflow.get_workflow_jobs(machine=machine_name, workflowid=workflowid) + env_workflow = self.get_env("workflow") + env_workflow.create_job_groups(bjobs, test) + + if walltime: + self.set_value( + "USER_REQUESTED_WALLTIME", walltime, subgroup=self.get_primary_job() + ) + if queue: + self.set_value( + "USER_REQUESTED_QUEUE", queue, subgroup=self.get_primary_job() + ) + + env_batch.set_job_defaults(bjobs, self) + # Set BATCH_COMMAND_FLAGS to the default values + + for job in bjobs: + if test and job[0] == "case.run" or not test and job[0] == "case.test": + continue + submitargs = env_batch.get_submit_args(self, job[0], resolve=False) + 
self.set_value("BATCH_COMMAND_FLAGS", submitargs, subgroup=job[0]) + + # Make sure that parallel IO is not specified if total_tasks==1 + if self.total_tasks == 1: + for compclass in self._component_classes: + key = "PIO_TYPENAME_{}".format(compclass) + pio_typename = self.get_value(key) + if pio_typename in ("pnetcdf", "netcdf4p"): + self.set_value(key, "netcdf") + + if input_dir is not None: + self.set_value("DIN_LOC_ROOT", os.path.abspath(input_dir))
+ + +
+[docs] + def get_compset_var_settings(self, files): + infile = files.get_value( + "COMPSETS_SPEC_FILE", attribute={"component": self._primary_component} + ) + compset_obj = Compsets(infile=infile, files=files) + matches = compset_obj.get_compset_var_settings( + self._compsetname, self._gridname + ) + for name, value in matches: + if len(value) > 0: + logger.info( + "Compset specific settings: name is {} and value is {}".format( + name, value + ) + ) + self.set_lookup_value(name, value)
+ + +
+[docs] + def set_initial_test_values(self): + testobj = self.get_env("test") + testobj.set_initial_values(self)
+ + +
+[docs] + def get_batch_jobs(self): + batchobj = self.get_env("batch") + return batchobj.get_jobs()
+ + + def _set_pio_xml(self): + pioobj = PIO(self._component_classes) + grid = self.get_value("GRID") + compiler = self.get_value("COMPILER") + mach = self.get_value("MACH") + compset = self.get_value("COMPSET") + mpilib = self.get_value("MPILIB") + + defaults = pioobj.get_defaults( + grid=grid, compset=compset, mach=mach, compiler=compiler, mpilib=mpilib + ) + + for vid, value in list(defaults.items()): + self.set_value(vid, value) + + def _create_caseroot_tools(self): + machines_dir = os.path.abspath(self.get_value("MACHDIR")) + machine = self.get_value("MACH") + toolsdir = os.path.join(self.get_value("CIMEROOT"), "CIME", "Tools") + casetools = os.path.join(self._caseroot, "Tools") + # setup executable files in caseroot/ + exefiles = ( + os.path.join(toolsdir, "case.setup"), + os.path.join(toolsdir, "case.build"), + os.path.join(toolsdir, "case.submit"), + os.path.join(toolsdir, "case.qstatus"), + os.path.join(toolsdir, "case.cmpgen_namelists"), + os.path.join(toolsdir, "preview_namelists"), + os.path.join(toolsdir, "preview_run"), + os.path.join(toolsdir, "check_input_data"), + os.path.join(toolsdir, "check_case"), + os.path.join(toolsdir, "xmlchange"), + os.path.join(toolsdir, "xmlquery"), + os.path.join(toolsdir, "pelayout"), + ) + try: + for exefile in exefiles: + destfile = os.path.join(self._caseroot, os.path.basename(exefile)) + os.symlink(exefile, destfile) + except Exception as e: + logger.warning("FAILED to set up exefiles: {}".format(str(e))) + + toolfiles = [ + os.path.join(toolsdir, "check_lockedfiles"), + os.path.join(toolsdir, "get_standard_makefile_args"), + os.path.join(toolsdir, "getTiming"), + os.path.join(toolsdir, "save_provenance"), + os.path.join(toolsdir, "Makefile"), + os.path.join(toolsdir, "mkSrcfiles"), + os.path.join(toolsdir, "mkDepends"), + ] + + # used on Titan + if os.path.isfile(os.path.join(toolsdir, "mdiag_reduce.csh")): + toolfiles.append(os.path.join(toolsdir, "mdiag_reduce.csh")) + toolfiles.append(os.path.join(toolsdir, 
"mdiag_reduce.pl")) + + for toolfile in toolfiles: + destfile = os.path.join(casetools, os.path.basename(toolfile)) + expect(os.path.isfile(toolfile), " File {} does not exist".format(toolfile)) + try: + os.symlink(toolfile, destfile) + except Exception as e: + logger.warning( + "FAILED to set up toolfiles: {} {} {}".format( + str(e), toolfile, destfile + ) + ) + + if config.copy_e3sm_tools: + if os.path.exists(os.path.join(machines_dir, "syslog.{}".format(machine))): + safe_copy( + os.path.join(machines_dir, "syslog.{}".format(machine)), + os.path.join(casetools, "mach_syslog"), + ) + else: + safe_copy( + os.path.join(machines_dir, "syslog.noop"), + os.path.join(casetools, "mach_syslog"), + ) + + srcroot = self.get_value("SRCROOT") + customize_path = os.path.join(srcroot, "cime_config", "customize") + safe_copy(os.path.join(customize_path, "e3sm_compile_wrap.py"), casetools) + + # add archive_metadata to the CASEROOT but only for CESM + if config.copy_cesm_tools: + try: + exefile = os.path.join(toolsdir, "archive_metadata") + destfile = os.path.join(self._caseroot, os.path.basename(exefile)) + os.symlink(exefile, destfile) + except Exception as e: + logger.warning("FAILED to set up exefiles: {}".format(str(e))) + + def _create_caseroot_sourcemods(self): + components = self.get_compset_components() + components.extend(["share", "drv"]) + if self._comp_interface == "nuopc": + components.extend(["cdeps"]) + + readme_message_start = ( + "Put source mods for the {component} library in this directory." + ) + readme_message_end = """ + +WARNING: SourceMods are not kept under version control, and can easily +become out of date if changes are made to the source code on which they +are based. We only recommend using SourceMods for small, short-term +changes that just apply to one or two cases. 
For larger or longer-term +changes, including gradual, incremental changes towards a final +solution, we highly recommend making changes in the main source tree, +leveraging version control (git or svn). +""" + + for component in components: + directory = os.path.join( + self._caseroot, "SourceMods", "src.{}".format(component) + ) + # don't make SourceMods for stub components + if not os.path.exists(directory) and component not in ( + "satm", + "slnd", + "sice", + "socn", + "sesp", + "sglc", + "swav", + ): + os.makedirs(directory) + # Besides giving some information on SourceMods, this + # README file serves one other important purpose: By + # putting a file inside each SourceMods subdirectory, we + # prevent aggressive scrubbers from scrubbing these + # directories due to being empty (which can cause builds + # to fail). + readme_file = os.path.join(directory, "README") + with open(readme_file, "w") as fd: + fd.write(readme_message_start.format(component=component)) + + if component == "cdeps": + readme_message_extra = """ + +Note that this subdirectory should only contain files from CDEPS's +dshr and streams source code directories. +Files related to specific data models should go in SourceMods subdirectories +for those data models (e.g., src.datm).""" + fd.write(readme_message_extra) + + fd.write(readme_message_end) + + if config.copy_cism_source_mods: + # Note: this is CESM specific, given that we are referencing cism explicitly + if "cism" in components: + directory = os.path.join( + self._caseroot, "SourceMods", "src.cism", "source_cism" + ) + if not os.path.exists(directory): + os.makedirs(directory) + readme_file = os.path.join(directory, "README") + str_to_write = """Put source mods for the source_cism library in this subdirectory. +This includes any files from $COMP_ROOT_DIR_GLC/source_cism. 
Anything +else (e.g., mods to source_glc or drivers) goes in the src.cism +directory, NOT in this subdirectory.""" + + with open(readme_file, "w") as fd: + fd.write(str_to_write) + +
+[docs] + def create_caseroot(self, clone=False): + if not os.path.exists(self._caseroot): + # Make the case directory + logger.info(" Creating Case directory {}".format(self._caseroot)) + os.makedirs(self._caseroot) + os.chdir(self._caseroot) + + # Create relevant directories in $self._caseroot + if clone: + newdirs = (LOCKED_DIR, "Tools") + else: + newdirs = ("SourceMods", LOCKED_DIR, "Buildconf", "Tools") + for newdir in newdirs: + os.makedirs(newdir) + + # Open a new README.case file in $self._caseroot + append_status(" ".join(sys.argv), "README.case", caseroot=self._caseroot) + compset_info = "Compset longname is {}".format(self.get_value("COMPSET")) + append_status(compset_info, "README.case", caseroot=self._caseroot) + append_status( + "Compset specification file is {}".format( + self.get_value("COMPSETS_SPEC_FILE") + ), + "README.case", + caseroot=self._caseroot, + ) + append_status( + "Pes specification file is {}".format(self.get_value("PES_SPEC_FILE")), + "README.case", + caseroot=self._caseroot, + ) + if "forcing" in self._component_description: + append_status( + "Forcing is {}".format(self._component_description["forcing"]), + "README.case", + caseroot=self._caseroot, + ) + for component_class in self._component_classes: + if ( + component_class in self._component_description + and len(self._component_description[component_class]) > 0 + ): + append_status( + "Component {} is {}".format( + component_class, self._component_description[component_class] + ), + "README.case", + caseroot=self._caseroot, + ) + if component_class == "CPL": + append_status( + "Using %s coupler instances" % (self.get_value("NINST_CPL")), + "README.case", + caseroot=self._caseroot, + ) + continue + comp_grid = "{}_GRID".format(component_class) + + append_status( + "{} is {}".format(comp_grid, self.get_value(comp_grid)), + "README.case", + caseroot=self._caseroot, + ) + comp = str(self.get_value("COMP_{}".format(component_class))) + user_mods = self._get_comp_user_mods(comp) + if 
user_mods is not None: + note = "This component includes user_mods {}".format(user_mods) + append_status(note, "README.case", caseroot=self._caseroot) + logger.info(note) + if not clone: + self._create_caseroot_sourcemods() + self._create_caseroot_tools()
+ + +
+
[docs]
 + def apply_user_mods(self, user_mods_dirs=None): + """ + User mods can be specified on the create_newcase command line (usually when called from create test) + or they can be in the compset definition, or both. + + If user_mods_dirs is specified, it should be a list of paths giving the user mods + specified on the create_newcase command line. + """ + all_user_mods = [] + for comp in self._component_classes: + component = str(self.get_value("COMP_{}".format(comp))) + if component == self._primary_component: + continue + comp_user_mods = self._get_comp_user_mods(component) + if comp_user_mods is not None: + all_user_mods.append(comp_user_mods) + # get the primary last so that it takes precedence over other components + comp_user_mods = self._get_comp_user_mods(self._primary_component) + if comp_user_mods is not None: + all_user_mods.append(comp_user_mods) + if user_mods_dirs is not None: + all_user_mods.extend(user_mods_dirs) + + # This looping order will lead to the specified user_mods_dirs taking + # precedence over self._user_mods, if there are any conflicts. + for user_mods in all_user_mods: + if os.path.isabs(user_mods): + user_mods_path = user_mods + else: + user_mods_path = self.get_value("USER_MODS_DIR") + user_mods_path = os.path.join(user_mods_path, user_mods) + apply_user_mods(self._caseroot, user_mods_path) + + # User mods may have modified underlying XML files + if all_user_mods: + self.read_xml()
+ + + def _get_comp_user_mods(self, component): + """ + For a component 'foo', gets the value of FOO_USER_MODS. + + Returns None if no value was found, or if the value is an empty string. + """ + comp_user_mods = self.get_value("{}_USER_MODS".format(component.upper())) + # pylint: disable=no-member + if comp_user_mods is None or comp_user_mods == "" or comp_user_mods.isspace(): + return None + else: + return comp_user_mods + +
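The precedence ordering that `apply_user_mods` builds can be sketched as a standalone helper (hypothetical names, not part of CIME): non-primary component mods come first, then the primary component's mods, then any directories given on the command line, and later entries win on conflicts because they are applied last.

```python
# Minimal sketch of the ordering in apply_user_mods above.
# component_mods: {component_name: mods_dir or None}; primary: the
# primary component's name. Relies on dict insertion order (Python 3.7+).
def ordered_user_mods(component_mods, primary, cli_dirs=None):
    ordered = [d for comp, d in component_mods.items()
               if comp != primary and d is not None]
    if component_mods.get(primary):
        # primary component goes last among the compset-defined mods
        ordered.append(component_mods[primary])
    # command-line user_mods_dirs are applied last, so they take precedence
    ordered.extend(cli_dirs or [])
    return ordered

mods = {"cam": "mods/cam", "clm": None, "cice": "mods/cice"}
print(ordered_user_mods(mods, "cam", ["my_case_mods"]))
# ['mods/cice', 'mods/cam', 'my_case_mods']
```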
+[docs] + def submit_jobs( + self, + no_batch=False, + job=None, + skip_pnl=None, + prereq=None, + allow_fail=False, + resubmit_immediate=False, + mail_user=None, + mail_type=None, + batch_args=None, + dry_run=False, + workflow=True, + ): + env_batch = self.get_env("batch") + result = env_batch.submit_jobs( + self, + no_batch=no_batch, + skip_pnl=skip_pnl, + job=job, + user_prereq=prereq, + allow_fail=allow_fail, + resubmit_immediate=resubmit_immediate, + mail_user=mail_user, + mail_type=mail_type, + batch_args=batch_args, + dry_run=dry_run, + workflow=workflow, + ) + return result
+ + +
+[docs] + def get_job_info(self): + """ + Get information on batch jobs associated with this case + """ + xml_job_ids = self.get_value("JOB_IDS") + if not xml_job_ids: + return {} + else: + result = {} + job_infos = xml_job_ids.split(", ") # pylint: disable=no-member + for job_info in job_infos: + jobname, jobid = job_info.split(":") + result[jobname] = jobid + + return result
+ + +
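The `JOB_IDS` parsing in `get_job_info` above can be shown as a standalone function: the XML variable stores `jobname:jobid` pairs joined by `", "`, and an unset or empty value yields an empty mapping.

```python
# Standalone restatement of the parsing in get_job_info above
# (not the CIME API itself).
def parse_job_ids(xml_job_ids):
    if not xml_job_ids:
        return {}
    # each entry looks like "case.run:12345"
    return dict(item.split(":") for item in xml_job_ids.split(", "))

print(parse_job_ids("case.run:12345, case.st_archive:12346"))
# {'case.run': '12345', 'case.st_archive': '12346'}
```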
+[docs] + def get_job_id(self, output): + env_batch = self.get_env("batch") + return env_batch.get_job_id(output)
+ + +
+[docs] + def report_job_status(self): + jobmap = self.get_job_info() + if not jobmap: + logger.info( + "No job ids associated with this case. Either case.submit was not run or was run with no-batch" + ) + else: + for jobname, jobid in list(jobmap.items()): + status = self.get_env("batch").get_status(jobid) + if status: + logger.info("{}: {}".format(jobname, status)) + else: + logger.info( + "{}: Unable to get status. Job may be complete already.".format( + jobname + ) + )
+ + +
+[docs] + def cancel_batch_jobs(self, jobids): + env_batch = self.get_env("batch") + for jobid in jobids: + success = env_batch.cancel_job(jobid) + if not success: + logger.warning("Failed to kill {}".format(jobid))
+ + +
+[docs] + def get_mpirun_cmd(self, job=None, allow_unresolved_envvars=True, overrides=None): + if job is None: + job = self.get_primary_job() + + env_mach_specific = self.get_env("mach_specific") + run_exe = env_mach_specific.get_value("run_exe") + run_misc_suffix = env_mach_specific.get_value("run_misc_suffix") + run_misc_suffix = "" if run_misc_suffix is None else run_misc_suffix + + mpirun_cmd_override = self.get_value("MPI_RUN_COMMAND") + if mpirun_cmd_override not in ["", None, "UNSET"]: + return self.get_resolved_value( + mpirun_cmd_override + " " + run_exe + " " + run_misc_suffix + ) + queue = self.get_value("JOB_QUEUE", subgroup=job) + + # Things that will have to be matched against mpirun element attributes + mpi_attribs = { + "compiler": self.get_value("COMPILER"), + "mpilib": self.get_value("MPILIB"), + "threaded": self.get_build_threaded(), + "queue": queue, + "unit_testing": False, + "comp_interface": self._comp_interface, + } + + ( + executable, + mpi_arg_list, + custom_run_exe, + custom_run_misc_suffix, + ) = env_mach_specific.get_mpirun(self, mpi_attribs, job) + if custom_run_exe: + logger.info("Using a custom run_exe {}".format(custom_run_exe)) + run_exe = custom_run_exe + if custom_run_misc_suffix: + logger.info( + "Using a custom run_misc_suffix {}".format(custom_run_misc_suffix) + ) + run_misc_suffix = custom_run_misc_suffix + + aprun_mode = env_mach_specific.get_aprun_mode(mpi_attribs) + + # special case for aprun + if ( + executable is not None + and "aprun" in executable + and aprun_mode != "ignore" + # and not "theta" in self.get_value("MACH") + ): + extra_args = env_mach_specific.get_aprun_args( + self, mpi_attribs, job, overrides=overrides + ) + + aprun_args, num_nodes, _, _, _ = get_aprun_cmd_for_case( + self, + run_exe, + overrides=overrides, + extra_args=extra_args, + ) + if job in ("case.run", "case.test"): + expect( + (num_nodes + self.spare_nodes) == self.num_nodes, + "Not using optimized num nodes", + ) + return 
self.get_resolved_value( + executable + aprun_args + " " + run_misc_suffix, + allow_unresolved_envvars=allow_unresolved_envvars, + ) + + else: + mpi_arg_string = " ".join(mpi_arg_list) + + if self.get_value("BATCH_SYSTEM") == "cobalt": + mpi_arg_string += " : " + + ngpus_per_node = self.get_value("NGPUS_PER_NODE") + if ngpus_per_node and ngpus_per_node > 0: + mpi_gpu_run_script = self.get_value("MPI_GPU_WRAPPER_SCRIPT") + if mpi_gpu_run_script: + mpi_arg_string = mpi_arg_string + " " + mpi_gpu_run_script + + return self.get_resolved_value( + "{} {} {} {}".format( + executable if executable is not None else "", + mpi_arg_string, + run_exe, + run_misc_suffix, + ), + allow_unresolved_envvars=allow_unresolved_envvars, + )
+ + +
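The general shape of the command `get_mpirun_cmd` returns can be sketched with a simplified helper (hypothetical; the real method also resolves XML variables, matches `mpirun` element attributes, and handles the `aprun` special case): a non-empty `MPI_RUN_COMMAND` override wins outright, otherwise the launcher, its arguments, an optional GPU wrapper script, the executable, and the misc suffix are joined.

```python
# Simplified sketch of the launch-line assembly in get_mpirun_cmd above.
def build_mpirun_cmd(executable, mpi_args, run_exe, misc_suffix="",
                     override=None, gpu_wrapper=None):
    if override not in ("", None, "UNSET"):
        # MPI_RUN_COMMAND override bypasses launcher/argument matching
        return "{} {} {}".format(override, run_exe, misc_suffix).strip()
    arg_string = " ".join(mpi_args)
    if gpu_wrapper:
        # e.g. MPI_GPU_WRAPPER_SCRIPT appended after the MPI arguments
        arg_string += " " + gpu_wrapper
    return "{} {} {} {}".format(executable or "", arg_string,
                                run_exe, misc_suffix).strip()

print(build_mpirun_cmd("mpiexec", ["-np", "256"], "./e3sm.exe",
                       ">> e3sm.log 2>&1"))
# mpiexec -np 256 ./e3sm.exe >> e3sm.log 2>&1
```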
+[docs] + def set_model_version(self, model): + version = "unknown" + srcroot = self.get_value("SRCROOT") + version = get_current_commit(True, srcroot, tag=(model == "cesm")) + + self.set_value("MODEL_VERSION", version) + + if version != "unknown": + logger.info("{} model version found: {}".format(model, version)) + else: + logger.warning("WARNING: No {} Model version found.".format(model))
+ + +
+[docs] + def load_env(self, reset=False, job=None, verbose=False): + if not self._is_env_loaded or reset: + if job is None: + job = self.get_primary_job() + os.environ["OMP_NUM_THREADS"] = str(self.thread_count) + env_module = self.get_env("mach_specific") + self._loaded_envs = env_module.load_env(self, job=job, verbose=verbose) + self._loaded_envs.append(("OMP_NUM_THREADS", os.environ["OMP_NUM_THREADS"])) + self._is_env_loaded = True + + return self._loaded_envs
+ + +
+[docs] + def get_build_threaded(self): + """ + Returns True if current settings require a threaded build/run. + """ + force_threaded = self.get_value("FORCE_BUILD_SMP") + if not self.thread_count: + return False + smp_present = force_threaded or self.thread_count > 1 + return smp_present
+ + + def _check_testlists(self, compset_alias, grid_name, files): + """ + CESM only: check the testlist file for tests of this compset grid combination + + compset_alias should be None for a user-defined compset (i.e., a compset + without an alias) + """ + if "TESTS_SPEC_FILE" in self.lookups: + tests_spec_file = self.get_resolved_value(self.lookups["TESTS_SPEC_FILE"]) + else: + tests_spec_file = self.get_value("TESTS_SPEC_FILE") + + testcnt = 0 + if os.path.isfile(tests_spec_file) and compset_alias is not None: + # It's important that we not try to find matching tests if + # compset_alias is None, since compset=None tells get_tests to find + # tests of all compsets! + # Only collect supported tests as this _check_testlists is only + # called if run_unsupported is False. + tests = Testlist(tests_spec_file, files) + testlist = tests.get_tests( + compset=compset_alias, grid=grid_name, supported_only=True + ) + test_categories = ["prealpha", "prebeta"] + for test in testlist: + if ( + test["category"] in test_categories + or "aux_" in test["category"] + or get_cime_default_driver() in test["category"] + ): + testcnt += 1 + if testcnt > 0: + logger.warning( + "\n*********************************************************************************************************************************" + ) + logger.warning( + "This compset and grid combination is not scientifically supported, however it is used in {:d} tests.".format( + testcnt + ) + ) + logger.warning( + "*********************************************************************************************************************************\n" + ) + else: + expect( + False, + "\nThis compset and grid combination is untested in CESM. " + "Override this warning with the --run-unsupported option to create_newcase.", + error_prefix="STOP: ", + ) + +
+[docs] + def set_file(self, xmlfile): + """ + force the case object to consider only xmlfile + """ + expect(os.path.isfile(xmlfile), "Could not find file {}".format(xmlfile)) + + if not self._read_only_mode: + self.flush(flushall=True) + + gfile = GenericXML(infile=xmlfile) + ftype = gfile.get_id() + + logger.warning("setting case file to {}".format(xmlfile)) + components = self.get_value("COMP_CLASSES") + new_env_file = None + for env_file in self._files: + if os.path.basename(env_file.filename) == ftype: + if ftype == "env_run.xml": + new_env_file = EnvRun(infile=xmlfile, components=components) + elif ftype == "env_build.xml": + new_env_file = EnvBuild(infile=xmlfile, components=components) + elif ftype == "env_case.xml": + new_env_file = EnvCase(infile=xmlfile, components=components) + elif ftype == "env_mach_pes.xml": + new_env_file = EnvMachPes( + infile=xmlfile, + components=components, + comp_interface=self._comp_interface, + ) + elif ftype == "env_batch.xml": + new_env_file = EnvBatch(infile=xmlfile) + elif ftype == "env_workflow.xml": + new_env_file = EnvWorkflow(infile=xmlfile) + elif ftype == "env_test.xml": + new_env_file = EnvTest(infile=xmlfile) + elif ftype == "env_archive.xml": + new_env_file = EnvArchive(infile=xmlfile) + elif ftype == "env_mach_specific.xml": + new_env_file = EnvMachSpecific( + infile=xmlfile, comp_interface=self._comp_interface + ) + else: + expect(False, "No match found for file type {}".format(ftype)) + + if new_env_file is not None: + self._env_entryid_files = [] + self._env_generic_files = [] + if ftype in ["env_archive.xml", "env_mach_specific.xml"]: + self._env_generic_files = [new_env_file] + else: + self._env_entryid_files = [new_env_file] + + break + + expect( + new_env_file is not None, "No match found for file type {}".format(ftype) + ) + self._files = [new_env_file]
+ + +
+[docs] + def update_env(self, new_object, env_file, blow_away=False): + """ + Replace a case env object file + """ + old_object = self.get_env(env_file) + if not blow_away: + expect( + not old_object.needsrewrite, + "Potential loss of unflushed changes in {}".format(env_file), + ) + + new_object.filename = old_object.filename + if old_object in self._env_entryid_files: + self._env_entryid_files.remove(old_object) + self._env_entryid_files.append(new_object) + elif old_object in self._env_generic_files: + self._env_generic_files.remove(old_object) + self._env_generic_files.append(new_object) + self._files.remove(old_object) + self._files.append(new_object)
+ + +
+[docs] + def get_latest_cpl_log(self, coupler_log_path=None, cplname="cpl"): + """ + find and return the latest cpl log file in the + coupler_log_path directory + """ + if coupler_log_path is None: + coupler_log_path = self.get_value("RUNDIR") + cpllog = None + cpllogs = glob.glob(os.path.join(coupler_log_path, "{}.log.*".format(cplname))) + if cpllogs: + cpllog = max(cpllogs, key=os.path.getctime) + return cpllog + else: + return None
+ + +
+[docs] + def record_cmd(self, cmd=None, init=False): + lines = [] + caseroot = self.get_value("CASEROOT") + cimeroot = self.get_value("CIMEROOT") + + if cmd is None: + cmd = self.fix_sys_argv_quotes(list(sys.argv)) + + if init: + ctime = time.strftime("%Y-%m-%d %H:%M:%S") + + lines.append("#!/bin/bash\n\n") + # stop script on error, prevents `create_newcase` from failing + # and continuing to execute commands + lines.append("set -e\n\n") + lines.append("# Created {}\n\n".format(ctime)) + lines.append('CASEDIR="{}"\n\n'.format(caseroot)) + lines.append('cd "${CASEDIR}"\n\n') + + # Ensure program path is absolute + cmd[0] = re.sub("^./", "{}/scripts/".format(cimeroot), cmd[0]) + else: + expect( + caseroot + and os.path.isdir(caseroot) + and os.path.isfile(os.path.join(caseroot, "env_case.xml")), + "Directory {} does not appear to be a valid case directory".format( + caseroot + ), + ) + + cmd = " ".join(cmd) + + # Replace instances of caseroot with variable + cmd = re.sub(caseroot, '"${CASEDIR}"', cmd) + + lines_len = len(lines) + lines.insert(lines_len - 1 if init else lines_len, "{}\n\n".format(cmd)) + + try: + with open(os.path.join(caseroot, "replay.sh"), "a") as fd: + fd.writelines(lines) + except PermissionError: + logger.warning("Could not write to 'replay.sh' script")
+ + +
+[docs] + def fix_sys_argv_quotes(self, cmd): + """Fixes removed quotes from argument list. + + Restores quotes to `--val` and `KEY=VALUE` from sys.argv. + """ + # handle fixing quotes + # case 1: "--val", " -nlev 276 " + # case 2: "-val" , " -nlev 276 " + # case 3: CAM_CONFIG_OPTS=" -nlev 276 " + for i, item in enumerate(cmd): + if re.match("[-]{1,2}val", item) is not None: + if i + 1 >= len(cmd): + continue + + # only quote if value contains spaces + if " " in cmd[i + 1]: + cmd[i + 1] = f'"{cmd[i + 1]}"' + else: + m = re.search("([^=]*)=(.*)", item) + + if m is None: + continue + + g = m.groups() + + # only quote if value contains spaces + if " " in g[1]: + cmd[i] = f'{g[0]}="{g[1]}"' + + return cmd
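`fix_sys_argv_quotes` restores quoting that the shell stripped before the arguments reached `sys.argv`. A minimal standalone version covering the same three cases from the comment above (`requote` is an illustrative helper, not part of CIME):

```python
import re

def requote(argv):
    # Re-quote the value after --val/-val, and the VALUE half of KEY=VALUE,
    # but only when the value contains spaces.
    argv = list(argv)
    for i, item in enumerate(argv):
        if re.match(r"[-]{1,2}val", item):
            if i + 1 < len(argv) and " " in argv[i + 1]:
                argv[i + 1] = '"{}"'.format(argv[i + 1])
        else:
            m = re.search(r"([^=]*)=(.*)", item)
            if m and " " in m.group(2):
                argv[i] = '{}="{}"'.format(m.group(1), m.group(2))
    return argv

print(requote(["--val", " -nlev 276 ", "CAM_CONFIG_OPTS= -nlev 276 "]))
# ['--val', '" -nlev 276 "', 'CAM_CONFIG_OPTS=" -nlev 276 "']
```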
+ + +
+[docs] + def create( + self, + casename, + srcroot, + compset_name, + grid_name, + user_mods_dirs=None, + machine_name=None, + project=None, + pecount=None, + compiler=None, + mpilib=None, + pesfile=None, + gridfile=None, + multi_driver=False, + ninst=1, + test=False, + walltime=None, + queue=None, + output_root=None, + run_unsupported=False, + answer=None, + input_dir=None, + driver=None, + workflowid="default", + non_local=False, + extra_machines_dir=None, + case_group=None, + ngpus_per_node=0, + gpu_type=None, + gpu_offload=None, + ): + try: + # Set values for env_case.xml + self.set_lookup_value("CASE", os.path.basename(casename)) + self.set_lookup_value("CASEROOT", self._caseroot) + self.set_lookup_value("SRCROOT", srcroot) + self.set_lookup_value("CASE_HASH", self.new_hash()) + + # Propagate to `GenericXML` to resolve $SRCROOT + utils.GLOBAL["SRCROOT"] = srcroot + + customize_path = os.path.join(srcroot, "cime_config", "customize") + + config.load(customize_path) + + # If any of the top level user_mods_dirs contain a config_grids.xml file and + # gridfile was not set on the command line, use it. However, if there are + # multiple user_mods_dirs, it is an error for more than one of them to contain + # a config_grids.xml file, because it would be ambiguous which one we should + # use. + if user_mods_dirs: + found_um_config_grids = False + for this_user_mods_dir in user_mods_dirs: + um_config_grids = os.path.join( + this_user_mods_dir, "config_grids.xml" + ) + if os.path.exists(um_config_grids): + if gridfile: + # Either a gridfile was found in an earlier user_mods + # directory or a gridfile was given on the command line. The + # first case (which would set found_um_config_grids to True) + # is an error; the second case just generates a warning. 
+ expect( + not found_um_config_grids, + "Cannot handle multiple usermods directories with config_grids.xml files: {} and {}".format( + gridfile, um_config_grids + ), + ) + logger.warning( + "A config_grids file was found in {} but also provided on the command line {}, command line takes precedence".format( + um_config_grids, gridfile + ) + ) + else: + gridfile = um_config_grids + found_um_config_grids = True + + # Configure the Case + self.configure( + compset_name, + grid_name, + machine_name=machine_name, + project=project, + pecount=pecount, + compiler=compiler, + mpilib=mpilib, + pesfile=pesfile, + gridfile=gridfile, + multi_driver=multi_driver, + ninst=ninst, + test=test, + walltime=walltime, + queue=queue, + output_root=output_root, + run_unsupported=run_unsupported, + answer=answer, + input_dir=input_dir, + driver=driver, + workflowid=workflowid, + non_local=non_local, + extra_machines_dir=extra_machines_dir, + case_group=case_group, + ngpus_per_node=ngpus_per_node, + gpu_type=gpu_type, + gpu_offload=gpu_offload, + ) + + self.create_caseroot() + + # Write out the case files + self.flush(flushall=True) + self.apply_user_mods(user_mods_dirs) + + # Lock env_case.xml + lock_file("env_case.xml", self._caseroot) + except Exception: + if os.path.exists(self._caseroot): + if not logger.isEnabledFor(logging.DEBUG) and not test: + logger.warning( + "Failed to setup case, removing {}\nUse --debug to force me to keep caseroot".format( + self._caseroot + ) + ) + shutil.rmtree(self._caseroot) + else: + logger.warning("Leaving broken case dir {}".format(self._caseroot)) + + raise
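The gridfile precedence rule inside `create()` above (command line beats user_mods; two user_mods grid files are ambiguous) can be isolated into a small testable function. This is a sketch under assumed names — `resolve_gridfile` and its injectable `exists` predicate are illustrative, not CIME API:

```python
import os

def resolve_gridfile(cli_gridfile, user_mods_dirs, exists=os.path.exists):
    # exists() is injectable so the precedence rule can be exercised
    # without creating real files.
    gridfile, found_um, warnings = cli_gridfile, False, []
    for d in user_mods_dirs or []:
        candidate = os.path.join(d, "config_grids.xml")
        if exists(candidate):
            if gridfile:
                if found_um:
                    # two user_mods dirs both supply a grid file: ambiguous
                    raise ValueError("multiple config_grids.xml in user_mods")
                warnings.append("command line gridfile takes precedence")
            else:
                gridfile, found_um = candidate, True
    return gridfile, warnings

g, w = resolve_gridfile(None, ["mods1", "mods2"],
                        exists=lambda p: p.startswith("mods1"))
print(os.path.basename(g), len(w))  # config_grids.xml 0
```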
+ + +
+[docs] + def new_hash(self): + """Creates a hash""" + args = "".join(sys.argv) + ctime = time.strftime("%Y-%m-%d %H:%M:%S") + hostname = socket.getfqdn() + user = getpass.getuser() + + data = "{}{}{}{}".format(args, ctime, hostname, user) + + return hashlib.sha256(data.encode()).hexdigest()
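The case hash above is simply a SHA-256 digest over invocation metadata (argv, timestamp, host, user). A self-contained sketch (`case_hash` is an illustrative name):

```python
import getpass
import hashlib
import socket
import sys
import time

def case_hash():
    # Digest over argv + timestamp + fqdn + user, as new_hash() does above.
    data = "{}{}{}{}".format(
        "".join(sys.argv),
        time.strftime("%Y-%m-%d %H:%M:%S"),
        socket.getfqdn(),
        getpass.getuser(),
    )
    return hashlib.sha256(data.encode()).hexdigest()

print(len(case_hash()))  # 64: SHA-256 renders as 64 hex digits
```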
+ + +
+[docs] + def is_save_timing_dir_project(self, project): + """ + Check whether the project is permitted to archive performance data in the location + specified for the current machine + """ + save_timing_dir_projects = self.get_value("SAVE_TIMING_DIR_PROJECTS") + if not save_timing_dir_projects: + return False + else: + save_timing_dir_projects = save_timing_dir_projects.split( + "," + ) # pylint: disable=no-member + for save_timing_dir_project in save_timing_dir_projects: + regex = re.compile(save_timing_dir_project) + if regex.match(project): + return True + + return False
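Each entry of the comma-separated project list above is treated as a regular expression and matched against the project id. A condensed sketch, where `patterns` stands in for the `SAVE_TIMING_DIR_PROJECTS` value (`project_allowed` is an illustrative helper, not CIME API):

```python
import re

def project_allowed(project, patterns):
    # patterns is a comma-separated list of regexes; any match wins.
    if not patterns:
        return False
    return any(re.compile(p).match(project) for p in patterns.split(","))

print(project_allowed("e3sm_g", "cli115,e3sm.*"))  # True  (matches e3sm.*)
print(project_allowed("other", "cli115,e3sm.*"))   # False
```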
+ + +
+[docs] + def get_primary_job(self): + return "case.test" if self.get_value("TEST") else "case.run"
+ + +
+[docs] + def get_first_job(self): + env_workflow = self.get_env("workflow") + jobs = env_workflow.get_jobs() + return jobs[0]
+ + +
+[docs] + def preview_run(self, write, job): + write("CASE INFO:") + write(" nodes: {}".format(self.num_nodes)) + write(" total tasks: {}".format(self.total_tasks)) + write(" tasks per node: {}".format(self.tasks_per_node)) + write(" thread count: {}".format(self.thread_count)) + write(" ngpus per node: {}".format(self.ngpus_per_node)) + write("") + + write("BATCH INFO:") + if not job: + job = self.get_first_job() + + job_id_to_cmd = self.submit_jobs(dry_run=True, job=job) + + env_batch = self.get_env("batch") + for job_id, cmd in job_id_to_cmd: + write(" FOR JOB: {}".format(job_id)) + write(" ENV:") + loaded_envs = self.load_env(job=job_id, reset=True, verbose=False) + + for name, value in iter(sorted(loaded_envs, key=lambda x: x[0])): + write(" Setting Environment {}={}".format(name, value)) + + write("") + write(" SUBMIT CMD:") + write(" {}".format(self.get_resolved_value(cmd))) + write("") + if job_id in ("case.run", "case.test"): + # get_job_overrides must come after the case.load_env since the cmd may use + # env vars. + overrides = env_batch.get_job_overrides(job_id, self) + write(" MPIRUN (job={}):".format(job_id)) + write(" {}".format(self.get_resolved_value(overrides["mpirun"]))) + write("")
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_clone.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_clone.html new file mode 100644 index 00000000000..d59c55673c9 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_clone.html @@ -0,0 +1,365 @@ + + + + + + CIME.case.case_clone — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.case.case_clone

+"""
+create_clone is a member of the Case class from file case.py
+"""
+import os, glob, shutil
+from CIME.XML.standard_module_setup import *
+from CIME.utils import expect, check_name, safe_copy, get_model
+from CIME.simple_compare import compare_files
+from CIME.locked_files import lock_file
+from CIME.user_mod_support import apply_user_mods
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +def create_clone( + self, + newcaseroot, + keepexe=False, + mach_dir=None, + project=None, + cime_output_root=None, + exeroot=None, + rundir=None, + user_mods_dirs=None, +): + """ + Create a case clone + + If exeroot or rundir are provided (not None), sets these directories + to the given paths; if not provided, uses default values for these + directories. It is an error to provide exeroot if keepexe is True. + """ + if cime_output_root is None: + cime_output_root = self.get_value("CIME_OUTPUT_ROOT") + + newcaseroot = os.path.abspath(newcaseroot) + expect( + not os.path.isdir(newcaseroot), + "New caseroot directory {} already exists".format(newcaseroot), + ) + newcasename = os.path.basename(newcaseroot) + expect(check_name(newcasename), "New case name invalid {} ".format(newcasename)) + newcase_cimeroot = os.path.abspath(get_cime_root()) + + # create clone from case to case + clone_cimeroot = self.get_value("CIMEROOT") + if newcase_cimeroot != clone_cimeroot: + logger.warning(" case CIMEROOT is {} ".format(newcase_cimeroot)) + logger.warning(" clone CIMEROOT is {} ".format(clone_cimeroot)) + logger.warning( + " It is NOT recommended to clone cases from different versions of CIME." 
+ ) + + # *** create case object as deepcopy of clone object *** + if os.path.isdir(os.path.join(newcase_cimeroot, "share")) and get_model() == "cesm": + srcroot = newcase_cimeroot + else: + srcroot = self.get_value("SRCROOT") + if not srcroot: + srcroot = os.path.join(newcase_cimeroot, "..") + + newcase = self.copy(newcasename, newcaseroot, newsrcroot=srcroot) + with newcase: + newcase.set_value("CIMEROOT", newcase_cimeroot) + + # if we are cloning to a different user modify the output directory + olduser = self.get_value("USER") + newuser = os.environ.get("USER") + if olduser != newuser: + cime_output_root = cime_output_root.replace(olduser, newuser) + newcase.set_value("USER", newuser) + newcase.set_value("CIME_OUTPUT_ROOT", cime_output_root) + + # try to make the new output directory and raise an exception + # on any error other than directory already exists. + if os.path.isdir(cime_output_root): + expect( + os.access(cime_output_root, os.W_OK), + "Directory {} is not writable " + "by this user. 
Use the --cime-output-root flag to provide a writable " + "scratch directory".format(cime_output_root), + ) + else: + if not os.path.isdir(cime_output_root): + os.makedirs(cime_output_root) + + # determine if will use clone executable or not + if keepexe: + orig_exeroot = self.get_value("EXEROOT") + newcase.set_value("EXEROOT", orig_exeroot) + newcase.set_value("BUILD_COMPLETE", "TRUE") + orig_bld_complete = self.get_value("BUILD_COMPLETE") + if not orig_bld_complete: + logger.warning( + "\nWARNING: Creating a clone with --keepexe before building the original case may cause PIO_TYPENAME to be invalid in the clone" + ) + logger.warning( + "Avoid this message by building case one before you clone.\n" + ) + else: + newcase.set_value("BUILD_COMPLETE", "FALSE") + + # set machdir + if mach_dir is not None: + newcase.set_value("MACHDIR", mach_dir) + + # set exeroot and rundir if requested + if exeroot is not None: + expect( + not keepexe, + "create_case_clone: if keepexe is True, " "then exeroot cannot be set", + ) + newcase.set_value("EXEROOT", exeroot) + if rundir is not None: + newcase.set_value("RUNDIR", rundir) + + # Set project id + # Note: we do not just copy this from the clone because it seems likely that + # users will want to change this sometimes, especially when cloning another + # user's case. However, note that, if a project is not given, the fallback will + # be to copy it from the clone, just like other xml variables are copied. + if project is None: + project = self.get_value("PROJECT", subgroup=self.get_primary_job()) + if project is not None: + newcase.set_value("PROJECT", project) + + # create caseroot + newcase.create_caseroot(clone=True) + + # Many files in the case will be links back to the source tree + # but users may have broken links to modify files locally. In this case + # copy the locally modified file. We only want to do this for files that + # already exist in the clone. 
+ # pylint: disable=protected-access + self._copy_user_modified_to_clone( + self.get_value("CASEROOT"), newcase.get_value("CASEROOT") + ) + self._copy_user_modified_to_clone( + self.get_value("CASETOOLS"), newcase.get_value("CASETOOLS") + ) + + newcase.flush(flushall=True) + + # copy user_ files + cloneroot = self.get_case_root() + files = glob.glob(cloneroot + "/user_*") + + for item in files: + safe_copy(item, newcaseroot) + + # copy SourceMod and Buildconf files + # if symlinks exist, copy rather than follow links + for casesub in ("SourceMods", "Buildconf"): + shutil.copytree( + os.path.join(cloneroot, casesub), + os.path.join(newcaseroot, casesub), + symlinks=True, + ) + + # copy the postprocessing directory if it exists + if os.path.isdir(os.path.join(cloneroot, "postprocess")): + shutil.copytree( + os.path.join(cloneroot, "postprocess"), + os.path.join(newcaseroot, "postprocess"), + symlinks=True, + ) + + # lock env_case.xml in new case + lock_file("env_case.xml", newcaseroot) + + # apply user_mods if appropriate + newcase_root = newcase.get_value("CASEROOT") + if user_mods_dirs is not None: + if keepexe: + # If keepexe CANNOT change any env_build.xml variables - so make a temporary copy of + # env_build.xml and verify that it has not been modified + safe_copy( + os.path.join(newcaseroot, "env_build.xml"), + os.path.join(newcaseroot, "LockedFiles", "env_build.xml"), + ) + + # Now apply contents of all specified user_mods directories + for one_user_mods_dir in user_mods_dirs: + apply_user_mods(newcase_root, one_user_mods_dir, keepexe=keepexe) + + # Determine if env_build.xml has changed + if keepexe: + success, comment = compare_files( + os.path.join(newcaseroot, "env_build.xml"), + os.path.join(newcaseroot, "LockedFiles", "env_build.xml"), + ) + if not success: + logger.warning(comment) + shutil.rmtree(newcase_root) + expect( + False, + "env_build.xml cannot be changed via usermods if keepexe is an option: \n " + "Failed to clone case, removed 
{}\n".format(newcase_root), + ) + + # if keep executable, then remove the new case SourceMods directory and link SourceMods to + # the clone directory + if keepexe: + shutil.rmtree(os.path.join(newcase_root, "SourceMods")) + os.symlink( + os.path.join(cloneroot, "SourceMods"), + os.path.join(newcase_root, "SourceMods"), + ) + + # Update README.case + fclone = open(cloneroot + "/README.case", "r") + fnewcase = open(newcaseroot + "/README.case", "a") + fnewcase.write("\n *** original clone README follows ****") + fnewcase.write("\n " + fclone.read()) + + clonename = self.get_value("CASE") + logger.info( + " Successfully created new case {} from clone case {} ".format( + newcasename, clonename + ) + ) + + newcase.case_setup() + + return newcase
+ + + +# pylint: disable=unused-argument +def _copy_user_modified_to_clone(self, origpath, newpath): + """ + If file_ exists and is a link in newpath, and exists but is not a + link in origpath, copy origpath file to newpath + """ + for file_ in os.listdir(newpath): + if ( + os.path.islink(os.path.join(newpath, file_)) + and os.path.isfile(os.path.join(origpath, file_)) + and not os.path.islink(os.path.join(origpath, file_)) + ): + logger.info("Copying user modified file {} to clone".format(file_)) + os.unlink(os.path.join(newpath, file_)) + safe_copy(os.path.join(origpath, file_), newpath) +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_cmpgen_namelists.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_cmpgen_namelists.html new file mode 100644 index 00000000000..c8441a57976 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_cmpgen_namelists.html @@ -0,0 +1,317 @@ + + + + + + CIME.case.case_cmpgen_namelists — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
+
+
+
+
+
+
+
+ +

Source code for CIME.case.case_cmpgen_namelists

+"""
+Library for case.cmpgen_namelists.
+case_cmpgen_namelists is a member of class Case from file case.py
+"""
+
+from CIME.XML.standard_module_setup import *
+
+from CIME.compare_namelists import is_namelist_file, compare_namelist_files
+from CIME.simple_compare import compare_files, compare_runconfigfiles
+from CIME.utils import append_status, safe_copy, SharedArea
+from CIME.test_status import *
+
+import os, shutil, traceback, stat, glob
+from distutils import dir_util
+
+logger = logging.getLogger(__name__)
+
+
+def _do_full_nl_comp(case, test, compare_name, baseline_root=None):
+    test_dir = case.get_value("CASEROOT")
+    casedoc_dir = os.path.join(test_dir, "CaseDocs")
+    baseline_root = (
+        case.get_value("BASELINE_ROOT") if baseline_root is None else baseline_root
+    )
+
+    all_match = True
+    baseline_dir = os.path.join(baseline_root, compare_name, test)
+    baseline_casedocs = os.path.join(baseline_dir, "CaseDocs")
+
+    # Start off by comparing everything in CaseDocs except a few arbitrary files (ugh!)
+    # TODO: Namelist files should have consistent suffix
+    all_items_to_compare = [
+        item
+        for item in glob.glob("{}/*".format(casedoc_dir))
+        if "README" not in os.path.basename(item)
+        and not item.endswith("doc")
+        and not item.endswith("prescribed")
+        and not os.path.basename(item).startswith(".")
+    ]
+
+    comments = "NLCOMP\n"
+    for item in all_items_to_compare:
+        baseline_counterpart = os.path.join(
+            baseline_casedocs
+            if os.path.dirname(item).endswith("CaseDocs")
+            else baseline_dir,
+            os.path.basename(item),
+        )
+        if not os.path.exists(baseline_counterpart):
+            comments += "Missing baseline namelist '{}'\n".format(baseline_counterpart)
+            all_match = False
+        else:
+            if item.endswith("runconfig") or item.endswith("runseq"):
+                success, current_comments = compare_runconfigfiles(
+                    baseline_counterpart, item, test
+                )
+            elif is_namelist_file(item):
+                success, current_comments = compare_namelist_files(
+                    baseline_counterpart, item, test
+                )
+            else:
+                success, current_comments = compare_files(
+                    baseline_counterpart, item, test
+                )
+
+            all_match &= success
+            if not success:
+                comments += "Comparison failed between '{}' and '{}'\n".format(
+                    item, baseline_counterpart
+                )
+
+            comments += current_comments
+
+    logging.info(comments)
+    return all_match, comments
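The CaseDocs filter at the top of `_do_full_nl_comp` can be pulled out on its own: README files, `*doc`, `*prescribed`, and dotfiles are skipped before any comparison runs. A sketch over plain names (`comparable_items` is an illustrative helper):

```python
import os

def comparable_items(names):
    # Same skip rules as the glob filter in _do_full_nl_comp above.
    return [
        n for n in names
        if "README" not in os.path.basename(n)
        and not n.endswith(("doc", "prescribed"))
        and not os.path.basename(n).startswith(".")
    ]

print(comparable_items(
    ["drv_in", "atm_in", "README.case", "nuopc.runconfig", ".hidden", "user.doc"]
))  # ['drv_in', 'atm_in', 'nuopc.runconfig']
```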
+
+
+def _do_full_nl_gen_impl(case, test, generate_name, baseline_root=None):
+    test_dir = case.get_value("CASEROOT")
+    casedoc_dir = os.path.join(test_dir, "CaseDocs")
+    baseline_root = (
+        case.get_value("BASELINE_ROOT") if baseline_root is None else baseline_root
+    )
+
+    baseline_dir = os.path.join(baseline_root, generate_name, test)
+    baseline_casedocs = os.path.join(baseline_dir, "CaseDocs")
+
+    if not os.path.isdir(baseline_dir):
+        os.makedirs(
+            baseline_dir, stat.S_IRWXU | stat.S_IRWXG | stat.S_IXOTH | stat.S_IROTH
+        )
+
+    if os.path.isdir(baseline_casedocs):
+        shutil.rmtree(baseline_casedocs)
+
+    dir_util.copy_tree(casedoc_dir, baseline_casedocs, preserve_mode=False)
+
+    for item in glob.glob(os.path.join(test_dir, "user_nl*")):
+        preexisting_baseline = os.path.join(baseline_dir, os.path.basename(item))
+        if os.path.exists(preexisting_baseline):
+            os.remove(preexisting_baseline)
+
+        safe_copy(item, baseline_dir, preserve_meta=False)
+
+
+def _do_full_nl_gen(case, test, generate_name, baseline_root=None):
+    with SharedArea():
+        _do_full_nl_gen_impl(case, test, generate_name, baseline_root=baseline_root)
+
+
+
+[docs] +def case_cmpgen_namelists( + self, + compare=False, + generate=False, + compare_name=None, + generate_name=None, + baseline_root=None, + logfile_name="TestStatus.log", +): + expect(self.get_value("TEST"), "Only makes sense to run this for a test case") + + caseroot, casebaseid = self.get_value("CASEROOT"), self.get_value("CASEBASEID") + + if not compare: + compare = self.get_value("COMPARE_BASELINE") + if not generate: + generate = self.get_value("GENERATE_BASELINE") + + if not compare and not generate: + logging.debug("No namelists compares requested") + return True + + # create namelists for case if they haven't been already + casedocs = os.path.join(caseroot, "CaseDocs") + if not os.path.exists(os.path.join(casedocs, "drv_in")): + self.create_namelists() + + test_name = casebaseid if casebaseid is not None else self.get_value("CASE") + with TestStatus(test_dir=caseroot, test_name=test_name) as ts: + try: + # Inside this try are where we catch non-fatal errors, IE errors involving + # baseline operations which may not directly impact the functioning of the viability of this case + if compare and not compare_name: + compare_name = self.get_value("BASELINE_NAME_CMP") + expect( + compare_name, + "Was asked to do baseline compare but unable to determine baseline name", + ) + logging.info( + "Comparing namelists with baselines '{}'".format(compare_name) + ) + if generate and not generate_name: + generate_name = self.get_value("BASELINE_NAME_GEN") + expect( + generate_name, + "Was asked to do baseline generation but unable to determine baseline name", + ) + logging.info( + "Generating namelists to baselines '{}'".format(generate_name) + ) + + success = True + output = "" + if compare: + success, output = _do_full_nl_comp( + self, test_name, compare_name, baseline_root + ) + if not success and ts.get_status(RUN_PHASE) is not None: + run_warn = """NOTE: It is not necessarily safe to compare namelists after RUN +phase has completed. 
Running a case can pollute namelists. The namelists +kept in the baselines are pre-RUN namelists.""" + output += run_warn + logging.info(run_warn) + if generate: + _do_full_nl_gen(self, test_name, generate_name, baseline_root) + except Exception: + success = False + ts.set_status(NAMELIST_PHASE, TEST_FAIL_STATUS) + warn = "Exception during namelist operations:\n{}\n{}".format( + sys.exc_info()[1], traceback.format_exc() + ) + output += warn + logging.warning(warn) + finally: + ts.set_status( + NAMELIST_PHASE, TEST_PASS_STATUS if success else TEST_FAIL_STATUS + ) + try: + append_status(output, logfile_name, caseroot=caseroot) + except IOError: + pass + + return success
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_run.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_run.html new file mode 100644 index 00000000000..29fa63fc167 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_run.html @@ -0,0 +1,718 @@ + + + + + + CIME.case.case_run — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.case.case_run

+"""
+case_run is a member of Class Case
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.config import Config
+from CIME.utils import gzip_existing_file, new_lid, run_and_log_case_status
+from CIME.utils import run_sub_or_cmd, append_status, safe_copy, model_log, CIMEError
+from CIME.utils import get_model, batch_jobid
+from CIME.get_timing import get_timing
+
+import shutil, time, sys, os, glob
+
+logger = logging.getLogger(__name__)
+
+###############################################################################
+def _pre_run_check(case, lid, skip_pnl=False, da_cycle=0):
+    ###############################################################################
+
+    # Pre-run initialization code.
+    if da_cycle > 0:
+        case.create_namelists(component="cpl")
+        return
+
+    caseroot = case.get_value("CASEROOT")
+    din_loc_root = case.get_value("DIN_LOC_ROOT")
+    rundir = case.get_value("RUNDIR")
+
+    if case.get_value("TESTCASE") == "PFS":
+        for filename in ("env_mach_pes.xml", "software_environment.txt"):
+            fullpath = os.path.join(caseroot, filename)
+            safe_copy(fullpath, "{}.{}".format(filename, lid))
+
+    # check for locked files, may impact BUILD_COMPLETE
+    skip = None
+    if case.get_value("EXTERNAL_WORKFLOW"):
+        skip = "env_batch"
+    case.check_lockedfiles(skip=skip)
+    logger.debug("check_lockedfiles OK")
+    build_complete = case.get_value("BUILD_COMPLETE")
+
+    # check that build is done
+    expect(
+        build_complete,
+        "BUILD_COMPLETE is not true\nPlease rebuild the model interactively",
+    )
+    logger.debug("build complete is {} ".format(build_complete))
+
+    # load the module environment...
+    case.load_env(reset=True)
+
+    # create the timing directories, optionally cleaning them if needed.
+    if os.path.isdir(os.path.join(rundir, "timing")):
+        shutil.rmtree(os.path.join(rundir, "timing"))
+
+    os.makedirs(os.path.join(rundir, "timing", "checkpoints"))
+
+    # This needs to be done every time the LID changes in order for log files to be set up correctly
+    # The following also needs to be called in case a user changes a user_nl_xxx file OR an env_run.xml
+    # variable while the job is in the queue
+    model_log(
+        "e3sm",
+        logger,
+        "{} NAMELIST CREATION BEGINS HERE".format(time.strftime("%Y-%m-%d %H:%M:%S")),
+    )
+    if skip_pnl:
+        case.create_namelists(component="cpl")
+    else:
+        logger.info("Generating namelists for {}".format(caseroot))
+        case.create_namelists()
+
+    model_log(
+        "e3sm",
+        logger,
+        "{} NAMELIST CREATION HAS FINISHED".format(time.strftime("%Y-%m-%d %H:%M:%S")),
+    )
+
+    logger.info(
+        "-------------------------------------------------------------------------"
+    )
+    logger.info(" - Prestage required restarts into {}".format(rundir))
+    logger.info(
+        " - Case input data directory (DIN_LOC_ROOT) is {} ".format(din_loc_root)
+    )
+    logger.info(" - Checking for required input datasets in DIN_LOC_ROOT")
+    logger.info(
+        "-------------------------------------------------------------------------"
+    )
+
+
+###############################################################################
+def _run_model_impl(case, lid, skip_pnl=False, da_cycle=0):
+    ###############################################################################
+
+    model_log(
+        "e3sm",
+        logger,
+        "{} PRE_RUN_CHECK BEGINS HERE".format(time.strftime("%Y-%m-%d %H:%M:%S")),
+    )
+    _pre_run_check(case, lid, skip_pnl=skip_pnl, da_cycle=da_cycle)
+    model_log(
+        "e3sm",
+        logger,
+        "{} PRE_RUN_CHECK HAS FINISHED".format(time.strftime("%Y-%m-%d %H:%M:%S")),
+    )
+
+    model = case.get_value("MODEL")
+
+    # Set OMP_NUM_THREADS
+    os.environ["OMP_NUM_THREADS"] = str(case.thread_count)
+
+    # Run the model
+    cmd = case.get_mpirun_cmd(allow_unresolved_envvars=False)
+    logger.info("run command is {} ".format(cmd))
+
+    rundir = case.get_value("RUNDIR")
+
+    # MPIRUN_RETRY_REGEX allows the mpi command to be reattempted if the
+    # failure described by that regular expression is matched in the model log
+    # case.spare_nodes is overloaded and may also represent the number of
+    # retries to attempt if ALLOCATE_SPARE_NODES is False
+    retry_run_re = case.get_value("MPIRUN_RETRY_REGEX")
+    node_fail_re = case.get_value("NODE_FAIL_REGEX")
+    retry_count = 0
+    if retry_run_re:
+        retry_run_regex = re.compile(re.escape(retry_run_re))
+        retry_count = case.get_value("MPIRUN_RETRY_COUNT")
+    if node_fail_re:
+        node_fail_regex = re.compile(re.escape(node_fail_re))
+
+    is_batch = case.get_value("BATCH_SYSTEM") is not None
+    msg_func = None
+
+    if is_batch:
+        jobid = batch_jobid()
+        msg_func = lambda *args: jobid if jobid else ""
+
+    loop = True
+    while loop:
+        loop = False
+
+        model_log(
+            "e3sm",
+            logger,
+            "{} SAVE_PRERUN_PROVENANCE BEGINS HERE".format(
+                time.strftime("%Y-%m-%d %H:%M:%S")
+            ),
+        )
+        try:
+            Config.instance().save_prerun_provenance(case)
+        except AttributeError:
+            logger.debug("No hook for saving prerun provenance was executed")
+        model_log(
+            "e3sm",
+            logger,
+            "{} SAVE_PRERUN_PROVENANCE HAS FINISHED".format(
+                time.strftime("%Y-%m-%d %H:%M:%S")
+            ),
+        )
+
+        model_log(
+            "e3sm",
+            logger,
+            "{} MODEL EXECUTION BEGINS HERE".format(time.strftime("%Y-%m-%d %H:%M:%S")),
+        )
+        run_func = lambda: run_cmd_no_fail(cmd, from_dir=rundir)
+        case.flush()
+
+        try:
+            run_and_log_case_status(
+                run_func,
+                "model execution",
+                custom_starting_msg_functor=msg_func,
+                custom_success_msg_functor=msg_func,
+                caseroot=case.get_value("CASEROOT"),
+                is_batch=is_batch,
+            )
+            cmd_success = True
+        except CIMEError:
+            cmd_success = False
+
+        # The run will potentially take a very long time. We need to
+        # allow the user to xmlchange things in their case.
+        #
+        # WARNING: All case variables are reloaded after this call to get the
+        # new values of any variables that may have been changed by
+        # the user during model execution. Thus, any local variables
+        # set from case variables before this point may be
+        # inconsistent with their latest values in the xml files, so
+        # should generally be reloaded (via case.get_value(XXX)) if they are still needed.
+        case.read_xml()
+
+        model_log(
+            "e3sm",
+            logger,
+            "{} MODEL EXECUTION HAS FINISHED".format(
+                time.strftime("%Y-%m-%d %H:%M:%S")
+            ),
+        )
+
+        model_logfile = os.path.join(rundir, model + ".log." + lid)
+        # Determine if failure was due to a failed node, if so, try to restart
+        if retry_run_re or node_fail_re:
+            model_logfile = os.path.join(rundir, model + ".log." + lid)
+            if os.path.exists(model_logfile):
+                num_node_fails = 0
+                num_retry_fails = 0
+                if node_fail_re:
+                    num_node_fails = len(
+                        node_fail_regex.findall(open(model_logfile, "r").read())
+                    )
+                if retry_run_re:
+                    num_retry_fails = len(
+                        retry_run_regex.findall(open(model_logfile, "r").read())
+                    )
+                logger.debug(
+                    "RETRY: num_retry_fails {} spare_nodes {} retry_count {}".format(
+                        num_retry_fails, case.spare_nodes, retry_count
+                    )
+                )
+                if num_node_fails > 0 and case.spare_nodes >= num_node_fails:
+                    # We failed due to node failure!
+                    logger.warning(
+                        "Detected model run failed due to node failure, restarting"
+                    )
+                    case.spare_nodes -= num_node_fails
+                    loop = True
+                    case.set_value(
+                        "CONTINUE_RUN", case.get_value("RESUBMIT_SETS_CONTINUE_RUN")
+                    )
+                elif num_retry_fails > 0 and retry_count >= num_retry_fails:
+                    logger.warning("Detected model run failed, restarting")
+                    retry_count -= 1
+                    loop = True
+
+                if loop:
+                    # Archive the last consistent set of restart files and restore them
+                    if case.get_value("DOUT_S"):
+                        case.case_st_archive(resubmit=False)
+                        case.restore_from_archive()
+
+                    lid = new_lid(case=case)
+                    case.create_namelists()
+
+        if not cmd_success and not loop:
+            # We failed and we're not restarting
+            expect(
+                False,
+                "RUN FAIL: Command '{}' failed\nSee log file for details: {}".format(
+                    cmd, model_logfile
+                ),
+            )
+
+    model_log(
+        "e3sm",
+        logger,
+        "{} POST_RUN_CHECK BEGINS HERE".format(time.strftime("%Y-%m-%d %H:%M:%S")),
+    )
+    _post_run_check(case, lid)
+    model_log(
+        "e3sm",
+        logger,
+        "{} POST_RUN_CHECK HAS FINISHED".format(time.strftime("%Y-%m-%d %H:%M:%S")),
+    )
+
+    return lid
+
+
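The retry logic in `_run_model_impl` above decides whether to restart by counting occurrences of the node-failure and retry patterns in the model log with `re.findall`. A minimal, self-contained sketch of that counting (the pattern strings and log text here are hypothetical; CIME compiles the real patterns from the machine configuration):

```python
import re

# Hypothetical patterns; the real ones come from the machine configuration.
node_fail_regex = re.compile(r"NODE FAILURE")
retry_run_regex = re.compile(r"RETRY THE RUN")

# Stand-in for reading the model log file.
log_text = (
    "step 1 ok\n"
    "NODE FAILURE on nid00123\n"
    "step 2 ok\n"
    "NODE FAILURE on nid00456\n"
)

num_node_fails = len(node_fail_regex.findall(log_text))
num_retry_fails = len(retry_run_regex.findall(log_text))
print(num_node_fails, num_retry_fails)  # -> 2 0
```

When spare nodes are available and `num_node_fails` is nonzero, the run is restarted from the last consistent set of restart files.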
+###############################################################################
+def _run_model(case, lid, skip_pnl=False, da_cycle=0):
+    ###############################################################################
+    functor = lambda: _run_model_impl(case, lid, skip_pnl=skip_pnl, da_cycle=da_cycle)
+
+    is_batch = case.get_value("BATCH_SYSTEM") is not None
+    msg_func = None
+
+    if is_batch:
+        jobid = batch_jobid()
+        msg_func = lambda *args: jobid if jobid is not None else ""
+
+    return run_and_log_case_status(
+        functor,
+        "case.run",
+        custom_starting_msg_functor=msg_func,
+        custom_success_msg_functor=msg_func,
+        caseroot=case.get_value("CASEROOT"),
+        is_batch=is_batch,
+    )
+
+
+###############################################################################
+def _post_run_check(case, lid):
+    ###############################################################################
+
+    rundir = case.get_value("RUNDIR")
+    driver = case.get_value("COMP_INTERFACE")
+    model = get_model()
+
+    fv3_standalone = False
+
+    if "CPL" not in case.get_values("COMP_CLASSES"):
+        fv3_standalone = True
+    if driver == "nuopc":
+        if fv3_standalone:
+            file_prefix = model
+        else:
+            file_prefix = "drv"
+    else:
+        file_prefix = "cpl"
+
+    cpl_ninst = 1
+    if case.get_value("MULTI_DRIVER"):
+        cpl_ninst = case.get_value("NINST_MAX")
+    cpl_logs = []
+
+    if cpl_ninst > 1:
+        for inst in range(cpl_ninst):
+            cpl_logs.append(
+                os.path.join(rundir, file_prefix + "_%04d.log." % (inst + 1) + lid)
+            )
+    else:
+        cpl_logs = [os.path.join(rundir, file_prefix + ".log." + lid)]
+
+    cpl_logfile = cpl_logs[0]
+
+    # find the last model.log and cpl.log
+    model_logfile = os.path.join(rundir, model + ".log." + lid)
+    if not os.path.isfile(model_logfile):
+        expect(False, "Model did not complete, no {} log file ".format(model_logfile))
+    elif os.stat(model_logfile).st_size == 0:
+        expect(False, "Run FAILED")
+    else:
+        count_ok = 0
+        for cpl_logfile in cpl_logs:
+            logger.debug("cpl_logfile {}".format(cpl_logfile))
+            if not os.path.isfile(cpl_logfile):
+                break
+            with open(cpl_logfile, "r") as fd:
+                if fv3_standalone and "HAS ENDED" in fd.read():
+                    count_ok += 1
+                elif not fv3_standalone and "SUCCESSFUL TERMINATION" in fd.read():
+                    count_ok += 1
+        if count_ok != cpl_ninst:
+            expect(False, "Model did not complete - see {} \n ".format(cpl_logfile))
+
+
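`_post_run_check` above derives per-instance coupler log names by inserting a zero-padded instance suffix before the `lid`. A small sketch of that naming scheme (the run directory and `lid` values are illustrative):

```python
import os

def cpl_log_paths(rundir, file_prefix, cpl_ninst, lid):
    # Mirrors the naming in _post_run_check: a zero-padded 4-digit
    # instance suffix is used only when there is more than one instance.
    if cpl_ninst > 1:
        return [
            os.path.join(rundir, "{}_{:04d}.log.{}".format(file_prefix, inst + 1, lid))
            for inst in range(cpl_ninst)
        ]
    return [os.path.join(rundir, "{}.log.{}".format(file_prefix, lid))]

print(cpl_log_paths("/scratch/run", "cpl", 2, "240101-120000"))
```

Each such log is then scanned for the success string ("SUCCESSFUL TERMINATION", or "HAS ENDED" for standalone FV3).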
+###############################################################################
+def _save_logs(case, lid):
+    ###############################################################################
+    rundir = case.get_value("RUNDIR")
+    logfiles = glob.glob(os.path.join(rundir, "*.log.{}".format(lid)))
+    for logfile in logfiles:
+        if os.path.isfile(logfile):
+            gzip_existing_file(logfile)
+
+
+###############################################################################
+def _resubmit_check(case):
+    ###############################################################################
+    """
+    Check whether a resubmission is needed from this particular job.
+    Note that Mira requires special logic.
+    """
+    dout_s = case.get_value("DOUT_S")
+    logger.debug("dout_s {}".format(dout_s))
+    mach = case.get_value("MACH")
+    logger.debug("mach {}".format(mach))
+    resubmit_num = case.get_value("RESUBMIT")
+    logger.debug("resubmit_num {}".format(resubmit_num))
+    # If dout_s is True, then short-term archiving handles the resubmit.
+    # If dout_s is True and the machine is mira, submit the st_archive script.
+    resubmit = False
+    if not dout_s and resubmit_num > 0:
+        resubmit = True
+    elif dout_s and mach == "mira":
+        caseroot = case.get_value("CASEROOT")
+        cimeroot = case.get_value("CIMEROOT")
+        cmd = "ssh cooleylogin1 'cd {case}; CIMEROOT={root} ./case.submit {case} --job case.st_archive'".format(
+            case=caseroot, root=cimeroot
+        )
+        run_cmd(cmd, verbose=True)
+
+    if resubmit:
+        job = case.get_primary_job()
+
+        case.submit(job=job, resubmit=True)
+
+    logger.debug("resubmit after check is {}".format(resubmit))
+
+
+###############################################################################
+def _do_external(script_name, caseroot, rundir, lid, prefix):
+    ###############################################################################
+    expect(
+        os.path.isfile(script_name), "External script {} not found".format(script_name)
+    )
+    filename = "{}.external.log.{}".format(prefix, lid)
+    outfile = os.path.join(rundir, filename)
+    append_status("Starting script {}".format(script_name), "CaseStatus")
+    run_sub_or_cmd(
+        script_name,
+        [caseroot],
+        (os.path.basename(script_name).split(".", 1))[0],
+        [caseroot],
+        logfile=outfile,
+    )  # For sub, use case?
+    append_status("Completed script {}".format(script_name), "CaseStatus")
+
+
+###############################################################################
+def _do_data_assimilation(da_script, caseroot, cycle, lid, rundir):
+    ###############################################################################
+    expect(
+        os.path.isfile(da_script),
+        "Data Assimilation script {} not found".format(da_script),
+    )
+    filename = "da.log.{}".format(lid)
+    outfile = os.path.join(rundir, filename)
+    run_sub_or_cmd(
+        da_script,
+        [caseroot, cycle],
+        os.path.basename(da_script),
+        [caseroot, cycle],
+        logfile=outfile,
+    )  # For sub, use case?
+
+
+###############################################################################
+
+[docs] +def case_run(self, skip_pnl=False, set_continue_run=False, submit_resubmits=False): + ############################################################################### + model_log( + "e3sm", + logger, + "{} CASE.RUN BEGINS HERE".format(time.strftime("%Y-%m-%d %H:%M:%S")), + ) + # Set up the run, run the model, do the postrun steps + + # set up the LID + lid = new_lid(case=self) + + prerun_script = self.get_value("PRERUN_SCRIPT") + if prerun_script: + model_log( + "e3sm", + logger, + "{} PRERUN_SCRIPT BEGINS HERE".format(time.strftime("%Y-%m-%d %H:%M:%S")), + ) + self.flush() + _do_external( + prerun_script, + self.get_value("CASEROOT"), + self.get_value("RUNDIR"), + lid, + prefix="prerun", + ) + self.read_xml() + model_log( + "e3sm", + logger, + "{} PRERUN_SCRIPT HAS FINISHED".format(time.strftime("%Y-%m-%d %H:%M:%S")), + ) + + # We might need to tweak these if we want to allow the user to change them + data_assimilation_cycles = self.get_value("DATA_ASSIMILATION_CYCLES") + data_assimilation_script = self.get_value("DATA_ASSIMILATION_SCRIPT") + data_assimilation = ( + data_assimilation_cycles > 0 + and len(data_assimilation_script) > 0 + and os.path.isfile(data_assimilation_script) + ) + + for cycle in range(data_assimilation_cycles): + # After the first DA cycle, runs are restart runs + if cycle > 0: + lid = new_lid() + self.set_value("CONTINUE_RUN", self.get_value("RESUBMIT_SETS_CONTINUE_RUN")) + + # WARNING: All case variables are reloaded during run_model to get + # new values of any variables that may have been changed by + # the user during model execution. Thus, any local variables + # set from case variables before this point may be + # inconsistent with their latest values in the xml files, so + # should generally be reloaded (via case.get_value(XXX)) if they are still needed. 
+ model_log( + "e3sm", + logger, + "{} RUN_MODEL BEGINS HERE".format(time.strftime("%Y-%m-%d %H:%M:%S")), + ) + lid = _run_model(self, lid, skip_pnl, da_cycle=cycle) + model_log( + "e3sm", + logger, + "{} RUN_MODEL HAS FINISHED".format(time.strftime("%Y-%m-%d %H:%M:%S")), + ) + + if self.get_value("CHECK_TIMING") or self.get_value("SAVE_TIMING"): + model_log( + "e3sm", + logger, + "{} GET_TIMING BEGINS HERE".format(time.strftime("%Y-%m-%d %H:%M:%S")), + ) + get_timing(self, lid) # Run the getTiming script + model_log( + "e3sm", + logger, + "{} GET_TIMING HAS FINISHED".format(time.strftime("%Y-%m-%d %H:%M:%S")), + ) + + if data_assimilation: + model_log( + "e3sm", + logger, + "{} DO_DATA_ASSIMILATION BEGINS HERE".format( + time.strftime("%Y-%m-%d %H:%M:%S") + ), + ) + self.flush() + _do_data_assimilation( + data_assimilation_script, + self.get_value("CASEROOT"), + cycle, + lid, + self.get_value("RUNDIR"), + ) + self.read_xml() + model_log( + "e3sm", + logger, + "{} DO_DATA_ASSIMILATION HAS FINISHED".format( + time.strftime("%Y-%m-%d %H:%M:%S") + ), + ) + + _save_logs(self, lid) # Copy log files back to caseroot + + model_log( + "e3sm", + logger, + "{} SAVE_POSTRUN_PROVENANCE BEGINS HERE".format( + time.strftime("%Y-%m-%d %H:%M:%S") + ), + ) + try: + Config.instance().save_postrun_provenance(self, lid) + except AttributeError: + logger.debug("No hook for saving postrun provenance was executed") + model_log( + "e3sm", + logger, + "{} SAVE_POSTRUN_PROVENANCE HAS FINISHED".format( + time.strftime("%Y-%m-%d %H:%M:%S") + ), + ) + + postrun_script = self.get_value("POSTRUN_SCRIPT") + if postrun_script: + model_log( + "e3sm", + logger, + "{} POSTRUN_SCRIPT BEGINS HERE".format(time.strftime("%Y-%m-%d %H:%M:%S")), + ) + self.flush() + _do_external( + postrun_script, + self.get_value("CASEROOT"), + self.get_value("RUNDIR"), + lid, + prefix="postrun", + ) + self.read_xml() + _save_logs(self, lid) + model_log( + "e3sm", + logger, + "{} POSTRUN_SCRIPT HAS 
FINISHED".format(time.strftime("%Y-%m-%d %H:%M:%S")), + ) + + if set_continue_run: + self.set_value("CONTINUE_RUN", self.get_value("RESUBMIT_SETS_CONTINUE_RUN")) + + external_workflow = self.get_value("EXTERNAL_WORKFLOW") + if not external_workflow: + logger.warning("check for resubmit") + + logger.debug("submit_resubmits is {}".format(submit_resubmits)) + if submit_resubmits: + _resubmit_check(self) + + model_log( + "e3sm", + logger, + "{} CASE.RUN HAS FINISHED".format(time.strftime("%Y-%m-%d %H:%M:%S")), + ) + return True
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_setup.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_setup.html new file mode 100644 index 00000000000..3f90c324a28 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_setup.html @@ -0,0 +1,633 @@ + + + + + + CIME.case.case_setup — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.case.case_setup

+"""
+Library for case.setup.
+case_setup is a member of class Case from file case.py
+"""
+
+import os
+
+from CIME.XML.standard_module_setup import *
+from CIME.config import Config
+from CIME.XML.machines import Machines
+from CIME.BuildTools.configure import (
+    generate_env_mach_specific,
+    copy_depends_files,
+)
+from CIME.utils import (
+    run_and_log_case_status,
+    get_batch_script_for_job,
+    safe_copy,
+    file_contains_python_function,
+    import_from_file,
+    copy_local_macros_to_dir,
+)
+from CIME.utils import batch_jobid
+from CIME.test_status import *
+from CIME.locked_files import unlock_file, lock_file
+
+import errno, shutil
+
+logger = logging.getLogger(__name__)
+
+
+###############################################################################
+def _build_usernl_files(case, model, comp):
+    ###############################################################################
+    """
+    Create user_nl_xxx files; expects cwd to be caseroot
+    """
+    model = model.upper()
+    if model == "DRV":
+        model_file = case.get_value("CONFIG_CPL_FILE")
+    else:
+        model_file = case.get_value("CONFIG_{}_FILE".format(model))
+    expect(
+        model_file is not None,
+        "Could not locate CONFIG_{}_FILE in config_files.xml".format(model),
+    )
+    model_dir = os.path.dirname(model_file)
+
+    expect(
+        os.path.isdir(model_dir),
+        "cannot find cime_config directory {} for component {}".format(model_dir, comp),
+    )
+    comp_interface = case.get_value("COMP_INTERFACE")
+    multi_driver = case.get_value("MULTI_DRIVER")
+    ninst = 1
+
+    if multi_driver:
+        ninst_max = case.get_value("NINST_MAX")
+        if comp_interface != "nuopc" and model not in ("DRV", "CPL", "ESP"):
+            ninst_model = case.get_value("NINST_{}".format(model))
+            expect(
+                ninst_model == ninst_max,
+                "MULTI_DRIVER mode, all components must have same NINST value.  NINST_{} != {}".format(
+                    model, ninst_max
+                ),
+            )
+    if comp == "cpl":
+        if not os.path.exists("user_nl_cpl"):
+            safe_copy(os.path.join(model_dir, "user_nl_cpl"), ".")
+    else:
+        if comp_interface == "nuopc":
+            ninst = case.get_value("NINST")
+        elif ninst == 1:
+            ninst = case.get_value("NINST_{}".format(model))
+        default_nlfile = "user_nl_{}".format(comp)
+        model_nl = os.path.join(model_dir, default_nlfile)
+        user_nl_list = _get_user_nl_list(case, default_nlfile, model_dir)
+
+        # Note that, even if there are multiple elements of user_nl_list (i.e., we are
+        # creating multiple user_nl files for this component with different names), all of
+        # them will start out as copies of the single user_nl_comp file in the model's
+        # source tree - unless the file has _stream in its name
+        for nlfile in user_nl_list:
+            if ninst > 1:
+                for inst_counter in range(1, ninst + 1):
+                    inst_nlfile = "{}_{:04d}".format(nlfile, inst_counter)
+                    if not os.path.exists(inst_nlfile):
+                        # If there is a user_nl_foo in the case directory, copy it
+                        # to user_nl_foo_INST; otherwise, copy the original
+                        # user_nl_foo from model_dir
+                        if os.path.exists(nlfile):
+                            safe_copy(nlfile, inst_nlfile)
+                        elif "_stream" in nlfile:
+                            safe_copy(os.path.join(model_dir, nlfile), inst_nlfile)
+                        elif os.path.exists(model_nl):
+                            safe_copy(model_nl, inst_nlfile)
+            else:
+                # ninst = 1
+                if not os.path.exists(nlfile):
+                    if "_stream" in nlfile:
+                        safe_copy(os.path.join(model_dir, nlfile), nlfile)
+                    elif os.path.exists(model_nl):
+                        safe_copy(model_nl, nlfile)
+
+
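`_build_usernl_files` above stamps each namelist file with a four-digit instance counter when `ninst > 1`. A sketch of the resulting file names (the component name is illustrative):

```python
def instance_nl_names(nlfile, ninst):
    # Same "{}_{:04d}" suffix _build_usernl_files uses for ninst > 1;
    # a single instance keeps the plain user_nl_<comp> name.
    if ninst > 1:
        return ["{}_{:04d}".format(nlfile, inst) for inst in range(1, ninst + 1)]
    return [nlfile]

print(instance_nl_names("user_nl_cam", 3))
# -> ['user_nl_cam_0001', 'user_nl_cam_0002', 'user_nl_cam_0003']
```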
+###############################################################################
+def _get_user_nl_list(case, default_nlfile, model_dir):
+    """Get a list of user_nl files needed by this component
+
+    Typically, each component has a single user_nl file: user_nl_comp. However, some
+    components use multiple user_nl files. These components can define a function in
+    cime_config/buildnml named get_user_nl_list, which returns a list of user_nl files
+    that need to be staged in the case directory. For example, in a run where CISM is
+    modeling both Antarctica and Greenland, its get_user_nl_list function will return
+    ['user_nl_cism', 'user_nl_cism_ais', 'user_nl_cism_gris'].
+
+    If that function is NOT defined in the component's buildnml, then we return the given
+    default_nlfile.
+
+    """
+    # Check if buildnml is present in the expected location, and if so, whether it
+    # contains the function "get_user_nl_list"; if so, we'll import the module and call
+    # that function; if not, we'll fall back on the default value.
+    buildnml_path = os.path.join(model_dir, "buildnml")
+    has_function = False
+    if os.path.isfile(buildnml_path) and file_contains_python_function(
+        buildnml_path, "get_user_nl_list"
+    ):
+        has_function = True
+
+    if has_function:
+        comp_buildnml = import_from_file("comp_buildnml", buildnml_path)
+        return comp_buildnml.get_user_nl_list(case)
+    else:
+        return [default_nlfile]
+
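The docstring above describes an optional protocol: a component's `cime_config/buildnml` script may define `get_user_nl_list(case)`. A self-contained sketch of loading such an extensionless script and calling the hook (the file contents are the CISM example from the docstring; the loading approach only approximates CIME's `import_from_file` helper):

```python
import importlib.util
import os
import tempfile
from importlib.machinery import SourceFileLoader

buildnml_src = (
    "def get_user_nl_list(case):\n"
    "    return ['user_nl_cism', 'user_nl_cism_ais', 'user_nl_cism_gris']\n"
)

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "buildnml")  # extensionless, as in a component tree
    with open(path, "w") as f:
        f.write(buildnml_src)
    # Load the script as a module even though it has no .py extension.
    loader = SourceFileLoader("comp_buildnml", path)
    spec = importlib.util.spec_from_loader("comp_buildnml", loader)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    names = mod.get_user_nl_list(case=None)

print(names)
```

If the script does not define the function, the caller falls back to the single default `user_nl` file.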
+
+###############################################################################
+def _create_macros_cmake(
+    caseroot, cmake_macros_dir, mach_obj, compiler, case_cmake_path
+):
+    ###############################################################################
+    if not os.path.isfile(os.path.join(caseroot, "Macros.cmake")):
+        safe_copy(os.path.join(cmake_macros_dir, "Macros.cmake"), caseroot)
+
+    if not os.path.exists(case_cmake_path):
+        os.mkdir(case_cmake_path)
+
+    # This impl is coupled to contents of Macros.cmake
+    os_ = mach_obj.get_value("OS")
+    mach = mach_obj.get_machine_name()
+    macros = [
+        "universal.cmake",
+        os_ + ".cmake",
+        compiler + ".cmake",
+        "{}_{}.cmake".format(compiler, os_),
+        mach + ".cmake",
+        "{}_{}.cmake".format(compiler, mach),
+        "CMakeLists.txt",
+    ]
+    for macro in macros:
+        repo_macro = os.path.join(cmake_macros_dir, macro)
+        case_macro = os.path.join(case_cmake_path, macro)
+        if not os.path.exists(case_macro) and os.path.exists(repo_macro):
+            safe_copy(repo_macro, case_cmake_path)
+
+    copy_depends_files(mach, mach_obj.machines_dir, caseroot, compiler)
+
+
+###############################################################################
+def _create_macros(
+    case, mach_obj, caseroot, compiler, mpilib, debug, comp_interface, sysos
+):
+    ###############################################################################
+    """
+    Creates Macros.make, Depends.compiler, Depends.machine, Depends.machine.compiler
+    and env_mach_specific.xml if they don't already exist.
+    """
+    reread = not os.path.isfile("env_mach_specific.xml")
+    new_cmake_macros_dir = case.get_value("CMAKE_MACROS_DIR")
+
+    if reread:
+        case.flush()
+        generate_env_mach_specific(
+            caseroot,
+            mach_obj,
+            compiler,
+            mpilib,
+            debug,
+            comp_interface,
+            sysos,
+            False,
+            threaded=case.get_build_threaded(),
+            noenv=True,
+        )
+        case.read_xml()
+
+    case_cmake_path = os.path.join(caseroot, "cmake_macros")
+
+    _create_macros_cmake(
+        caseroot, new_cmake_macros_dir, mach_obj, compiler, case_cmake_path
+    )
+    copy_local_macros_to_dir(
+        case_cmake_path, extra_machdir=case.get_value("EXTRA_MACHDIR")
+    )
+
+
+###############################################################################
+def _case_setup_impl(
+    case, caseroot, clean=False, test_mode=False, reset=False, keep=None
+):
+    ###############################################################################
+    os.chdir(caseroot)
+
+    non_local = case.get_value("NONLOCAL")
+
+    models = case.get_values("COMP_CLASSES")
+    mach = case.get_value("MACH")
+    compiler = case.get_value("COMPILER")
+    debug = case.get_value("DEBUG")
+    mpilib = case.get_value("MPILIB")
+    sysos = case.get_value("OS")
+    comp_interface = case.get_value("COMP_INTERFACE")
+    extra_machines_dir = case.get_value("EXTRA_MACHDIR")
+
+    expect(mach is not None, "xml variable MACH is not set")
+
+    mach_obj = Machines(machine=mach, extra_machines_dir=extra_machines_dir)
+
+    # Check that $DIN_LOC_ROOT exists or can be created:
+    if not non_local:
+        din_loc_root = case.get_value("DIN_LOC_ROOT")
+        testcase = case.get_value("TESTCASE")
+
+        if not os.path.isdir(din_loc_root):
+            try:
+                os.makedirs(din_loc_root)
+            except OSError as e:
+                if e.errno == errno.EACCES:
+                    logger.info("Invalid permissions to create {}".format(din_loc_root))
+
+        expect(
+            not (not os.path.isdir(din_loc_root) and testcase != "SBN"),
+            "inputdata root is not a directory or is not readable: {}".format(
+                din_loc_root
+            ),
+        )
+
+    # Remove batch scripts
+    if reset or clean:
+        # clean setup-generated files
+        batch_script = get_batch_script_for_job(case.get_primary_job())
+        files_to_clean = [
+            batch_script,
+            "env_mach_specific.xml",
+            "Macros.make",
+            "Macros.cmake",
+            "cmake_macros",
+        ]
+        for file_to_clean in files_to_clean:
+            if os.path.exists(file_to_clean) and not (keep and file_to_clean in keep):
+                if os.path.isdir(file_to_clean):
+                    shutil.rmtree(file_to_clean)
+                else:
+                    os.remove(file_to_clean)
+                logger.info("Successfully cleaned {}".format(file_to_clean))
+
+        if not test_mode:
+            # rebuild the models (even on restart)
+            case.set_value("BUILD_COMPLETE", False)
+
+        # Cannot leave case in bad state (missing env_mach_specific.xml)
+        if clean and not os.path.isfile("env_mach_specific.xml"):
+            case.flush()
+            generate_env_mach_specific(
+                caseroot,
+                mach_obj,
+                compiler,
+                mpilib,
+                debug,
+                comp_interface,
+                sysos,
+                False,
+                threaded=case.get_build_threaded(),
+                noenv=True,
+            )
+            case.read_xml()
+
+    if not clean:
+        if not non_local:
+            case.load_env()
+
+        _create_macros(
+            case, mach_obj, caseroot, compiler, mpilib, debug, comp_interface, sysos
+        )
+
+        # Set tasks to 1 if mpi-serial library
+        if mpilib == "mpi-serial":
+            case.set_value("NTASKS", 1)
+
+        # Check ninst.
+        # In CIME there can be multiple instances of each component model (an
+        # ensemble); NINST is the number of instances of that component.
+        comp_interface = case.get_value("COMP_INTERFACE")
+        if comp_interface == "nuopc":
+            ninst = case.get_value("NINST")
+
+        multi_driver = case.get_value("MULTI_DRIVER")
+
+        for comp in models:
+            ntasks = case.get_value("NTASKS_{}".format(comp))
+            if comp == "CPL":
+                continue
+            if comp_interface != "nuopc":
+                ninst = case.get_value("NINST_{}".format(comp))
+            if multi_driver:
+                if comp_interface != "nuopc":
+                    expect(
+                        case.get_value("NINST_LAYOUT_{}".format(comp)) == "concurrent",
+                        "If multi_driver is TRUE, NINST_LAYOUT_{} must be concurrent".format(
+                            comp
+                        ),
+                    )
+                case.set_value("NTASKS_PER_INST_{}".format(comp), ntasks)
+            else:
+                if ninst > ntasks:
+                    if ntasks == 1:
+                        case.set_value("NTASKS_{}".format(comp), ninst)
+                        ntasks = ninst
+                    else:
+                        expect(
+                            False,
+                            "NINST_{comp} value {ninst} greater than NTASKS_{comp} {ntasks}".format(
+                                comp=comp, ninst=ninst, ntasks=ntasks
+                            ),
+                        )
+
+                case.set_value(
+                    "NTASKS_PER_INST_{}".format(comp), max(1, int(ntasks / ninst))
+                )
+
+        if os.path.exists(get_batch_script_for_job(case.get_primary_job())):
+            logger.info(
+                "Machine/Decomp/Pes configuration has already been done ...skipping"
+            )
+
+            case.initialize_derived_attributes()
+
+            case.set_value("SMP_PRESENT", case.get_build_threaded())
+
+        else:
+            case.check_pelayouts_require_rebuild(models)
+
+            unlock_file("env_build.xml")
+            unlock_file("env_batch.xml")
+
+            case.flush()
+            case.check_lockedfiles()
+
+            case.initialize_derived_attributes()
+
+            cost_per_node = case.get_value("COSTPES_PER_NODE")
+            case.set_value("COST_PES", case.num_nodes * cost_per_node)
+            threaded = case.get_build_threaded()
+            case.set_value("SMP_PRESENT", threaded)
+            if threaded and case.total_tasks * case.thread_count > cost_per_node:
+                smt_factor = max(
+                    1.0, int(case.get_value("MAX_TASKS_PER_NODE") / cost_per_node)
+                )
+                case.set_value(
+                    "TOTALPES",
+                    case.iotasks
+                    + int(
+                        (case.total_tasks - case.iotasks)
+                        * max(1.0, float(case.thread_count) / smt_factor)
+                    ),
+                )
+            else:
+                case.set_value(
+                    "TOTALPES",
+                    (case.total_tasks - case.iotasks) * case.thread_count
+                    + case.iotasks,
+                )
+
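The `TOTALPES` arithmetic above can be checked with a small worked example: for threaded runs that oversubscribe a node, the per-task thread count is discounted by the SMT factor, while IO tasks are counted as-is. A sketch of the two branches (node sizes and task counts are illustrative):

```python
def totalpes(total_tasks, iotasks, thread_count, max_tasks_per_node,
             cost_per_node, threaded):
    # Mirrors the threaded/unthreaded accounting in _case_setup_impl.
    if threaded and total_tasks * thread_count > cost_per_node:
        smt_factor = max(1.0, int(max_tasks_per_node / cost_per_node))
        return iotasks + int(
            (total_tasks - iotasks) * max(1.0, float(thread_count) / smt_factor)
        )
    return (total_tasks - iotasks) * thread_count + iotasks

# 128 tasks, 4 of them IO, 2 threads each, SMT-2 node (64 cores, 128 hw threads)
print(totalpes(128, 4, 2, 128, 64, True))   # threads absorbed by SMT -> 128
print(totalpes(128, 4, 2, 128, 64, False))  # unthreaded accounting -> 252
```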
+            # May need to select new batch settings if pelayout changed (e.g. problem is now too big for prev-selected queue)
+            env_batch = case.get_env("batch")
+            env_batch.set_job_defaults([(case.get_primary_job(), {})], case)
+
+            # create batch files
+            env_batch.make_all_batch_files(case)
+
+            if Config.instance().make_case_run_batch_script and not case.get_value(
+                "TEST"
+            ):
+                input_batch_script = os.path.join(
+                    case.get_value("MACHDIR"), "template.case.run.sh"
+                )
+                env_batch.make_batch_script(
+                    input_batch_script,
+                    "case.run",
+                    case,
+                    outfile=get_batch_script_for_job("case.run.sh"),
+                )
+
+            # Make a copy of env_mach_pes.xml in order to be able
+            # to check that it does not change once case.setup is invoked
+            case.flush()
+            logger.debug("at copy TOTALPES = {}".format(case.get_value("TOTALPES")))
+            lock_file("env_mach_pes.xml")
+            lock_file("env_batch.xml")
+
+        # Create user_nl files for the required number of instances
+        if not os.path.exists("user_nl_cpl"):
+            logger.info("Creating user_nl_xxx files for components and cpl")
+
+        # loop over models
+        for model in models:
+            comp = case.get_value("COMP_{}".format(model))
+            logger.debug("Building {} usernl files".format(model))
+            _build_usernl_files(case, model, comp)
+            if comp == "cism":
+                glcroot = case.get_value("COMP_ROOT_DIR_GLC")
+                run_cmd_no_fail(
+                    "{}/cime_config/cism.template {}".format(glcroot, caseroot)
+                )
+            if comp == "cam":
+                camroot = case.get_value("COMP_ROOT_DIR_ATM")
+                logger.debug("Running cam.case_setup.py")
+                run_cmd_no_fail(
+                    "python {cam}/cime_config/cam.case_setup.py {cam} {case}".format(
+                        cam=camroot, case=caseroot
+                    )
+                )
+
+        _build_usernl_files(case, "drv", "cpl")
+
+        # Create needed directories for case
+        case.create_dirs()
+
+        logger.info(
+            "If an old case build already exists, might want to run 'case.build --clean' before building"
+        )
+
+        # Some tests need namelists created here (ERP) - so do this if we are in test mode
+        if (
+            test_mode or Config.instance().case_setup_generate_namelist
+        ) and not non_local:
+            logger.info("Generating component namelists as part of setup")
+            case.create_namelists()
+
+        # Record env information
+        env_module = case.get_env("mach_specific")
+        if mach == "zeus":
+            overrides = env_module.get_overrides_nodes(case)
+            logger.debug("Updating Zeus nodes {}".format(overrides))
+        env_module.make_env_mach_specific_file("sh", case)
+        env_module.make_env_mach_specific_file("csh", case)
+        if not non_local:
+            env_module.save_all_env_info("software_environment.txt")
+
+        logger.info(
+            "You can now run './preview_run' to get more info on how your case will be run"
+        )
+
+
+###############################################################################
+
+[docs] +def case_setup(self, clean=False, test_mode=False, reset=False, keep=None): + ############################################################################### + caseroot, casebaseid = self.get_value("CASEROOT"), self.get_value("CASEBASEID") + phase = "setup.clean" if clean else "case.setup" + functor = lambda: _case_setup_impl( + self, caseroot, clean=clean, test_mode=test_mode, reset=reset, keep=keep + ) + + is_batch = self.get_value("BATCH_SYSTEM") is not None + msg_func = None + + if is_batch: + jobid = batch_jobid() + msg_func = lambda *args: jobid if jobid is not None else "" + + if self.get_value("TEST") and not test_mode: + test_name = casebaseid if casebaseid is not None else self.get_value("CASE") + with TestStatus(test_dir=caseroot, test_name=test_name) as ts: + try: + run_and_log_case_status( + functor, + phase, + custom_starting_msg_functor=msg_func, + custom_success_msg_functor=msg_func, + caseroot=caseroot, + is_batch=is_batch, + ) + except BaseException: # Want to catch KeyboardInterrupt too + ts.set_status(SETUP_PHASE, TEST_FAIL_STATUS) + raise + else: + if clean: + ts.set_status(SETUP_PHASE, TEST_PEND_STATUS) + else: + ts.set_status(SETUP_PHASE, TEST_PASS_STATUS) + else: + run_and_log_case_status( + functor, + phase, + custom_starting_msg_functor=msg_func, + custom_success_msg_functor=msg_func, + caseroot=caseroot, + is_batch=is_batch, + )
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_st_archive.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_st_archive.html new file mode 100644 index 00000000000..06bcca6f67a --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_st_archive.html @@ -0,0 +1,1447 @@ + + + + + + CIME.case.case_st_archive — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.case.case_st_archive

+"""
+short term archiving
+case_st_archive, restore_from_archive, archive_last_restarts
+are members of class Case from file case.py
+"""
+
+import shutil, glob, re, os
+
+from CIME.XML.standard_module_setup import *
+from CIME.utils import (
+    run_and_log_case_status,
+    ls_sorted_by_mtime,
+    symlink_force,
+    safe_copy,
+    find_files,
+)
+from CIME.utils import batch_jobid
+from CIME.date import get_file_date
+from CIME.XML.archive import Archive
+from CIME.XML.files import Files
+from os.path import isdir, join
+
+logger = logging.getLogger(__name__)
+
+###############################################################################
+def _get_archive_fn_desc(archive_fn):
+    ###############################################################################
+    return "moving" if archive_fn is shutil.move else "copying"
+
+
+###############################################################################
+def _get_archive_file_fn(copy_only):
+    ###############################################################################
+    """
+    Returns the function to use for archiving some files
+    """
+    return safe_copy if copy_only else shutil.move
+
+
+###############################################################################
+def _get_datenames(casename, rundir):
+    ###############################################################################
+    """
+    Returns the date objects specifying the times of each restart file.
+    Note: we assume the coupler restart files exist and are consistent with the other component datenames.
+    Not doc-testable due to filesystem dependence
+    """
+    expect(isdir(rundir), "Cannot open directory {} ".format(rundir))
+
+    files = sorted(glob.glob(os.path.join(rundir, casename + ".cpl.r.*.nc")))
+    if not files:
+        files = sorted(glob.glob(os.path.join(rundir, casename + ".cpl_0001.r.*.nc")))
+
+    logger.debug("  cpl files : {} ".format(files))
+
+    if not files:
+        logger.warning(
+            "Cannot find a {}.cpl*.r.*.nc file in directory {} ".format(
+                casename, rundir
+            )
+        )
+
+    return [get_file_date(filename) for filename in files]
+
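The coupler-restart lookup above can be exercised without a run directory. The sketch below mirrors the glob fallback (single-instance `cpl.r` names first, then the multi-instance `cpl_0001.r` names) on an in-memory file list; `find_cpl_restarts` is a hypothetical helper for illustration, not CIME API.

```python
# Minimal sketch of the restart-file selection in _get_datenames, assuming
# a plain list of filenames instead of a real run directory.
import fnmatch


def find_cpl_restarts(casename, filenames):
    """Prefer single-instance cpl restarts; fall back to the _0001 multi-instance name."""
    primary = sorted(f for f in filenames if fnmatch.fnmatch(f, casename + ".cpl.r.*.nc"))
    if primary:
        return primary
    return sorted(f for f in filenames if fnmatch.fnmatch(f, casename + ".cpl_0001.r.*.nc"))
```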
+
+def _datetime_str(_date):
+    """
+    Returns the standard format associated with filenames.
+
+    >>> from CIME.date import date
+    >>> _datetime_str(date(5, 8, 22))
+    '0005-08-22-00000'
+    >>> _datetime_str(get_file_date("0011-12-09-00435"))
+    '0011-12-09-00435'
+    """
+
+    format_string = "{year:04d}-{month:02d}-{day:02d}-{seconds:05d}"
+    return format_string.format(
+        year=_date.year(),
+        month=_date.month(),
+        day=_date.day(),
+        seconds=_date.second_of_day(),
+    )
+
+
+def _datetime_str_mpas(_date):
+    """
+    Returns the mpas format associated with filenames.
+
+    >>> from CIME.date import date
+    >>> _datetime_str_mpas(date(5, 8, 22))
+    '0005-08-22_00:00:00'
+    >>> _datetime_str_mpas(get_file_date("0011-12-09-00435"))
+    '0011-12-09_00:07:15'
+    """
+
+    format_string = (
+        "{year:04d}-{month:02d}-{day:02d}_{hours:02d}:{minutes:02d}:{seconds:02d}"
+    )
+    return format_string.format(
+        year=_date.year(),
+        month=_date.month(),
+        day=_date.day(),
+        hours=_date.hour(),
+        minutes=_date.minute(),
+        seconds=_date.second(),
+    )
+
+
+###############################################################################
+def _get_ninst_info(case, compclass):
+    ###############################################################################
+    """
+    Returns the number of instances used by a component and suffix strings for filenames
+    Not doc-testable due to case dependence
+    """
+
+    ninst = case.get_value("NINST_" + compclass.upper())
+    ninst_strings = []
+    if ninst is None:
+        ninst = 1
+    for i in range(1, ninst + 1):
+        if ninst > 1:
+            ninst_strings.append("_" + "{:04d}".format(i))
+
+    logger.debug(
+        "ninst and ninst_strings are: {} and {} for {}".format(
+            ninst, ninst_strings, compclass
+        )
+    )
+    return ninst, ninst_strings
+
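The suffix convention computed by `_get_ninst_info` (no suffix for a single instance, zero-padded `_NNNN` suffixes otherwise) can be sketched as a standalone helper; `instance_suffixes` is a hypothetical name introduced here for illustration.

```python
# Sketch of the instance-suffix logic in _get_ninst_info, decoupled from a Case
# object: ninst=None is treated as a single instance with no suffixes.
def instance_suffixes(ninst):
    if ninst is None:
        ninst = 1
    if ninst > 1:
        return ["_{:04d}".format(i) for i in range(1, ninst + 1)]
    return []
```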
+
+###############################################################################
+def _get_component_archive_entries(components, archive):
+    ###############################################################################
+    """
+    Each time this generator function is called, it yields a tuple
+    (archive_entry, compname, compclass) for one component in this
+    case's compset components.
+    """
+    for compname in components:
+        logger.debug("compname is {} ".format(compname))
+        archive_entry = archive.get_entry(compname)
+        if archive_entry is None:
+            logger.debug("No entry found for {}".format(compname))
+            compclass = None
+        else:
+            compclass = archive.get(archive_entry, "compclass")
+        yield (archive_entry, compname, compclass)
+
+
+###############################################################################
+def _archive_rpointer_files(
+    casename,
+    ninst_strings,
+    rundir,
+    save_interim_restart_files,
+    archive,
+    archive_entry,
+    archive_restdir,
+    datename,
+    datename_is_last,
+):
+    ###############################################################################
+
+    if datename_is_last:
+        # Copy of all rpointer files for latest restart date
+        rpointers = glob.glob(os.path.join(rundir, "rpointer.*"))
+        for rpointer in rpointers:
+            safe_copy(
+                rpointer, os.path.join(archive_restdir, os.path.basename(rpointer))
+            )
+    else:
+        # Generate rpointer file(s) for interim restarts for the one datename and each
+        # possible value of ninst_strings
+        if save_interim_restart_files:
+
+            # parse env_archive.xml to determine the rpointer files
+            # and contents for the given archive_entry tag
+            rpointer_items = archive.get_rpointer_contents(archive_entry)
+
+            # loop through the possible rpointer files and contents
+            for rpointer_file, rpointer_content in rpointer_items:
+                temp_rpointer_file = rpointer_file
+                temp_rpointer_content = rpointer_content
+
+                # put in a temporary setting for ninst_strings if they are empty
+                # in order to have just one loop over ninst_strings below
+                if rpointer_content != "unset":
+                    if not ninst_strings:
+                        ninst_strings = ["empty"]
+
+                    for ninst_string in ninst_strings:
+                        rpointer_file = temp_rpointer_file
+                        rpointer_content = temp_rpointer_content
+                        if ninst_string == "empty":
+                            ninst_string = ""
+                        for key, value in [
+                            ("$CASE", casename),
+                            ("$DATENAME", _datetime_str(datename)),
+                            ("$MPAS_DATENAME", _datetime_str_mpas(datename)),
+                            ("$NINST_STRING", ninst_string),
+                        ]:
+                            rpointer_file = rpointer_file.replace(key, value)
+                            rpointer_content = rpointer_content.replace(key, value)
+
+                        # write out the respective files with the correct contents
+                        rpointer_file = os.path.join(archive_restdir, rpointer_file)
+                        logger.info("writing rpointer_file {}".format(rpointer_file))
+                        with open(rpointer_file, "w") as f:
+                            for output in rpointer_content.split(","):
+                                f.write("{} \n".format(output))
+                else:
+                    logger.info(
+                        "rpointer_content unset, not creating rpointer file {}".format(
+                            rpointer_file
+                        )
+                    )
+
+
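The `$`-token substitution used when writing interim rpointer files reduces to a sequence of string replacements. A minimal sketch, using the hypothetical name `expand_rpointer` and omitting the `$MPAS_DATENAME` token handled in the real code:

```python
# Sketch of the placeholder expansion in _archive_rpointer_files: each template
# token is replaced in both the rpointer filename and its contents.
def expand_rpointer(template, casename, datename_str, ninst_string=""):
    for key, value in [
        ("$CASE", casename),
        ("$DATENAME", datename_str),
        ("$NINST_STRING", ninst_string),
    ]:
        template = template.replace(key, value)
    return template
```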
+###############################################################################
+def _archive_log_files(dout_s_root, rundir, archive_incomplete, archive_file_fn):
+    ###############################################################################
+    """
+    Find all completed log files, or all log files if archive_incomplete is True, and archive them.
+    Each log file is required to have ".log." in its name, and completed ones will end with ".gz"
+    Not doc-testable due to file system dependence
+    """
+    archive_logdir = os.path.join(dout_s_root, "logs")
+    if not os.path.exists(archive_logdir):
+        os.makedirs(archive_logdir)
+        logger.debug("created directory {} ".format(archive_logdir))
+
+    if not archive_incomplete:
+        log_search = "*.log.*.gz"
+    else:
+        log_search = "*.log.*"
+
+    logfiles = glob.glob(os.path.join(rundir, log_search))
+    for logfile in logfiles:
+        srcfile = join(rundir, os.path.basename(logfile))
+        destfile = join(archive_logdir, os.path.basename(logfile))
+        logger.info(
+            "{} {} to {}".format(
+                _get_archive_fn_desc(archive_file_fn), srcfile, destfile
+            )
+        )
+        archive_file_fn(srcfile, destfile)
+
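The log selection above comes down to a one-line glob choice: completed logs end in `.gz`, so requesting incomplete logs simply drops that requirement. Sketched below with `fnmatch` so it runs without a filesystem; `select_logs` is a hypothetical helper, not CIME API.

```python
# Sketch of the pattern choice in _archive_log_files applied to a filename list.
import fnmatch


def select_logs(filenames, archive_incomplete):
    pattern = "*.log.*" if archive_incomplete else "*.log.*.gz"
    return [f for f in filenames if fnmatch.fnmatch(f, pattern)]
```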
+
+###############################################################################
+def _archive_history_files(
+    archive,
+    compclass,
+    compname,
+    histfiles_savein_rundir,
+    last_date,
+    archive_file_fn,
+    dout_s_root,
+    casename,
+    rundir,
+):
+    ###############################################################################
+    """
+    perform short term archiving on history files in rundir
+
+    Not doc-testable due to case and file system dependence
+    """
+
+    # determine history archive directory (create if it does not exist)
+
+    archive_histdir = os.path.join(dout_s_root, compclass, "hist")
+    if not os.path.exists(archive_histdir):
+        os.makedirs(archive_histdir)
+        logger.debug("created directory {}".format(archive_histdir))
+    # the compname is drv but the files are named cpl
+    if compname == "drv":
+        compname = "cpl"
+
+    if compname == "nemo":
+        archive_rblddir = os.path.join(dout_s_root, compclass, "rebuild")
+        if not os.path.exists(archive_rblddir):
+            os.makedirs(archive_rblddir)
+            logger.debug("created directory {}".format(archive_rblddir))
+
+        sfxrbld = r"mesh_mask_" + r"[0-9]*"
+        pfile = re.compile(sfxrbld)
+        rbldfiles = [f for f in os.listdir(rundir) if pfile.search(f)]
+        logger.debug("rbldfiles = {} ".format(rbldfiles))
+
+        if rbldfiles:
+            for rbldfile in rbldfiles:
+                srcfile = join(rundir, rbldfile)
+                destfile = join(archive_rblddir, rbldfile)
+                logger.info(
+                    "{} {} to {} ".format(
+                        _get_archive_fn_desc(archive_file_fn), srcfile, destfile
+                    )
+                )
+                archive_file_fn(srcfile, destfile)
+
+        sfxhst = casename + r"_[0-9][mdy]_" + r"[0-9]*"
+        pfile = re.compile(sfxhst)
+        hstfiles = [f for f in os.listdir(rundir) if pfile.search(f)]
+        logger.debug("hstfiles = {} ".format(hstfiles))
+
+        if hstfiles:
+            for hstfile in hstfiles:
+                srcfile = join(rundir, hstfile)
+                destfile = join(archive_histdir, hstfile)
+                logger.info(
+                    "{} {} to {} ".format(
+                        _get_archive_fn_desc(archive_file_fn), srcfile, destfile
+                    )
+                )
+                archive_file_fn(srcfile, destfile)
+
+    # determine ninst and ninst_string
+
+    # archive history files - the only history files that are kept in the
+    # run directory are those that are needed for restarts
+    histfiles = archive.get_all_hist_files(casename, compname, rundir)
+
+    if histfiles:
+        for histfile in histfiles:
+            file_date = get_file_date(os.path.basename(histfile))
+            if last_date is None or file_date is None or file_date <= last_date:
+                srcfile = join(rundir, histfile)
+                expect(
+                    os.path.isfile(srcfile),
+                    "history file {} does not exist ".format(srcfile),
+                )
+                destfile = join(archive_histdir, histfile)
+                if histfile in histfiles_savein_rundir:
+                    logger.info("copying {} to {} ".format(srcfile, destfile))
+                    safe_copy(srcfile, destfile)
+                else:
+                    logger.info(
+                        "{} {} to {} ".format(
+                            _get_archive_fn_desc(archive_file_fn), srcfile, destfile
+                        )
+                    )
+                    archive_file_fn(srcfile, destfile)
+
+
+###############################################################################
+
+[docs] +def get_histfiles_for_restarts( + rundir, archive, archive_entry, restfile, testonly=False +): + ############################################################################### + """ + query restart files to determine history files that are needed for restarts + + Not doc-testable due to filesystem dependence + """ + + # Make certain histfiles is a set so we don't repeat + histfiles = set() + rest_hist_varname = archive.get_entry_value("rest_history_varname", archive_entry) + if rest_hist_varname != "unset": + ncdump = shutil.which("ncdump") + expect(ncdump, "ncdump not found in path") + cmd = "{} -v {} {} ".format( + ncdump, rest_hist_varname, os.path.join(rundir, restfile) + ) + if testonly: + out = "{} =".format(rest_hist_varname) + else: + rc, out, error = run_cmd(cmd) + if rc != 0: + logger.info( + " WARNING: {} failed rc={:d}\n out={}\n err={}".format( + cmd, rc, out, error + ) + ) + logger.debug(" get_histfiles_for_restarts: \n out={}".format(out)) + + searchname = "{} =".format(rest_hist_varname) + if searchname in out: + offset = out.index(searchname) + items = out[offset:].split(",") + for item in items: + # the following match has an option of having any number of '.'s and '/'s + # at the beginning of the history filename + matchobj = re.search(r"\"\S+\s*\"", item) + if matchobj: + histfile = matchobj.group(0).strip('" ') + histfile = os.path.basename(histfile) + # append histfile to the list ONLY if it exists in rundir before the archiving + if histfile in histfiles: + logger.warning( + "WARNING, tried to add a duplicate file to histfiles" + ) + if os.path.isfile(os.path.join(rundir, histfile)): + histfiles.add(histfile) + else: + logger.debug( + " get_histfiles_for_restarts: histfile {} does not exist ".format( + histfile + ) + ) + return histfiles
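The filename extraction in `get_histfiles_for_restarts` scans the `ncdump -v <varname>` text for quoted paths after `<varname> =`. The sketch below replays that regex on a canned string; `parse_hist_filenames` is a hypothetical name, and it returns a list rather than the set used above.

```python
# Sketch of the quoted-filename extraction from ncdump output, mirroring the
# regex in get_histfiles_for_restarts.
import os
import re


def parse_hist_filenames(out, varname):
    found = []
    searchname = "{} =".format(varname)
    if searchname in out:
        offset = out.index(searchname)
        for item in out[offset:].split(","):
            # allow leading './' or path components before the basename
            matchobj = re.search(r"\"\S+\s*\"", item)
            if matchobj:
                found.append(os.path.basename(matchobj.group(0).strip('" ')))
    return found
```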
+ + + +############################################################################### +def _archive_restarts_date( + case, + casename, + rundir, + archive, + datename, + datename_is_last, + last_date, + archive_restdir, + archive_file_fn, + components=None, + link_to_last_restart_files=False, + testonly=False, +): + ############################################################################### + """ + Archive restart files for a single date + + Returns a dictionary of histfiles that need saving in the run + directory, indexed by compname + """ + logger.info("-------------------------------------------") + logger.info("Archiving restarts for date {}".format(datename)) + logger.debug("last date {}".format(last_date)) + logger.debug("date is last? {}".format(datename_is_last)) + logger.debug("components are {}".format(components)) + logger.info("-------------------------------------------") + logger.debug("last date: {}".format(last_date)) + + if components is None: + components = case.get_compset_components() + components.append("drv") + components.append("dart") + + histfiles_savein_rundir_by_compname = {} + + for (archive_entry, compname, compclass) in _get_component_archive_entries( + components, archive + ): + if compclass: + logger.info("Archiving restarts for {} ({})".format(compname, compclass)) + + # archive restarts + histfiles_savein_rundir = _archive_restarts_date_comp( + case, + casename, + rundir, + archive, + archive_entry, + compclass, + compname, + datename, + datename_is_last, + last_date, + archive_restdir, + archive_file_fn, + link_to_last_restart_files=link_to_last_restart_files, + testonly=testonly, + ) + histfiles_savein_rundir_by_compname[compname] = histfiles_savein_rundir + + return histfiles_savein_rundir_by_compname + + +############################################################################### +def _archive_restarts_date_comp( + case, + casename, + rundir, + archive, + archive_entry, + compclass, + compname, + datename, + 
datename_is_last, + last_date, + archive_restdir, + archive_file_fn, + link_to_last_restart_files=False, + testonly=False, +): + ############################################################################### + """ + Archive restart files for a single date and single component + + If link_to_last_restart_files is True, then make a symlink to the + last set of restart files (i.e., the set with datename_is_last + True); if False (the default), copy them. (This has no effect on the + history files that are associated with these restart files.) + """ + datename_str = _datetime_str(datename) + + if datename_is_last or case.get_value("DOUT_S_SAVE_INTERIM_RESTART_FILES"): + if not os.path.exists(archive_restdir): + os.makedirs(archive_restdir) + + # archive the rpointer file(s) for this datename and all possible ninst_strings + _archive_rpointer_files( + casename, + _get_ninst_info(case, compclass)[1], + rundir, + case.get_value("DOUT_S_SAVE_INTERIM_RESTART_FILES"), + archive, + archive_entry, + archive_restdir, + datename, + datename_is_last, + ) + + # move all but latest restart files into the archive restart directory + # copy latest restart files to archive restart directory + histfiles_savein_rundir = [] + + # determine function to use for last set of restart files + if link_to_last_restart_files: + last_restart_file_fn = symlink_force + last_restart_file_fn_msg = "linking" + else: + last_restart_file_fn = safe_copy + last_restart_file_fn_msg = "copying" + + # the compname is drv but the files are named cpl + if compname == "drv": + compname = "cpl" + if compname == "cice5": + compname = "cice" + if compname == "ww3dev": + compname = "ww3" + + # get file_extension suffixes + for suffix in archive.get_rest_file_extensions(archive_entry): + # logger.debug("suffix is {} ninst {}".format(suffix, ninst)) + restfiles = "" + if compname.find("mpas") == 0 or compname == "mali": + pattern = ( + casename + + r"\." + + compname + + r"\." + + suffix + + r"\." 
+ + "_".join(datename_str.rsplit("-", 1)) + ) + pfile = re.compile(pattern) + restfiles = [f for f in os.listdir(rundir) if pfile.search(f)] + elif compname == "nemo": + pattern = r"_*_" + suffix + r"[0-9]*" + pfile = re.compile(pattern) + restfiles = [f for f in os.listdir(rundir) if pfile.search(f)] + else: + pattern = r"^{}\.{}[\d_]*\.".format(casename, compname) + pfile = re.compile(pattern) + files = [f for f in os.listdir(rundir) if pfile.search(f)] + pattern = ( + r"_?" + + r"\d*" + + r"\." + + suffix + + r"\." + + r"[^\.]*" + + r"\.?" + + datename_str + ) + pfile = re.compile(pattern) + restfiles = [f for f in files if pfile.search(f)] + logger.debug("pattern is {} restfiles {}".format(pattern, restfiles)) + for rfile in restfiles: + rfile = os.path.basename(rfile) + + file_date = get_file_date(rfile) + if last_date is not None and file_date > last_date: + # Skip this file + continue + + if not os.path.exists(archive_restdir): + os.makedirs(archive_restdir) + + # obtain array of history files for restarts + # need to do this before archiving restart files + histfiles_for_restart = get_histfiles_for_restarts( + rundir, archive, archive_entry, rfile, testonly=testonly + ) + + if datename_is_last and histfiles_for_restart: + for histfile in histfiles_for_restart: + if histfile not in histfiles_savein_rundir: + histfiles_savein_rundir.append(histfile) + + # archive restart files and all history files that are needed for restart + # Note that the latest file should be copied and not moved + if datename_is_last: + srcfile = os.path.join(rundir, rfile) + destfile = os.path.join(archive_restdir, rfile) + last_restart_file_fn(srcfile, destfile) + logger.info( + "{} file {} to {}".format( + last_restart_file_fn_msg, srcfile, destfile + ) + ) + for histfile in histfiles_for_restart: + srcfile = os.path.join(rundir, histfile) + destfile = os.path.join(archive_restdir, histfile) + expect( + os.path.isfile(srcfile), + "history restart file {} for last date does not exist 
".format( + srcfile + ), + ) + logger.info("Copying {} to {}".format(srcfile, destfile)) + safe_copy(srcfile, destfile) + logger.debug( + "datename_is_last + histfiles_for_restart copying \n {} to \n {}".format( + srcfile, destfile + ) + ) + else: + # Only archive intermediate restarts if requested - otherwise remove them + if case.get_value("DOUT_S_SAVE_INTERIM_RESTART_FILES"): + srcfile = os.path.join(rundir, rfile) + destfile = os.path.join(archive_restdir, rfile) + expect( + os.path.isfile(srcfile), + "restart file {} does not exist ".format(srcfile), + ) + logger.info( + "{} file {} to {}".format( + _get_archive_fn_desc(archive_file_fn), srcfile, destfile + ) + ) + archive_file_fn(srcfile, destfile) + + # need to copy the history files needed for interim restarts - since + # have not archived all of the history files yet + for histfile in histfiles_for_restart: + srcfile = os.path.join(rundir, histfile) + destfile = os.path.join(archive_restdir, histfile) + expect( + os.path.isfile(srcfile), + "hist file {} does not exist ".format(srcfile), + ) + logger.info("copying {} to {}".format(srcfile, destfile)) + safe_copy(srcfile, destfile) + else: + if compname == "nemo": + flist = glob.glob(rundir + "/" + casename + "_*_restart*.nc") + logger.debug("nemo restart file {}".format(flist)) + if len(flist) > 2: + flist0 = glob.glob( + rundir + "/" + casename + "_*_restart_0000.nc" + ) + if len(flist0) > 1: + rstfl01 = flist0[0] + rstfl01spl = rstfl01.split("/") + logger.debug("splitted name {}".format(rstfl01spl)) + rstfl01nm = rstfl01spl[-1] + rstfl01nmspl = rstfl01nm.split("_") + logger.debug( + "splitted name step2 {}".format(rstfl01nmspl) + ) + rsttm01 = rstfl01nmspl[-3] + + rstfl02 = flist0[1] + rstfl02spl = rstfl02.split("/") + logger.debug("splitted name {}".format(rstfl02spl)) + rstfl02nm = rstfl02spl[-1] + rstfl02nmspl = rstfl02nm.split("_") + logger.debug( + "splitted name step2 {}".format(rstfl02nmspl) + ) + rsttm02 = rstfl02nmspl[-3] + + if int(rsttm01) > 
int(rsttm02): + restlist = glob.glob( + rundir + + "/" + + casename + + "_" + + rsttm02 + + "_restart_*.nc" + ) + else: + restlist = glob.glob( + rundir + + "/" + + casename + + "_" + + rsttm01 + + "_restart_*.nc" + ) + logger.debug("nemo restart list {}".format(restlist)) + if restlist: + for _restfile in restlist: + srcfile = os.path.join(rundir, _restfile) + logger.info( + "removing interim restart file {}".format( + srcfile + ) + ) + if os.path.isfile(srcfile): + try: + os.remove(srcfile) + except OSError: + logger.warning( + "unable to remove interim restart file {}".format( + srcfile + ) + ) + else: + logger.warning( + "interim restart file {} does not exist".format( + srcfile + ) + ) + elif len(flist) == 2: + flist0 = glob.glob( + rundir + "/" + casename + "_*_restart.nc" + ) + if len(flist0) > 1: + rstfl01 = flist0[0] + rstfl01spl = rstfl01.split("/") + logger.debug("splitted name {}".format(rstfl01spl)) + rstfl01nm = rstfl01spl[-1] + rstfl01nmspl = rstfl01nm.split("_") + logger.debug( + "splitted name step2 {}".format(rstfl01nmspl) + ) + rsttm01 = rstfl01nmspl[-2] + + rstfl02 = flist0[1] + rstfl02spl = rstfl02.split("/") + logger.debug("splitted name {}".format(rstfl02spl)) + rstfl02nm = rstfl02spl[-1] + rstfl02nmspl = rstfl02nm.split("_") + logger.debug( + "splitted name step2 {}".format(rstfl02nmspl) + ) + rsttm02 = rstfl02nmspl[-2] + + if int(rsttm01) > int(rsttm02): + restlist = glob.glob( + rundir + + "/" + + casename + + "_" + + rsttm02 + + "_restart_*.nc" + ) + else: + restlist = glob.glob( + rundir + + "/" + + casename + + "_" + + rsttm01 + + "_restart_*.nc" + ) + logger.debug("nemo restart list {}".format(restlist)) + if restlist: + for _rfile in restlist: + srcfile = os.path.join(rundir, _rfile) + logger.info( + "removing interim restart file {}".format( + srcfile + ) + ) + if os.path.isfile(srcfile): + try: + os.remove(srcfile) + except OSError: + logger.warning( + "unable to remove interim restart file {}".format( + srcfile + ) + ) + else: + 
logger.warning( + "interim restart file {} does not exist".format( + srcfile + ) + ) + else: + logger.warning( + "unable to find NEMO restart file in {}".format(rundir) + ) + + else: + srcfile = os.path.join(rundir, rfile) + logger.info("removing interim restart file {}".format(srcfile)) + if os.path.isfile(srcfile): + try: + os.remove(srcfile) + except OSError: + logger.warning( + "unable to remove interim restart file {}".format( + srcfile + ) + ) + else: + logger.warning( + "interim restart file {} does not exist".format(srcfile) + ) + + return histfiles_savein_rundir + + +############################################################################### +def _archive_process( + case, + archive, + last_date, + archive_incomplete_logs, + copy_only, + components=None, + dout_s_root=None, + casename=None, + rundir=None, + testonly=False, +): + ############################################################################### + """ + Parse config_archive.xml and perform short term archiving + """ + + logger.debug("In archive_process...") + + if dout_s_root is None: + dout_s_root = case.get_value("DOUT_S_ROOT") + if rundir is None: + rundir = case.get_value("RUNDIR") + if casename is None: + casename = case.get_value("CASE") + if components is None: + components = case.get_compset_components() + components.append("drv") + components.append("dart") + + archive_file_fn = _get_archive_file_fn(copy_only) + + # archive log files + _archive_log_files(dout_s_root, rundir, archive_incomplete_logs, archive_file_fn) + + # archive restarts and all necessary associated files (e.g. 
rpointer files) + datenames = _get_datenames(casename, rundir) + logger.debug("datenames {} ".format(datenames)) + histfiles_savein_rundir_by_compname = {} + for datename in datenames: + datename_is_last = False + if datename == datenames[-1]: + datename_is_last = True + + logger.debug("datename {} last_date {}".format(datename, last_date)) + if last_date is None or datename <= last_date: + archive_restdir = join(dout_s_root, "rest", _datetime_str(datename)) + + histfiles_savein_rundir_by_compname_this_date = _archive_restarts_date( + case, + casename, + rundir, + archive, + datename, + datename_is_last, + last_date, + archive_restdir, + archive_file_fn, + components, + testonly=testonly, + ) + if datename_is_last: + histfiles_savein_rundir_by_compname = ( + histfiles_savein_rundir_by_compname_this_date + ) + + # archive history files + + for (_, compname, compclass) in _get_component_archive_entries(components, archive): + if compclass: + logger.info( + "Archiving history files for {} ({})".format(compname, compclass) + ) + histfiles_savein_rundir = histfiles_savein_rundir_by_compname.get( + compname, [] + ) + logger.debug( + "_archive_process: histfiles_savein_rundir {} ".format( + histfiles_savein_rundir + ) + ) + _archive_history_files( + archive, + compclass, + compname, + histfiles_savein_rundir, + last_date, + archive_file_fn, + dout_s_root, + casename, + rundir, + ) + + +############################################################################### +
+[docs] +def restore_from_archive( + self, rest_dir=None, dout_s_root=None, rundir=None, test=False +): + ############################################################################### + """ + Take archived restart files and load them into current case. Use rest_dir if provided otherwise use most recent + restore_from_archive is a member of Class Case + """ + if dout_s_root is None: + dout_s_root = self.get_value("DOUT_S_ROOT") + if rundir is None: + rundir = self.get_value("RUNDIR") + if rest_dir: + if not os.path.isabs(rest_dir): + rest_dir = os.path.join(dout_s_root, "rest", rest_dir) + else: + rest_root = os.path.join(dout_s_root, "rest") + + if os.path.exists(rest_root): + rest_dir = os.path.join( + rest_root, ls_sorted_by_mtime(os.path.join(dout_s_root, "rest"))[-1] + ) + + if rest_dir is None and test: + logger.warning( + "No rest_dir found for test - is this expected? DOUT_S_ROOT={}".format( + dout_s_root + ) + ) + return + expect(os.path.exists(rest_dir), "ERROR: No directory {} found".format(rest_dir)) + logger.info("Restoring restart from {}".format(rest_dir)) + + for item in glob.glob("{}/*".format(rest_dir)): + base = os.path.basename(item) + dst = os.path.join(rundir, base) + if os.path.exists(dst): + os.remove(dst) + logger.info("Restoring {} from {} to {}".format(item, rest_dir, rundir)) + + safe_copy(item, rundir)
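When no explicit `rest_dir` is given, `restore_from_archive` picks the restart set that was modified most recently (via `ls_sorted_by_mtime`). A minimal sketch of that choice, assuming `(name, mtime)` pairs instead of a live directory listing; `latest_rest_dir` is a hypothetical helper:

```python
# Sketch of the "most recent restart set" selection in restore_from_archive.
def latest_rest_dir(entries):
    """entries: iterable of (dirname, mtime) pairs; returns the newest dirname."""
    return sorted(entries, key=lambda e: e[1])[-1][0]
```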
+ + + +############################################################################### +
+[docs] +def archive_last_restarts( + self, archive_restdir, rundir, last_date=None, link_to_restart_files=False +): + ############################################################################### + """ + Convenience function for archiving just the last set of restart + files to a given directory. This also saves files attached to the + restart set, such as rpointer files and necessary history + files. However, it does not save other files that are typically + archived (e.g., history files, log files). + + Files are copied to the directory given by archive_restdir. + + If link_to_restart_files is True, then symlinks rather than copies + are done for the restart files. (This has no effect on the history + files that are associated with these restart files.) + """ + archive = self.get_env("archive") + casename = self.get_value("CASE") + datenames = _get_datenames(casename, rundir) + expect(len(datenames) >= 1, "No restart dates found") + last_datename = datenames[-1] + + # Not currently used for anything if we're only archiving the last + # set of restart files, but needed to satisfy the following interface + archive_file_fn = _get_archive_file_fn(copy_only=False) + + _ = _archive_restarts_date( + case=self, + casename=casename, + rundir=rundir, + archive=archive, + datename=last_datename, + datename_is_last=True, + last_date=last_date, + archive_restdir=archive_restdir, + archive_file_fn=archive_file_fn, + link_to_last_restart_files=link_to_restart_files, + )
+ + + +############################################################################### +
+
[docs]
+def case_st_archive(
+    self,
+    last_date_str=None,
+    archive_incomplete_logs=True,
+    copy_only=False,
+    resubmit=True,
+):
+    ###############################################################################
+    """
+    Create archive object and perform short term archiving
+    """
+    logger.debug("resubmit {}".format(resubmit))
+    caseroot = self.get_value("CASEROOT")
+    self.load_env(job="case.st_archive")
+    if last_date_str is not None:
+        try:
+            last_date = get_file_date(last_date_str)
+        except ValueError:
+            expect(False, "Could not parse the last date to archive")
+    else:
+        last_date = None
+
+    dout_s_root = self.get_value("DOUT_S_ROOT")
+    if dout_s_root is None or dout_s_root == "UNSET":
+        expect(False, "XML variable DOUT_S_ROOT is required for short-term archiver")
+    if not isdir(dout_s_root):
+        os.makedirs(dout_s_root)
+
+    dout_s_save_interim = self.get_value("DOUT_S_SAVE_INTERIM_RESTART_FILES")
+    if dout_s_save_interim == "FALSE" or dout_s_save_interim == "UNSET":
+        rest_n = self.get_value("REST_N")
+        stop_n = self.get_value("STOP_N")
+        if rest_n < stop_n:
+            logger.warning(
+                "Restart files from end of run will be saved, "
+                "interim restart files will be deleted"
+            )
+
+    logger.info("st_archive starting")
+
+    is_batch = self.get_value("BATCH_SYSTEM")
+    msg_func = None
+
+    if is_batch:
+        jobid = batch_jobid()
+        msg_func = lambda *args: jobid if jobid is not None else ""
+
+    archive = self.get_env("archive")
+    functor = lambda: _archive_process(
+        self, archive, last_date, archive_incomplete_logs, copy_only
+    )
+    run_and_log_case_status(
+        functor,
+        "st_archive",
+        custom_starting_msg_functor=msg_func,
+        custom_success_msg_functor=msg_func,
+        caseroot=caseroot,
+        is_batch=is_batch,
+    )
+
+    logger.info("st_archive completed")
+
+    # resubmit case if appropriate
+    if not self.get_value("EXTERNAL_WORKFLOW") and resubmit:
+        resubmit_cnt = self.get_value("RESUBMIT")
+        logger.debug("resubmit_cnt {} resubmit {}".format(resubmit_cnt, resubmit))
+        if resubmit_cnt > 0:
+            logger.info(
+                "resubmitting from st_archive, resubmit={:d}".format(resubmit_cnt)
+            )
+            if self.get_value("MACH") == "mira":
+                expect(
+                    os.path.isfile(".original_host"), "ERROR alcf host file not found"
+                )
+                with open(".original_host", "r") as fd:
+                    sshhost = fd.read()
+                run_cmd(
+                    "ssh cooleylogin1 ssh {} '{case}/case.submit {case} --resubmit' ".format(
+                        sshhost, case=caseroot
+                    ),
+                    verbose=True,
+                )
+            else:
+                self.submit(resubmit=True)
+
+    return True
+ + + +
+[docs] +def test_st_archive(self, testdir="st_archive_test"): + files = Files() + archive = Archive(files=files) + components = [] + # expect(not self.get_value("MULTI_DRIVER"),"Test not configured for multi-driver cases") + + config_archive_files = archive.get_all_config_archive_files(files) + # create the run directory testdir and populate it with rest_file and hist_file from + # config_archive.xml test_file_names + if os.path.exists(testdir): + logger.info("Removing existing test directory {}".format(testdir)) + shutil.rmtree(testdir) + dout_s_root = os.path.join(testdir, "archive") + archive = Archive() + schema = files.get_schema("ARCHIVE_SPEC_FILE") + for config_archive_file in config_archive_files: + archive.read(config_archive_file, schema) + comp_archive_specs = archive.get_children("comp_archive_spec") + for comp_archive_spec in comp_archive_specs: + components.append(archive.get(comp_archive_spec, "compname")) + test_file_names = archive.get_optional_child( + "test_file_names", root=comp_archive_spec + ) + if test_file_names is not None: + if not os.path.exists(testdir): + os.makedirs(os.path.join(testdir, "archive")) + + for file_node in archive.get_children("tfile", root=test_file_names): + fname = os.path.join(testdir, archive.text(file_node)) + disposition = archive.get(file_node, "disposition") + logger.info( + "Create file {} with disposition {}".format(fname, disposition) + ) + with open(fname, "w") as fd: + fd.write(disposition + "\n") + + logger.info("testing components: {} ".format(list(set(components)))) + _archive_process( + self, + archive, + None, + False, + False, + components=list(set(components)), + dout_s_root=dout_s_root, + casename="casename", + rundir=testdir, + testonly=True, + ) + + _check_disposition(testdir) + + # Now test the restore capability + testdir2 = os.path.join(testdir, "run2") + os.makedirs(testdir2) + + restore_from_archive(self, rundir=testdir2, dout_s_root=dout_s_root, test=True) + + restfiles = [ + f + for f in 
os.listdir( + os.path.join(testdir, "archive", "rest", "1976-01-01-00000") + ) + ] + for _file in restfiles: + expect( + os.path.isfile(os.path.join(testdir2, _file)), + "Expected file {} to be restored from rest dir".format(_file), + ) + + return True
+ + + +
+[docs] +def test_env_archive(self, testdir="env_archive_test"): + components = self.get_values("COMP_CLASSES") + comps_in_case = [] + # create the run directory testdir and populate it with rest_file and hist_file from + # config_archive.xml test_file_names + if os.path.exists(testdir): + logger.info("Removing existing test directory {}".format(testdir)) + shutil.rmtree(testdir) + dout_s_root = os.path.join(testdir, "archive") + archive = self.get_env("archive") + comp_archive_specs = archive.scan_children("comp_archive_spec") + + # ignore stub and dead components + for comp in list(components): + compname = self.get_value("COMP_{}".format(comp)) + if ( + compname == "s" + comp.lower() or compname == "x" + comp.lower() + ) and comp != "ESP": + logger.info("Not testing component {}".format(comp)) + components.remove(comp) + elif comp == "ESP" and self.get_value("MODEL") == "e3sm": + components.remove(comp) + else: + if compname == "cpl": + compname = "drv" + comps_in_case.append(compname) + + for comp_archive_spec in comp_archive_specs: + comp_expected = archive.get(comp_archive_spec, "compname") + # Rename ww3 component when case and archive names don't match, + # specific to CESM. 
+ if comp_expected == "ww3" and "ww" in comps_in_case: + comp_expected = "ww" + comp_class = archive.get(comp_archive_spec, "compclass").upper() + if comp_class in components: + components.remove(comp_class) + else: + expect( + False, "Error finding comp_class {} in components".format(comp_class) + ) + if comp_expected == "cpl": + comp_expected = "drv" + if comp_expected != "dart": + expect( + comp_expected in comps_in_case, + "env_archive defines component {} not defined in case".format( + comp_expected + ), + ) + + test_file_names = archive.get_optional_child( + "test_file_names", root=comp_archive_spec + ) + if test_file_names is not None: + if not os.path.exists(testdir): + os.makedirs(os.path.join(testdir, "archive")) + + for file_node in archive.get_children("tfile", root=test_file_names): + fname = os.path.join(testdir, archive.text(file_node)) + disposition = archive.get(file_node, "disposition") + logger.info( + "Create file {} with disposition {}".format(fname, disposition) + ) + with open(fname, "w") as fd: + fd.write(disposition + "\n") + + expect( + not components, "No archive entry found for components: {}".format(components) + ) + if "dart" not in comps_in_case: + comps_in_case.append("dart") + logger.info("testing components: {} ".format(comps_in_case)) + _archive_process( + self, + archive, + None, + False, + False, + components=comps_in_case, + dout_s_root=dout_s_root, + casename="casename", + rundir=testdir, + testonly=True, + ) + + _check_disposition(testdir) + + # Now test the restore capability + testdir2 = os.path.join(testdir, "run2") + os.makedirs(testdir2) + restfiles = [] + restore_from_archive(self, rundir=testdir2, dout_s_root=dout_s_root, test=True) + if os.path.exists(os.path.join(testdir, "archive", "rest")): + restfiles = [ + f + for f in os.listdir( + os.path.join(testdir, "archive", "rest", "1976-01-01-00000") + ) + ] + for _file in restfiles: + expect( + os.path.isfile(os.path.join(testdir2, _file)), + "Expected file {} to be 
restored from rest dir".format(_file), + ) + + return True
+
+
+
+def _check_disposition(testdir):
+    copyfilelist = []
+    for root, _, files in os.walk(testdir):
+        for _file in files:
+            with open(os.path.join(root, _file), "r") as fd:
+                disposition = fd.readline()
+            logger.info(
+                "Checking testfile {} with disposition {}".format(_file, disposition)
+            )
+            if root == testdir:
+                if "move" in disposition:
+                    if find_files(os.path.join(testdir, "archive"), _file):
+                        expect(
+                            False,
+                            "Copied file {} to archive with disposition move".format(
+                                _file
+                            ),
+                        )
+                    else:
+                        expect(False, "Failed to move file {} to archive".format(_file))
+                if "copy" in disposition:
+                    copyfilelist.append(_file)
+            elif "ignore" in disposition:
+                expect(
+                    False,
+                    "Moved file {} with disposition ignore to directory {}".format(
+                        _file, root
+                    ),
+                )
+            elif "copy" in disposition:
+                expect(
+                    _file in copyfilelist,
+                    "File {} with disposition copy was moved to directory {}".format(
+                        _file, root
+                    ),
+                )
+    for _file in copyfilelist:
+        expect(
+            find_files(os.path.join(testdir, "archive"), _file) != [],
+            "File {} was not copied to archive.".format(_file),
+        )
+
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_submit.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_submit.html new file mode 100644 index 00000000000..b5c74b460eb --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_submit.html @@ -0,0 +1,499 @@ + + + + + + CIME.case.case_submit — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.case.case_submit

+#!/usr/bin/env python3
+
+"""
+case.submit - Submit a cesm workflow to the queueing system or run it
+if there is no queueing system.  A cesm workflow may include multiple
+jobs.
+submit, check_case and check_DA_settings are members of class Case in file case.py
+"""
+import configparser
+from CIME.XML.standard_module_setup import *
+from CIME.utils import expect, run_and_log_case_status, CIMEError, get_time_in_seconds
+from CIME.locked_files import unlock_file, lock_file
+from CIME.test_status import *
+
+import socket
+
+logger = logging.getLogger(__name__)
+
+
+def _build_prereq_str(case, prev_job_ids):
+    delimiter = case.get_value("depend_separator")
+    prereq_str = ""
+    for job_id in prev_job_ids.values():
+        prereq_str += str(job_id) + delimiter
+    return prereq_str[:-1]
+
+
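A minimal standalone sketch of the prereq-string construction above. The delimiter is machine-dependent (read from `depend_separator`), so the `:` default here is only an assumed example value:

```python
def build_prereq_str(prev_job_ids, delimiter=":"):
    """Join previously submitted batch job ids with the scheduler's
    dependency separator, as _build_prereq_str does for a case."""
    # dicts preserve insertion order (Python 3.7+), so job order is kept
    return delimiter.join(str(job_id) for job_id in prev_job_ids.values())
```

For example, `{"case.run": 101, "case.st_archive": 102}` yields `"101:102"`.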
+def _submit(
+    case,
+    job=None,
+    no_batch=False,
+    prereq=None,
+    allow_fail=False,
+    resubmit=False,
+    resubmit_immediate=False,
+    skip_pnl=False,
+    mail_user=None,
+    mail_type=None,
+    batch_args=None,
+    workflow=True,
+    chksum=False,
+):
+    if job is None:
+        job = case.get_first_job()
+    caseroot = case.get_value("CASEROOT")
+    # Check mediator
+    hasMediator = True
+    comp_classes = case.get_values("COMP_CLASSES")
+    if "CPL" not in comp_classes:
+        hasMediator = False
+
+    # Check if CONTINUE_RUN value makes sense
+    # if submitted with a prereq don't do this check
+    if case.get_value("CONTINUE_RUN") and hasMediator and not prereq:
+        rundir = case.get_value("RUNDIR")
+        expect(
+            os.path.isdir(rundir),
+            "CONTINUE_RUN is true but RUNDIR {} does not exist".format(rundir),
+        )
+        # only checks for the first instance in a multidriver case
+        if case.get_value("COMP_INTERFACE") == "nuopc":
+            rpointer = "rpointer.cpl"
+        else:
+            rpointer = "rpointer.drv"
+        # Variable MULTI_DRIVER is always true for nuopc so we need to also check NINST > 1
+        if case.get_value("MULTI_DRIVER") and case.get_value("NINST") > 1:
+            rpointer = rpointer + "_0001"
+        expect(
+            os.path.exists(os.path.join(rundir, rpointer)),
+            "CONTINUE_RUN is true but this case does not appear to have restart files staged in {} {}".format(
+                rundir, rpointer
+            ),
+        )
+        # Finally we open the rpointer file and check that it's correct
+        casename = case.get_value("CASE")
+        with open(os.path.join(rundir, rpointer), "r") as fd:
+            ncfile = fd.readline().strip()
+            expect(
+                ncfile.startswith(casename)
+                and os.path.exists(os.path.join(rundir, ncfile)),
+                "File {ncfile} not present or does not match case {casename}".format(
+                    ncfile=os.path.join(rundir, ncfile), casename=casename
+                ),
+            )
+
+    # if case.submit is called with the no_batch flag then we assume that this
+    # flag will stay in effect for the duration of the RESUBMITs
+    env_batch = case.get_env("batch")
+    external_workflow = case.get_value("EXTERNAL_WORKFLOW")
+    if env_batch.get_batch_system_type() == "none" or resubmit and external_workflow:
+        no_batch = True
+
+    if no_batch:
+        batch_system = "none"
+    else:
+        batch_system = env_batch.get_batch_system_type()
+
+    if batch_system != case.get_value("BATCH_SYSTEM"):
+        unlock_file(os.path.basename(env_batch.filename), caseroot=caseroot)
+        case.set_value("BATCH_SYSTEM", batch_system)
+
+    env_batch_has_changed = False
+    if not external_workflow:
+        try:
+            case.check_lockedfile(
+                os.path.basename(env_batch.filename), caseroot=caseroot
+            )
+        except:
+            env_batch_has_changed = True
+
+    if batch_system != "none" and env_batch_has_changed and not external_workflow:
+        # May need to regen batch files if user made batch setting changes (e.g. walltime, queue, etc)
+        logger.warning(
+            """
+env_batch.xml appears to have changed, regenerating batch scripts
+manual edits to these files will be lost!
+"""
+        )
+        env_batch.make_all_batch_files(case)
+    case.flush()
+    lock_file(os.path.basename(env_batch.filename), caseroot=caseroot)
+
+    if resubmit:
+        # This is a resubmission, do not reinitialize test values
+        if job == "case.test":
+            case.set_value("IS_FIRST_RUN", False)
+
+        resub = case.get_value("RESUBMIT")
+        logger.info("Submitting job '{}', resubmit={:d}".format(job, resub))
+        case.set_value("RESUBMIT", resub - 1)
+        if case.get_value("RESUBMIT_SETS_CONTINUE_RUN"):
+            case.set_value("CONTINUE_RUN", True)
+
+    else:
+        if job == "case.test":
+            case.set_value("IS_FIRST_RUN", True)
+
+        if no_batch:
+            batch_system = "none"
+        else:
+            batch_system = env_batch.get_batch_system_type()
+
+        case.set_value("BATCH_SYSTEM", batch_system)
+
+        env_batch_has_changed = False
+        try:
+            case.check_lockedfile(os.path.basename(env_batch.filename))
+        except CIMEError:
+            env_batch_has_changed = True
+
+        if env_batch.get_batch_system_type() != "none" and env_batch_has_changed:
+            # May need to regen batch files if user made batch setting changes (e.g. walltime, queue, etc)
+            logger.warning(
+                """
+env_batch.xml appears to have changed, regenerating batch scripts
+manual edits to these files will be lost!
+"""
+            )
+            env_batch.make_all_batch_files(case)
+
+        unlock_file(os.path.basename(env_batch.filename), caseroot=caseroot)
+        lock_file(os.path.basename(env_batch.filename), caseroot=caseroot)
+
+        case.check_case(skip_pnl=skip_pnl, chksum=chksum)
+        if job == case.get_primary_job():
+            case.check_DA_settings()
+            if case.get_value("MACH") == "mira":
+                with open(".original_host", "w") as fd:
+                    fd.write(socket.gethostname())
+
+    # Load Modules
+    case.load_env()
+
+    case.flush()
+
+    logger.warning("submit_jobs {}".format(job))
+    job_ids = case.submit_jobs(
+        no_batch=no_batch,
+        job=job,
+        prereq=prereq,
+        skip_pnl=skip_pnl,
+        resubmit_immediate=resubmit_immediate,
+        allow_fail=allow_fail,
+        mail_user=mail_user,
+        mail_type=mail_type,
+        batch_args=batch_args,
+        workflow=workflow,
+    )
+
+    xml_jobids = []
+    for jobname, jobid in job_ids.items():
+        logger.info("Submitted job {} with id {}".format(jobname, jobid))
+        if jobid:
+            xml_jobids.append("{}:{}".format(jobname, jobid))
+
+    xml_jobid_text = ", ".join(xml_jobids)
+    if xml_jobid_text:
+        case.set_value("JOB_IDS", xml_jobid_text)
+
+    return xml_jobid_text
+
+
+
+[docs] +def submit( + self, + job=None, + no_batch=False, + prereq=None, + allow_fail=False, + resubmit=False, + resubmit_immediate=False, + skip_pnl=False, + mail_user=None, + mail_type=None, + batch_args=None, + workflow=True, + chksum=False, +): + if resubmit_immediate and self.get_value("MACH") in ["mira", "cetus"]: + logger.warning( + "resubmit_immediate does not work on Mira/Cetus, submitting normally" + ) + resubmit_immediate = False + + caseroot = self.get_value("CASEROOT") + if self.get_value("TEST"): + casebaseid = self.get_value("CASEBASEID") + # This should take care of the race condition where the submitted job + # begins immediately and tries to set RUN phase. We proactively assume + # a passed SUBMIT phase. If this state is already PASS, don't set it again + # because then we'll lose RUN phase info if it's there. This info is important + # for system_tests_common to know if it needs to reinitialize the test or not. + with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts: + phase_status = ts.get_status(SUBMIT_PHASE) + if phase_status != TEST_PASS_STATUS: + ts.set_status(SUBMIT_PHASE, TEST_PASS_STATUS) + + # If this is a resubmit check the hidden file .submit_options for + # any submit options used on the original submit and use them again + submit_options = os.path.join(caseroot, ".submit_options") + if resubmit and os.path.exists(submit_options): + config = configparser.RawConfigParser() + config.read(submit_options) + if not skip_pnl and config.has_option("SubmitOptions", "skip_pnl"): + skip_pnl = config.getboolean("SubmitOptions", "skip_pnl") + if mail_user is None and config.has_option("SubmitOptions", "mail_user"): + mail_user = config.get("SubmitOptions", "mail_user") + if mail_type is None and config.has_option("SubmitOptions", "mail_type"): + mail_type = str(config.get("SubmitOptions", "mail_type")).split(",") + if batch_args is None and config.has_option("SubmitOptions", "batch_args"): + batch_args = config.get("SubmitOptions", 
"batch_args") + + is_batch = self.get_value("BATCH_SYSTEM") is not None + + try: + functor = lambda: _submit( + self, + job=job, + no_batch=no_batch, + prereq=prereq, + allow_fail=allow_fail, + resubmit=resubmit, + resubmit_immediate=resubmit_immediate, + skip_pnl=skip_pnl, + mail_user=mail_user, + mail_type=mail_type, + batch_args=batch_args, + workflow=workflow, + chksum=chksum, + ) + run_and_log_case_status( + functor, + "case.submit", + caseroot=caseroot, + custom_success_msg_functor=lambda x: x.split(":")[-1], + is_batch=is_batch, + ) + except BaseException: # Want to catch KeyboardInterrupt too + # If something failed in the batch system, make sure to mark + # the test as failed if we are running a test. + if self.get_value("TEST"): + with TestStatus(test_dir=caseroot, test_name=casebaseid) as ts: + ts.set_status(SUBMIT_PHASE, TEST_FAIL_STATUS) + + raise
+ + + +
+[docs]
+def check_case(self, skip_pnl=False, chksum=False):
+    self.check_lockedfiles()
+    if not skip_pnl:
+        self.create_namelists()  # Must be called before check_all_input_data
+    logger.info("Checking that inputdata is available as part of case submission")
+    if not self.get_value("TEST"):
+        self.check_all_input_data(chksum=chksum)
+
+    if self.get_value("COMP_WAV") == "ww":
+        # the ww3 buildnml has dependencies on inputdata so we must run it again
+        self.create_namelists(component="WAV")
+
+    if self.get_value("COMP_INTERFACE") == "nuopc":
+        #
+        # Check that run length is a multiple of the longest component
+        # coupling interval; the longest interval corresponds to the smallest NCPL value.
+        # Models using the nuopc interface will fail at initialization unless
+        # NCPL follows these rules; other models would only fail later, so for
+        # them this test is skipped and short tests can be run without adjusting NCPL
+        #
+        maxncpl = 10000
+        minncpl = 0
+        maxcomp = None
+        for comp in self.get_values("COMP_CLASSES"):
+            if comp == "CPL":
+                continue
+            compname = self.get_value("COMP_{}".format(comp))
+
+            # ignore stub components in this test. 
+ if compname == "s{}".format(comp.lower()): + ncpl = None + else: + ncpl = self.get_value("{}_NCPL".format(comp)) + + if ncpl and maxncpl > ncpl: + maxncpl = ncpl + maxcomp = comp + if ncpl and minncpl < ncpl: + minncpl = ncpl + + ncpl_base_period = self.get_value("NCPL_BASE_PERIOD") + if ncpl_base_period == "hour": + coupling_secs = 3600 / maxncpl + timestep = 3600 / minncpl + elif ncpl_base_period == "day": + coupling_secs = 86400 / maxncpl + timestep = 86400 / minncpl + elif ncpl_base_period == "year": + coupling_secs = 31536000 / maxncpl + timestep = 31536000 / minncpl + elif ncpl_base_period == "decade": + coupling_secs = 315360000 / maxncpl + timestep = 315360000 / minncpl + stop_option = self.get_value("STOP_OPTION") + stop_n = self.get_value("STOP_N") + if stop_option == "nsteps": + stop_option = "seconds" + stop_n = stop_n * timestep + + runtime = get_time_in_seconds(stop_n, stop_option) + expect( + runtime >= coupling_secs and runtime % coupling_secs == 0, + " Runtime ({0} s) must be a multiple of the longest coupling interval {1}_NCPL ({2}s). Adjust runtime or {1}_NCPL".format( + runtime, maxcomp, coupling_secs + ), + ) + + expect( + self.get_value("BUILD_COMPLETE"), + "Build complete is " "not True please rebuild the model by calling case.build", + ) + logger.info("Check case OK")
+ + + +
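The run-length rule enforced in `check_case` above can be sketched independently. The base-period lengths in seconds follow the same hour/day/year/decade table as the code, and note that the longest coupling interval comes from the smallest NCPL (the least frequently coupled component); the function names are illustrative, not part of CIME:

```python
# Seconds per NCPL base period, matching the branches in check_case
BASE_PERIOD_SECONDS = {
    "hour": 3600,
    "day": 86400,
    "year": 31536000,
    "decade": 315360000,
}

def longest_coupling_interval(base_period, smallest_ncpl):
    """Seconds per coupling step for the least frequently coupled component."""
    return BASE_PERIOD_SECONDS[base_period] / smallest_ncpl

def runtime_ok(runtime_secs, interval_secs):
    """Run length must be a positive multiple of the longest interval."""
    return runtime_secs >= interval_secs and runtime_secs % interval_secs == 0
```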
+[docs] +def check_DA_settings(self): + script = self.get_value("DATA_ASSIMILATION_SCRIPT") + cycles = self.get_value("DATA_ASSIMILATION_CYCLES") + if len(script) > 0 and os.path.isfile(script) and cycles > 0: + logger.info( + "Data Assimilation enabled using script {} with {:d} cycles".format( + script, cycles + ) + )
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_test.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_test.html new file mode 100644 index 00000000000..3cc73ce62e8 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/case_test.html @@ -0,0 +1,207 @@ + + + + + + CIME.case.case_test — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.case.case_test

+"""
+Run a testcase.
+case_test is a member of class Case from case.py
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.utils import expect, find_system_test, append_testlog, find_proc_id
+from CIME.SystemTests.system_tests_common import *
+
+import sys, signal
+
+
+def _iter_signal_names():
+    for signame in [
+        item
+        for item in dir(signal)
+        if item.startswith("SIG") and not item.startswith("SIG_")
+    ]:
+        yield signame
+
+
+def _signal_handler(signum, _):
+    name = "Unknown"
+    for signame in _iter_signal_names():
+        if signum == getattr(signal, signame):
+            name = signame
+
+    # Terminate children
+    proc_ids = find_proc_id(children_only=True)
+    for proc_id in proc_ids:
+        try:
+            os.kill(proc_id, signal.SIGKILL)
+        except OSError:
+            # If the batch system killed the entire process group, these
+            # processes might already be dying
+            pass
+
+    # Throw an exception so SystemTest infrastructure can handle this error
+    expect(False, "Job killed due to receiving signal {:d} ({})".format(signum, name))
+
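The signal-number-to-name lookup performed by the handler above can be sketched on its own, using the same `SIG*`-but-not-`SIG_*` filter as `_iter_signal_names`:

```python
import signal

def signal_name(signum):
    """Return the SIG* name for a numeric signal, or "Unknown" if none matches."""
    for name in dir(signal):
        # skip SIG_DFL, SIG_IGN, etc., which are handlers rather than signals
        if name.startswith("SIG") and not name.startswith("SIG_"):
            if getattr(signal, name) == signum:
                return name
    return "Unknown"
```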
+
+def _set_up_signal_handlers():
+    """
+    Add handlers for all signals that might be used to abort a test
+
+    We need to handle a wide variety due to different implementations of the
+    timeout mechanism for different batch systems.
+    """
+    for signame in ["SIGINT", "SIGTERM", "SIGXCPU", "SIGUSR1", "SIGUSR2"]:
+        signum = getattr(signal, signame)
+        signal.signal(signum, _signal_handler)
+
+
+
+[docs] +def case_test(self, testname=None, reset=False, skip_pnl=False): + if testname is None: + testname = self.get_value("TESTCASE") + + expect(testname is not None, "testname argument not resolved") + logging.warning("Running test for {}".format(testname)) + + _set_up_signal_handlers() + + try: + # The following line can throw exceptions if the testname is + # not found or the test constructor throws. We need to be + # sure to leave TestStatus in the appropriate state if that + # happens. + test = find_system_test(testname, self)(self) + except BaseException: + caseroot = self.get_value("CASEROOT") + with TestStatus(test_dir=caseroot) as ts: + ts.set_status(RUN_PHASE, TEST_FAIL_STATUS, comments="failed to initialize") + append_testlog(str(sys.exc_info()[1])) + raise + + if reset: + logger.info("Reset test to initial conditions and exit") + # pylint: disable=protected-access + test._resetup_case(RUN_PHASE) + return True + success = test.run(skip_pnl=skip_pnl) + + return success
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/check_input_data.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/check_input_data.html new file mode 100644 index 00000000000..24cfc925df9 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/check_input_data.html @@ -0,0 +1,812 @@ + + + + + + CIME.case.check_input_data — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.case.check_input_data

+"""
+API for checking input for testcase
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.utils import SharedArea, find_files, safe_copy, expect
+from CIME.XML.inputdata import Inputdata
+import CIME.Servers
+
+import glob, hashlib, shutil
+
+logger = logging.getLogger(__name__)
+# The inputdata_checksum.dat file will be read into this hash if it's available
+chksum_hash = dict()
+local_chksum_file = "inputdata_checksum.dat"
+
+
+def _download_checksum_file(rundir):
+    """
+    Download the checksum files from each server and merge them into rundir.
+    """
+    inputdata = Inputdata()
+    protocol = "svn"
+    chksum_found = False
+    # download and merge all available chksum files.
+    while protocol is not None:
+        protocol, address, user, passwd, chksum_file, _, _ = inputdata.get_next_server()
+        if protocol not in vars(CIME.Servers):
+            logger.info("Client protocol {} not enabled".format(protocol))
+            continue
+        logger.info(
+            "Using protocol {} with user {} and passwd {}".format(
+                protocol, user, passwd
+            )
+        )
+        if protocol == "svn":
+            server = CIME.Servers.SVN(address, user, passwd)
+        elif protocol == "gftp":
+            server = CIME.Servers.GridFTP(address, user, passwd)
+        elif protocol == "ftp":
+            server = CIME.Servers.FTP.ftp_login(address, user, passwd)
+        elif protocol == "wget":
+            server = CIME.Servers.WGET.wget_login(address, user, passwd)
+        else:
+            expect(False, "Unsupported inputdata protocol: {}".format(protocol))
+        if not server:
+            continue
+
+        if chksum_file:
+            chksum_found = True
+        else:
+            continue
+
+        success = False
+        rel_path = chksum_file
+        full_path = os.path.join(rundir, local_chksum_file)
+        new_file = full_path + ".raw"
+        protocol = type(server).__name__
+        logger.info(
+            "Trying to download file: '{}' to path '{}' using {} protocol.".format(
+                rel_path, new_file, protocol
+            )
+        )
+        tmpfile = None
+        if os.path.isfile(full_path):
+            tmpfile = full_path + ".tmp"
+            os.rename(full_path, tmpfile)
+        # Use umask to make sure files are group read/writable. As long as parent directories
+        # have +s, then everything should work.
+        success = server.getfile(rel_path, new_file)
+        if success:
+            _reformat_chksum_file(full_path, new_file)
+            if tmpfile:
+                _merge_chksum_files(full_path, tmpfile)
+            chksum_hash.clear()
+        else:
+            if tmpfile and os.path.isfile(tmpfile):
+                os.rename(tmpfile, full_path)
+                logger.warning(
+                    "Could not automatically download file "
+                    + full_path
+                    + " Restoring existing version."
+                )
+            else:
+                logger.warning(
+                    "Could not automatically download file {}".format(full_path)
+                )
+    return chksum_found
+
+
+def _reformat_chksum_file(chksum_file, server_file):
+    """
+    The checksum file on the server has 8 space-separated columns; we need only the first and last ones.
+    This function extracts the first and last columns of server_file and saves them to chksum_file
+    """
+    with open(server_file) as fd, open(chksum_file, "w") as fout:
+        lines = fd.readlines()
+        for line in lines:
+            lsplit = line.split()
+            if len(lsplit) < 8 or " DIR " in line:
+                continue
+
+            # remove the first directory ('inputdata/') from the filename
+            chksum = lsplit[0]
+            fname = (lsplit[7]).split("/", 1)[1]
+            fout.write(" ".join((chksum, fname)) + "\n")
+    os.remove(server_file)
+
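The column handling in `_reformat_chksum_file` can be sketched as a pure function over the listing's lines (the sample row in the test is illustrative, not a real server listing):

```python
def reformat_chksum_lines(lines):
    """From an 8-column server listing, keep 'checksum filename' pairs,
    dropping short rows and directory entries, and stripping the leading
    directory (e.g. 'inputdata/') from each path."""
    out = []
    for line in lines:
        cols = line.split()
        if len(cols) < 8 or " DIR " in line:
            continue
        fname = cols[7].split("/", 1)[1]  # drop the first path component
        out.append("{} {}".format(cols[0], fname))
    return out
```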
+
+def _merge_chksum_files(new_file, old_file):
+    """
+    If more than one server checksum file is available, this merges the files and removes
+    any duplicate lines
+    """
+    with open(old_file) as fin:
+        lines = fin.readlines()
+    with open(new_file) as fin:
+        lines += fin.readlines()
+    lines = set(lines)
+    with open(new_file, "w") as fout:
+        fout.write("".join(lines))
+    os.remove(old_file)
+
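`_merge_chksum_files` deduplicates by converting the combined lines to a set, which does not preserve order; a minimal sketch of that merge:

```python
def merge_unique_lines(new_lines, old_lines):
    """Union of two checksum listings with duplicate lines removed
    (unordered, mirroring the set() used above)."""
    return set(new_lines) | set(old_lines)
```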
+
+def _download_if_in_repo(
+    server, input_data_root, rel_path, isdirectory=False, ic_filepath=None
+):
+    """
+    Return True if successfully downloaded
+    server is an object handle of type CIME.Servers
+    input_data_root is the local path to inputdata (DIN_LOC_ROOT)
+    rel_path is the path to the file or directory relative to input_data_root
+    isdirectory indicates that this is a directory download rather than a single file
+    """
+    if not (rel_path and server.fileexists(rel_path)):
+        return False
+    full_path = os.path.join(input_data_root, rel_path)
+    if ic_filepath:
+        full_path = full_path.replace(ic_filepath, "/")
+    logger.info(
+        "Trying to download file: '{}' to path '{}' using {} protocol.".format(
+            rel_path, full_path, type(server).__name__
+        )
+    )
+    # Make sure local path exists, create if it does not
+    if isdirectory or full_path.endswith(os.sep):
+        if not os.path.exists(full_path):
+            logger.info("Creating directory {}".format(full_path))
+            os.makedirs(full_path + ".tmp")
+        isdirectory = True
+    elif not os.path.exists(os.path.dirname(full_path)):
+        os.makedirs(os.path.dirname(full_path))
+
+    # Use umask to make sure files are group read/writable. As long as parent directories
+    # have +s, then everything should work.
+    if isdirectory:
+        success = server.getdirectory(rel_path, full_path + ".tmp")
+        # this is intended to prevent a race condition in which
+        # one case attempts to use a refdir before another one has
+        # completed the download
+        if success:
+            os.rename(full_path + ".tmp", full_path)
+        else:
+            shutil.rmtree(full_path + ".tmp")
+    else:
+        success = server.getfile(rel_path, full_path)
+
+    return success
+
+
+def _check_all_input_data_impl(
+    self,
+    protocol,
+    address,
+    input_data_root,
+    data_list_dir,
+    download,
+    chksum,
+):
+    success = False
+    if protocol is not None and address is not None:
+        success = self.check_input_data(
+            protocol=protocol,
+            address=address,
+            download=download,
+            input_data_root=input_data_root,
+            data_list_dir=data_list_dir,
+            chksum=chksum,
+        )
+    else:
+        if chksum:
+            chksum_found = _download_checksum_file(self.get_value("RUNDIR"))
+
+        clm_usrdat_name = self.get_value("CLM_USRDAT_NAME")
+        if clm_usrdat_name and clm_usrdat_name == "UNSET":
+            clm_usrdat_name = None
+
+        if download and clm_usrdat_name:
+            success = _downloadfromserver(
+                self,
+                input_data_root,
+                data_list_dir,
+                attributes={"CLM_USRDAT_NAME": clm_usrdat_name},
+            )
+        if not success:
+            success = self.check_input_data(
+                protocol=protocol,
+                address=address,
+                download=False,
+                input_data_root=input_data_root,
+                data_list_dir=data_list_dir,
+                chksum=chksum and chksum_found,
+            )
+        if download and not success:
+            if not chksum:
+                chksum_found = _download_checksum_file(self.get_value("RUNDIR"))
+            success = _downloadfromserver(self, input_data_root, data_list_dir)
+
+    expect(
+        not download or (download and success),
+        "Could not find all inputdata on any server",
+    )
+    self.stage_refcase(input_data_root=input_data_root, data_list_dir=data_list_dir)
+    return success
+
+
+
+[docs]
+def check_all_input_data(
+    self,
+    protocol=None,
+    address=None,
+    input_data_root=None,
+    data_list_dir="Buildconf",
+    download=True,
+    chksum=False,
+):
+    """
+    Read through all files of the form *.input_data_list in the data_list_dir directory. These files
+    contain a list of input and boundary files needed by each model component. For each file in the
+    list, confirm that it is available in input_data_root and, if not, optionally download it from a
+    server at address using protocol. Perform a chksum of the downloaded file.
+    """
+    # Run the entire impl in a SharedArea to help avoid permission problems
+    with SharedArea():
+        return _check_all_input_data_impl(
+            self, protocol, address, input_data_root, data_list_dir, download, chksum
+        )
+ + + +def _downloadfromserver(case, input_data_root, data_list_dir, attributes=None): + """ + Download files + """ + success = False + protocol = "svn" + inputdata = Inputdata() + if not input_data_root: + input_data_root = case.get_value("DIN_LOC_ROOT") + + while not success and protocol is not None: + protocol, address, user, passwd, _, ic_filepath, _ = inputdata.get_next_server( + attributes=attributes + ) + logger.info("Checking server {} with protocol {}".format(address, protocol)) + success = case.check_input_data( + protocol=protocol, + address=address, + download=True, + input_data_root=input_data_root, + data_list_dir=data_list_dir, + user=user, + passwd=passwd, + ic_filepath=ic_filepath, + ) + return success + + +
+[docs] +def stage_refcase(self, input_data_root=None, data_list_dir=None): + """ + Get a REFCASE for a hybrid or branch run + This is the only case in which we are downloading an entire directory instead of + a single file at a time. + """ + get_refcase = self.get_value("GET_REFCASE") + run_type = self.get_value("RUN_TYPE") + continue_run = self.get_value("CONTINUE_RUN") + + # We do not fully populate the inputdata directory on every + # machine and do not expect every user to download the 3TB+ of + # data in our inputdata repository. This code checks for the + # existence of inputdata in the local inputdata directory and + # attempts to download data from the server if it's needed and + # missing. + if get_refcase and run_type != "startup" and not continue_run: + din_loc_root = self.get_value("DIN_LOC_ROOT") + run_refdate = self.get_value("RUN_REFDATE") + run_refcase = self.get_value("RUN_REFCASE") + run_refdir = self.get_value("RUN_REFDIR") + rundir = self.get_value("RUNDIR") + + if os.path.isabs(run_refdir): + refdir = run_refdir + expect( + os.path.isdir(refdir), + "Reference case directory {} does not exist or is not readable".format( + refdir + ), + ) + + else: + refdir = os.path.join(din_loc_root, run_refdir, run_refcase, run_refdate) + if not os.path.isdir(refdir): + logger.warning( + "Refcase not found in {}, will attempt to download from inputdata".format( + refdir + ) + ) + with open( + os.path.join("Buildconf", "refcase.input_data_list"), "w" + ) as fd: + fd.write("refdir = {}{}".format(refdir, os.sep)) + if input_data_root is None: + input_data_root = din_loc_root + if data_list_dir is None: + data_list_dir = "Buildconf" + success = _downloadfromserver( + self, input_data_root=input_data_root, data_list_dir=data_list_dir + ) + expect(success, "Could not download refcase from any server") + + logger.info(" - Prestaging REFCASE ({}) to {}".format(refdir, rundir)) + + # prestage the reference case's files. 
+ + if not os.path.exists(rundir): + logger.debug("Creating run directory: {}".format(rundir)) + os.makedirs(rundir) + rpointerfile = None + # copy the refcases' rpointer files to the run directory + for rpointerfile in glob.iglob(os.path.join("{}", "*rpointer*").format(refdir)): + logger.info("Copy rpointer {}".format(rpointerfile)) + safe_copy(rpointerfile, rundir) + os.chmod(os.path.join(rundir, os.path.basename(rpointerfile)), 0o644) + expect( + rpointerfile, + "Reference case directory {} does not contain any rpointer files".format( + refdir + ), + ) + # link everything else + + for rcfile in glob.iglob(os.path.join(refdir, "*")): + rcbaseline = os.path.basename(rcfile) + if not os.path.exists("{}/{}".format(rundir, rcbaseline)): + logger.info("Staging file {}".format(rcfile)) + os.symlink(rcfile, "{}/{}".format(rundir, rcbaseline)) + # Backward compatibility, some old refcases have cam2 in the name + # link to local cam file. + for cam2file in glob.iglob(os.path.join("{}", "*.cam2.*").format(rundir)): + camfile = cam2file.replace("cam2", "cam") + os.symlink(cam2file, camfile) + elif not get_refcase and run_type != "startup": + logger.info( + "GET_REFCASE is false, the user is expected to stage the refcase to the run directory." + ) + if os.path.exists(os.path.join("Buildconf", "refcase.input_data_list")): + os.remove(os.path.join("Buildconf", "refcase.input_data_list")) + return True
+ + + +def _check_input_data_impl( + case, + protocol, + address, + input_data_root, + data_list_dir, + download, + user, + passwd, + chksum, + ic_filepath, +): + case.load_env(reset=True) + rundir = case.get_value("RUNDIR") + # Fill in defaults as needed + input_data_root = ( + case.get_value("DIN_LOC_ROOT") if input_data_root is None else input_data_root + ) + input_ic_root = case.get_value("DIN_LOC_IC", resolved=True) + expect( + os.path.isdir(data_list_dir), + "Invalid data_list_dir directory: '{}'".format(data_list_dir), + ) + + data_list_files = find_files(data_list_dir, "*.input_data_list") + if not data_list_files: + logger.warning( + "WARNING: No .input_data_list files found in dir '{}'".format(data_list_dir) + ) + + no_files_missing = True + if download: + if protocol not in vars(CIME.Servers): + logger.info("Client protocol {} not enabled".format(protocol)) + return False + logger.info( + "Using protocol {} with user {} and passwd {}".format( + protocol, user, passwd + ) + ) + if protocol == "svn": + server = CIME.Servers.SVN(address, user, passwd) + elif protocol == "gftp": + server = CIME.Servers.GridFTP(address, user, passwd) + elif protocol == "ftp": + server = CIME.Servers.FTP.ftp_login(address, user, passwd) + elif protocol == "wget": + server = CIME.Servers.WGET.wget_login(address, user, passwd) + else: + expect(False, "Unsupported inputdata protocol: {}".format(protocol)) + if not server: + return None + + for data_list_file in data_list_files: + logger.info("Loading input file list: '{}'".format(data_list_file)) + with open(data_list_file, "r") as fd: + lines = fd.readlines() + + for line in lines: + line = line.strip() + use_ic_path = False + if line and not line.startswith("#"): + tokens = line.split("=") + description, full_path = tokens[0].strip(), tokens[1].strip() + if ( + description.endswith("datapath") + or description.endswith("data_path") + or full_path.endswith("/dev/null") + ): + continue + if description.endswith("file") or 
description.endswith("filename"): + # There are required input data with key, or 'description' entries + # that specify in their names whether they are files or filenames + # rather than 'datapath's or 'data_path's so we check to make sure + # the input data list has correct non-path values for input files. + # This check happens whether or not a file already exists locally. + expect( + (not full_path.endswith(os.sep)), + "Unsupported directory path in input_data_list named {}. Line entry is '{} = {}'.".format( + data_list_file, description, full_path + ), + ) + if full_path: + # expand xml variables + full_path = case.get_resolved_value(full_path) + rel_path = full_path + if input_ic_root and input_ic_root in full_path and ic_filepath: + rel_path = full_path.replace(input_ic_root, ic_filepath) + use_ic_path = True + elif input_data_root in full_path: + rel_path = full_path.replace(input_data_root, "") + elif input_ic_root and ( + input_ic_root not in input_data_root + and input_ic_root in full_path + ): + if ic_filepath: + rel_path = full_path.replace(input_ic_root, ic_filepath) + use_ic_path = True + model = os.path.basename(data_list_file).split(".")[0] + isdirectory = rel_path.endswith(os.sep) + + if ( + "/" in rel_path + and rel_path == full_path + and not full_path.startswith("unknown") + ): + # User pointing to a file outside of input_data_root, we cannot determine + # rel_path, and so cannot download the file. 
If it already exists, we can + # proceed + if not os.path.exists(full_path): + print( + "Model {} missing file {} = '{}'".format( + model, description, full_path + ) + ) + # Data download path must be DIN_LOC_ROOT, DIN_LOC_IC or RUNDIR + + rundir = case.get_value("RUNDIR") + if download: + if full_path.startswith(rundir): + filepath = os.path.dirname(full_path) + if not os.path.exists(filepath): + logger.info( + "Creating directory {}".format(filepath) + ) + os.makedirs(filepath) + tmppath = full_path[len(rundir) + 1 :] + success = _download_if_in_repo( + server, + os.path.join(rundir, "inputdata"), + tmppath[10:], + isdirectory=isdirectory, + ic_filepath="/", + ) + no_files_missing = success + else: + logger.warning( + " Cannot download file since it lives outside of the input_data_root '{}'".format( + input_data_root + ) + ) + else: + no_files_missing = False + else: + logger.debug(" Found input file: '{}'".format(full_path)) + else: + # There are some special values of rel_path that + # we need to ignore - some of the component models + # set things like 'NULL' or 'same_as_TS' - + # basically if rel_path does not contain '/' (a + # directory tree) you can assume it's a special + # value and ignore it (perhaps with a warning) + + if ( + "/" in rel_path + and not os.path.exists(full_path) + and not full_path.startswith("unknown") + ): + print( + "Model {} missing file {} = '{}'".format( + model, description, full_path + ) + ) + if download: + if use_ic_path: + success = _download_if_in_repo( + server, + input_ic_root, + rel_path.strip(os.sep), + isdirectory=isdirectory, + ic_filepath=ic_filepath, + ) + else: + success = _download_if_in_repo( + server, + input_data_root, + rel_path.strip(os.sep), + isdirectory=isdirectory, + ic_filepath=ic_filepath, + ) + if not success: + no_files_missing = False + if success and chksum: + verify_chksum( + input_data_root, + rundir, + rel_path.strip(os.sep), + isdirectory, + ) + else: + no_files_missing = False + else: + if 
chksum: + verify_chksum( + input_data_root, + rundir, + rel_path.strip(os.sep), + isdirectory, + ) + logger.info( + "Chksum passed for file {}".format( + os.path.join(input_data_root, rel_path) + ) + ) + logger.debug( + " Already had input file: '{}'".format(full_path) + ) + else: + model = os.path.basename(data_list_file).split(".")[0] + logger.warning( + "Model {} no file specified for {}".format(model, description) + ) + + return no_files_missing + + +
+[docs]
+def check_input_data(
+    case,
+    protocol="svn",
+    address=None,
+    input_data_root=None,
+    data_list_dir="Buildconf",
+    download=False,
+    user=None,
+    passwd=None,
+    chksum=False,
+    ic_filepath=None,
+):
+    """
+    For a given case, check for the relevant input data as specified in data_list_dir/*.input_data_list
+    in the directory input_data_root; if not found, optionally download it using the servers specified
+    in config_inputdata.xml. If a chksum file is available, compute the chksum and compare it to that
+    in the file.
+    Return True if no files are missing
+    """
+    # Run the entire impl in a SharedArea to help avoid permission problems
+    with SharedArea():
+        return _check_input_data_impl(
+            case,
+            protocol,
+            address,
+            input_data_root,
+            data_list_dir,
+            download,
+            user,
+            passwd,
+            chksum,
+            ic_filepath,
+        )
+ + + +
+[docs]
+def verify_chksum(input_data_root, rundir, filename, isdirectory):
+    """
+    For the file in filename, perform a chksum and compare the result to that stored in
+    the local checksum file; if isdirectory, chksum all files of the form *.* in the directory
+    """
+    hashfile = os.path.join(rundir, local_chksum_file)
+    if not chksum_hash:
+        if not os.path.isfile(hashfile):
+            logger.warning("Failed to find or download file {}".format(hashfile))
+            return
+
+        with open(hashfile) as fd:
+            lines = fd.readlines()
+            for line in lines:
+                fchksum, fname = line.split()
+                if fname in chksum_hash:
+                    expect(
+                        chksum_hash[fname] == fchksum,
+                        " Inconsistent hashes in chksum for file {}".format(fname),
+                    )
+                else:
+                    chksum_hash[fname] = fchksum
+
+    if isdirectory:
+        filenames = glob.glob(os.path.join(filename, "*.*"))
+    else:
+        filenames = [filename]
+    for fname in filenames:
+        if not os.sep in fname:
+            continue
+        chksum = md5(os.path.join(input_data_root, fname))
+        if chksum_hash:
+            if not fname in chksum_hash:
+                logger.warning(
+                    "Did not find hash for file {} in chksum file {}".format(
+                        filename, hashfile
+                    )
+                )
+            else:
+                expect(
+                    chksum == chksum_hash[fname],
+                    "chksum mismatch for file {} expected {} found {}".format(
+                        os.path.join(input_data_root, fname), chksum, chksum_hash[fname]
+                    ),
+                )
+ + + +
+[docs]
+def md5(fname):
+    """
+    Performs an md5 sum one chunk at a time to avoid memory issues with large files.
+    """
+    hash_md5 = hashlib.md5()
+    with open(fname, "rb") as f:
+        for chunk in iter(lambda: f.read(4096), b""):
+            hash_md5.update(chunk)
+    return hash_md5.hexdigest()
+ +
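The chunked `md5` above keeps memory use constant for arbitrarily large files by streaming 4 KiB at a time through the hash. A standalone sketch verifying that the chunked digest matches a one-shot digest (the `md5_chunked` name is illustrative, not CIME API):

```python
import hashlib
import os
import tempfile

def md5_chunked(fname, chunk_size=4096):
    # Stream the file through the hash in fixed-size chunks so memory
    # use stays constant regardless of file size.
    h = hashlib.md5()
    with open(fname, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# verify against hashing the whole file in one read
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"x" * 10000)  # larger than a single chunk
with open(path, "rb") as f:
    one_shot = hashlib.md5(f.read()).hexdigest()
chunked = md5_chunked(path)
```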
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/check_lockedfiles.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/check_lockedfiles.html new file mode 100644 index 00000000000..d5708e7e597 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/check_lockedfiles.html @@ -0,0 +1,281 @@ + + + + + + CIME.case.check_lockedfiles — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.case.check_lockedfiles

+"""
+API for checking locked files
+check_lockedfile, check_lockedfiles, and check_pelayouts_require_rebuild are members
+of the Case class from file case.py
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.XML.env_build import EnvBuild
+from CIME.XML.env_case import EnvCase
+from CIME.XML.env_mach_pes import EnvMachPes
+from CIME.XML.env_batch import EnvBatch
+from CIME.locked_files import unlock_file, LOCKED_DIR
+from CIME.build import clean
+
+logger = logging.getLogger(__name__)
+
+import glob
+
+
+
+[docs]
+def check_pelayouts_require_rebuild(self, models):
+    """
+    Check whether a PE-layout change requires a rebuild; expects cwd to be caseroot
+    """
+    locked_pes = os.path.join(LOCKED_DIR, "env_mach_pes.xml")
+    if os.path.exists(locked_pes):
+        # Look to see if $comp_PE_CHANGE_REQUIRES_REBUILD is defined
+        # for any component
+        env_mach_pes_locked = EnvMachPes(
+            infile=locked_pes, components=self.get_values("COMP_CLASSES")
+        )
+        for comp in models:
+            if self.get_value("{}_PE_CHANGE_REQUIRES_REBUILD".format(comp)):
+                # Changing these values in env_mach_pes.xml will force
+                # you to clean the corresponding component
+                old_tasks = env_mach_pes_locked.get_value("NTASKS_{}".format(comp))
+                old_threads = env_mach_pes_locked.get_value("NTHRDS_{}".format(comp))
+                old_inst = env_mach_pes_locked.get_value("NINST_{}".format(comp))
+
+                new_tasks = self.get_value("NTASKS_{}".format(comp))
+                new_threads = self.get_value("NTHRDS_{}".format(comp))
+                new_inst = self.get_value("NINST_{}".format(comp))
+
+                if (
+                    old_tasks != new_tasks
+                    or old_threads != new_threads
+                    or old_inst != new_inst
+                ):
+                    logging.warning(
+                        "{} pe change requires clean build {} {}".format(
+                            comp, old_tasks, new_tasks
+                        )
+                    )
+                    cleanflag = comp.lower()
+                    clean(self, cleanlist=[cleanflag])
+
+        unlock_file("env_mach_pes.xml", self.get_value("CASEROOT"))
+ + + +
+[docs] +def check_lockedfile(self, filebase): + caseroot = self.get_value("CASEROOT") + + cfile = os.path.join(caseroot, filebase) + lfile = os.path.join(caseroot, "LockedFiles", filebase) + components = self.get_values("COMP_CLASSES") + if os.path.isfile(cfile): + objname = filebase.split(".")[0] + if objname == "env_build": + f1obj = self.get_env("build") + f2obj = EnvBuild(caseroot, lfile, read_only=True) + elif objname == "env_mach_pes": + f1obj = self.get_env("mach_pes") + f2obj = EnvMachPes(caseroot, lfile, components=components, read_only=True) + elif objname == "env_case": + f1obj = self.get_env("case") + f2obj = EnvCase(caseroot, lfile, read_only=True) + elif objname == "env_batch": + f1obj = self.get_env("batch") + f2obj = EnvBatch(caseroot, lfile, read_only=True) + else: + logging.warning( + "Locked XML file '{}' is not current being handled".format(filebase) + ) + return + + diffs = f1obj.compare_xml(f2obj) + if diffs: + + logging.warning("File {} has been modified".format(lfile)) + toggle_build_status = False + for key in diffs.keys(): + if key != "BUILD_COMPLETE": + logging.warning( + " found difference in {} : case {} locked {}".format( + key, repr(diffs[key][0]), repr(diffs[key][1]) + ) + ) + toggle_build_status = True + if objname == "env_mach_pes": + expect(False, "Invoke case.setup --reset ") + elif objname == "env_case": + expect( + False, + "Cannot change file env_case.xml, please" + " recover the original copy from LockedFiles", + ) + elif objname == "env_build": + if toggle_build_status: + logging.warning("Setting build complete to False") + self.set_value("BUILD_COMPLETE", False) + if "PIO_VERSION" in diffs: + self.set_value("BUILD_STATUS", 2) + logging.critical( + "Changing PIO_VERSION requires running " + "case.build --clean-all and rebuilding" + ) + else: + self.set_value("BUILD_STATUS", 1) + + elif objname == "env_batch": + expect( + False, + "Batch configuration has changed, please run case.setup --reset", + ) + else: + expect(False, 
"'{}' diff was not handled".format(objname))
+ + + +
+[docs] +def check_lockedfiles(self, skip=None): + """ + Check that all lockedfiles match what's in case + + If caseroot is not specified, it is set to the current working directory + """ + caseroot = self.get_value("CASEROOT") + lockedfiles = glob.glob(os.path.join(caseroot, "LockedFiles", "*.xml")) + skip = [] if skip is None else skip + skip = [skip] if isinstance(skip, str) else skip + for lfile in lockedfiles: + fpart = os.path.basename(lfile) + # ignore files used for tests such as env_mach_pes.ERP1.xml by looking for extra dots in the name + if fpart.count(".") > 1: + continue + + do_skip = False + for item in skip: + if fpart.startswith(item): + do_skip = True + break + + if not do_skip: + self.check_lockedfile(fpart)
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/preview_namelists.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/preview_namelists.html new file mode 100644 index 00000000000..b4cd702feac --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/case/preview_namelists.html @@ -0,0 +1,264 @@ + + + + + + CIME.case.preview_namelists — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.case.preview_namelists

+"""
+API for preview namelist
+create_dirs and create_namelists are members of the Case class from file case.py
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.utils import import_and_run_sub_or_cmd, safe_copy
+import time, glob
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +def create_dirs(self): + """ + Make necessary directories for case + """ + # Get data from XML + exeroot = self.get_value("EXEROOT") + libroot = self.get_value("LIBROOT") + incroot = self.get_value("INCROOT") + rundir = self.get_value("RUNDIR") + caseroot = self.get_value("CASEROOT") + docdir = os.path.join(caseroot, "CaseDocs") + dirs_to_make = [] + models = self.get_values("COMP_CLASSES") + for model in models: + dirname = model.lower() + dirs_to_make.append(os.path.join(exeroot, dirname, "obj")) + + dirs_to_make.extend([exeroot, libroot, incroot, rundir, docdir]) + + for dir_to_make in dirs_to_make: + if not os.path.isdir(dir_to_make) and not os.path.islink(dir_to_make): + try: + logger.debug("Making dir '{}'".format(dir_to_make)) + os.makedirs(dir_to_make) + except OSError as e: + # In a multithreaded situation, we may have lost a race to create this dir. + # We do not want to crash if that's the case. + if not os.path.isdir(dir_to_make): + expect( + False, + "Could not make directory '{}', error: {}".format( + dir_to_make, e + ), + ) + + # As a convenience write the location of the case directory in the bld and run directories + for dir_ in (exeroot, rundir): + with open(os.path.join(dir_, "CASEROOT"), "w+") as fd: + fd.write(caseroot + "\n")
+ + + +
+[docs] +def create_namelists(self, component=None): + """ + Create component namelists + """ + self.flush() + + create_dirs(self) + + casebuild = self.get_value("CASEBUILD") + caseroot = self.get_value("CASEROOT") + rundir = self.get_value("RUNDIR") + + docdir = os.path.join(caseroot, "CaseDocs") + + # Load modules + self.load_env() + + self.stage_refcase() + + # Create namelists - must have cpl last in the list below + # Note - cpl must be last in the loop below so that in generating its namelist, + # it can use xml vars potentially set by other component's buildnml scripts + models = self.get_values("COMP_CLASSES") + models += [models.pop(0)] + for model in models: + model_str = model.lower() + logger.info(" {} {} ".format(time.strftime("%Y-%m-%d %H:%M:%S"), model_str)) + config_file = self.get_value("CONFIG_{}_FILE".format(model_str.upper())) + config_dir = os.path.dirname(config_file) + if model_str == "cpl": + compname = "drv" + else: + compname = self.get_value("COMP_{}".format(model_str.upper())) + if component is None or component == model_str or compname == "ufsatm": + cmd = os.path.join(config_dir, "buildnml") + logger.info("Create namelist for component {}".format(compname)) + import_and_run_sub_or_cmd( + cmd, + (caseroot), + "buildnml", + (self, caseroot, compname), + config_dir, + compname, + case=self, + ) + + logger.debug( + "Finished creating component namelists, component {} models = {}".format( + component, models + ) + ) + + # Save namelists to docdir + if not os.path.isdir(docdir): + os.makedirs(docdir) + try: + with open(os.path.join(docdir, "README"), "w") as fd: + fd.write( + " CESM Resolved Namelist Files\n For documentation only DO NOT MODIFY\n" + ) + except (OSError, IOError) as e: + expect(False, "Failed to write {}/README: {}".format(docdir, e)) + + for cpglob in [ + "*_in_[0-9]*", + "*modelio*", + "*_in", + "nuopc.runconfig", + "*streams*txt*", + "*streams.xml", + "*stxt", + "*maps.rc", + "*cism*.config*", + "nuopc.runseq", + ]: + for 
file_to_copy in glob.glob(os.path.join(rundir, cpglob)): + logger.debug("Copy file from '{}' to '{}'".format(file_to_copy, docdir)) + safe_copy(file_to_copy, docdir) + + # Copy over chemistry mechanism docs if they exist + atmconf = self.get_value("COMP_ATM") + "conf" + if os.path.isdir(os.path.join(casebuild, atmconf)): + for file_to_copy in glob.glob(os.path.join(casebuild, atmconf, "*chem_mech*")): + safe_copy(file_to_copy, docdir)
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/code_checker.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/code_checker.html new file mode 100644 index 00000000000..5c157ad7283 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/code_checker.html @@ -0,0 +1,335 @@ + + + + + + CIME.code_checker — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.code_checker

+"""
+Libraries for checking python code with pylint
+"""
+
+import os
+import json
+
+from CIME.XML.standard_module_setup import *
+
+from CIME.utils import (
+    run_cmd,
+    run_cmd_no_fail,
+    expect,
+    get_cime_root,
+    get_src_root,
+    is_python_executable,
+    get_cime_default_driver,
+)
+
+from multiprocessing.dummy import Pool as ThreadPool
+
+# pylint: disable=import-error
+from distutils.spawn import find_executable
+
+logger = logging.getLogger(__name__)
+
+###############################################################################
+def _run_pylint(all_files, interactive):
+    ###############################################################################
+    pylint = find_executable("pylint")
+
+    cmd_options = (
+        " --disable=I,C,R,logging-not-lazy,wildcard-import,unused-wildcard-import"
+    )
+    cmd_options += (
+        ",fixme,broad-except,bare-except,eval-used,exec-used,global-statement"
+    )
+    cmd_options += ",logging-format-interpolation,no-name-in-module,arguments-renamed"
+    cmd_options += " -j 0 -f json"
+    cimeroot = get_cime_root()
+    srcroot = get_src_root()
+
+    # if "scripts/Tools" in on_file:
+    #     cmd_options +=",relative-import"
+
+    # add init-hook option
+    cmd_options += ' --init-hook=\'sys.path.extend(("%s","%s","%s","%s"))\'' % (
+        os.path.join(cimeroot, "CIME"),
+        os.path.join(cimeroot, "CIME", "Tools"),
+        os.path.join(cimeroot, "scripts", "fortran_unit_testing", "python"),
+        os.path.join(srcroot, "components", "cmeps", "cime_config", "runseq"),
+    )
+
+    files = " ".join(all_files)
+    cmd = "%s %s %s" % (pylint, cmd_options, files)
+    logger.debug("pylint command is %s" % cmd)
+    stat, out, err = run_cmd(cmd, verbose=False, from_dir=cimeroot)
+
+    data = json.loads(out)
+
+    result = {}
+
+    for item in data:
+        if item["type"] != "error":
+            continue
+
+        path = item["path"]
+        message = item["message"]
+        line = item["line"]
+
+        if path in result:
+            result[path].append(f"{message}:{line}")
+        else:
+            result[path] = [
+                f"{message}:{line}",
+            ]
+
+    for k in result.keys():
+        result[k] = "\n".join(set(result[k]))
+
+    return result
+
+    # if stat != 0:
+    #     if interactive:
+    #         logger.info("File %s has pylint problems, please fix\n    Use command: %s" % (on_file, cmd))
+    #         logger.info(out + "\n" + err)
+    #     return (on_file, out + "\n" + err)
+    # else:
+    #     if interactive:
+    #         logger.info("File %s has no pylint problems" % on_file)
+    #     return (on_file, "")
+
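`_run_pylint` above reduces `pylint -f json` output to a per-path map of `message:line` strings, keeping only entries of type `error` and de-duplicating messages. A self-contained sketch of that grouping over a hand-written sample (using `dict.setdefault`, which sidesteps any first-entry special case):

```python
import json

# hand-written sample in the shape pylint emits with -f json
sample = json.loads("""[
  {"type": "error",   "path": "a.py", "message": "undefined-variable", "line": 3},
  {"type": "warning", "path": "a.py", "message": "unused-import",      "line": 1},
  {"type": "error",   "path": "a.py", "message": "no-member",          "line": 9},
  {"type": "error",   "path": "b.py", "message": "syntax-error",       "line": 2}
]""")

result = {}
for item in sample:
    if item["type"] != "error":
        continue  # keep only hard errors, as _run_pylint does
    result.setdefault(item["path"], []).append(
        "{}:{}".format(item["message"], item["line"])
    )

# collapse duplicates, one message per output line per file
report = {path: "\n".join(sorted(set(msgs))) for path, msgs in result.items()}
```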
+
+###############################################################################
+def _matches(file_path, file_ends):
+    ###############################################################################
+    for file_end in file_ends:
+        if file_path.endswith(file_end):
+            return True
+
+    return False
+
+
+###############################################################################
+def _should_pylint_skip(filepath):
+    ###############################################################################
+    # TODO - get rid of this
+    list_of_directories_to_ignore = (
+        "xmlconvertors",
+        "pointclm",
+        "point_clm",
+        "tools",
+        "machines",
+        "apidocs",
+        "doc",
+    )
+    for dir_to_skip in list_of_directories_to_ignore:
+        if dir_to_skip + "/" in filepath:
+            return True
+        # intended to be temporary, file needs update
+        if filepath.endswith("archive_metadata") or filepath.endswith("pgn.py"):
+            return True
+
+    return False
+
+
+###############################################################################
+
+[docs] +def get_all_checkable_files(): + ############################################################################### + cimeroot = get_cime_root() + all_git_files = run_cmd_no_fail( + "git ls-files", from_dir=cimeroot, verbose=False + ).splitlines() + if get_cime_default_driver() == "nuopc": + srcroot = get_src_root() + nuopc_git_files = [] + try: + nuopc_git_files = run_cmd_no_fail( + "git ls-files", + from_dir=os.path.join(srcroot, "components", "cmeps"), + verbose=False, + ).splitlines() + except: + logger.warning("No nuopc driver found in source") + all_git_files.extend( + [ + os.path.join(srcroot, "components", "cmeps", _file) + for _file in nuopc_git_files + ] + ) + files_to_test = [ + item + for item in all_git_files + if ( + (item.endswith(".py") or is_python_executable(os.path.join(cimeroot, item))) + and not _should_pylint_skip(item) + ) + ] + + return files_to_test
+ + + +############################################################################### +
+[docs] +def check_code(files, num_procs=10, interactive=False): + ############################################################################### + """ + Check all python files in the given directory + + Returns True if all files had no problems + """ + # Get list of files to check, we look to see if user-provided file argument + # is a valid file, if not, we search the repo for a file with similar name. + files_to_check = [] + if files: + repo_files = get_all_checkable_files() + for filearg in files: + if os.path.exists(filearg): + files_to_check.append(os.path.abspath(filearg)) + else: + found = False + for repo_file in repo_files: + if repo_file.endswith(filearg): + found = True + files_to_check.append(repo_file) # could have multiple matches + + if not found: + logger.warning( + "Could not find file matching argument '%s'" % filearg + ) + else: + # Check every python file + files_to_check = get_all_checkable_files() + + expect(len(files_to_check) > 0, "No matching files found") + + # No point in using more threads than files + # if len(files_to_check) < num_procs: + # num_procs = len(files_to_check) + + results = _run_pylint(files_to_check, interactive) + + return results
+ + + # pool = ThreadPool(num_procs) + # results = pool.map(lambda x : _run_pylint(x, interactive), files_to_check) + # pool.close() + # pool.join() + # return dict(results) +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/compare_namelists.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/compare_namelists.html new file mode 100644 index 00000000000..73d6e76187b --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/compare_namelists.html @@ -0,0 +1,832 @@ + + + + + + CIME.compare_namelists — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.compare_namelists

+import os, re, logging
+
+from collections import OrderedDict
+from CIME.utils import expect, CIMEError
+
+logger = logging.getLogger(__name__)
+
+# pragma pylint: disable=unsubscriptable-object
+
+###############################################################################
+def _normalize_lists(value_str):
+    ###############################################################################
+    """
+    >>> _normalize_lists("'one two' 'three four'")
+    "'one two','three four'"
+    >>> _normalize_lists("'one two'   'three four'")
+    "'one two','three four'"
+    >>> _normalize_lists("'one two' ,  'three four'")
+    "'one two','three four'"
+    >>> _normalize_lists("'one two'")
+    "'one two'"
+    >>> _normalize_lists("1 2  3, 4 ,  5")
+    '1,2,3,4,5'
+    >>> _normalize_lists("2, 2*13")
+    '2,2*13'
+    >>> _normalize_lists("'DMS -> 1.0 * value.nc'")
+    "'DMS -> 1.0 * value.nc'"
+    >>> _normalize_lists("1.0* value.nc")
+    '1.0*value.nc'
+    >>> _normalize_lists("1.0*value.nc")
+    '1.0*value.nc'
+    """
+    # Handle special case "value * value" which should not be treated as list
+    parsed = re.match(r"^([^*=->\s]*)\s*(\*)\s*(.*)$", value_str)
+    if parsed is not None:
+        value_str = "".join(parsed.groups())
+    result = ""
+    inside_quotes = False
+    idx = 0
+    while idx < len(value_str):
+        value_c = value_str[idx]
+        if value_c == "'":
+            inside_quotes = not inside_quotes
+            result += value_c
+            idx += 1
+        elif value_c.isspace() or value_c == ",":
+            if inside_quotes:
+                result += value_c
+                idx += 1
+            else:
+                result += ","
+                idx += 1
+                while idx < len(value_str):
+                    value_c = value_str[idx]
+                    if not value_c.isspace() and value_c != ",":
+                        break
+                    idx += 1
+        else:
+            result += value_c
+            idx += 1
+
+    return result
+
+
+###############################################################################
+def _interpret_value(value_str, filename):
+    ###############################################################################
+    """
+    >>> _interpret_value("one", "foo")
+    'one'
+    >>> _interpret_value("one, two", "foo")
+    ['one', 'two']
+    >>> _interpret_value("3*1.0", "foo")
+    ['1.0', '1.0', '1.0']
+    >>> _interpret_value("'DMS -> value.nc'", "foo")
+    OrderedDict([('DMS', 'value.nc')])
+    >>> _interpret_value("'DMS -> 1.0 * value.nc'", "foo")
+    OrderedDict([('DMS', '1.0*value.nc')])
+    >>> _interpret_value("'DMS -> 1.0* value.nc'", "foo")
+    OrderedDict([('DMS', '1.0*value.nc')])
+    """
+    comma_re = re.compile(r"\s*,\s*")
+    dict_re = re.compile(r"^'(\S+)\s*->\s*(\S+|(?:\S+\s*\*\s*\S+))\s*'")
+
+    value_str = _normalize_lists(value_str)
+
+    tokens = [item.strip() for item in comma_re.split(value_str) if item.strip() != ""]
+    if "->" in value_str:
+        # dict
+        rv = OrderedDict()
+        for token in tokens:
+            m = dict_re.match(token)
+            expect(
+                m is not None,
+                "In file '{}', Dict entry '{}' does not match expected format".format(
+                    filename, token
+                ),
+            )
+            k, v = m.groups()
+            rv[k] = _interpret_value(v, filename)
+
+        return rv
+    else:
+        new_tokens = []
+        for token in tokens:
+            if "*" in token:
+                try:
+                    # the following ensures that these two namelist settings
+                    # compare as equal: nmlvalue = 1,1,1 versus nmlvalue = 3*1
+                    sub_tokens = [item.strip() for item in token.split("*")]
+                    expect(
+                        len(sub_tokens) == 2,
+                        "Incorrect usage of multiplication in token '{}'".format(token),
+                    )
+                    new_tokens.extend([sub_tokens[1]] * int(sub_tokens[0]))
+                except Exception:
+                    # User probably did not intend to use the * operator as a namelist multiplier
+                    new_tokens.append(token)
+            else:
+                new_tokens.append(token)
+
+        if "," in value_str or len(new_tokens) > 1:
+            return new_tokens
+        else:
+            return new_tokens[0]
+
+
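The `N*value` expansion handled above can be sketched standalone. This is a hypothetical simplification (the helper `expand_repeats` is not part of CIME); it only illustrates why `2, 2*13` and `2, 13, 13` compare as equal, while a value like `1.0*value.nc` is left alone because its left side is not a bare repeat count.

```python
# Hypothetical standalone sketch of the "N*value" expansion rule used by
# _interpret_value above; not part of CIME.
def expand_repeats(tokens):
    out = []
    for tok in tokens:
        head, star, tail = tok.partition("*")
        # Only expand when the left side is a bare integer repeat count
        if star and head.strip().isdigit() and tail.strip():
            out.extend([tail.strip()] * int(head))
        else:
            out.append(tok)
    return out

print(expand_repeats(["2", "2*13"]))  # ['2', '13', '13']
```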
+###############################################################################
+def _parse_namelists(namelist_lines, filename):
+    ###############################################################################
+    """
+    Return data in form: {namelist -> {key -> value} }.
+      value can be an int, string, list, or dict
+
+    >>> teststr = '''&nml
+    ...   val = 'foo'
+    ...   aval = 'one','two', 'three'
+    ...   maval = 'one', 'two',
+    ...       'three', 'four'
+    ...   dval = 'one->two', 'three -> four'
+    ...   mdval = 'one   -> two',
+    ...           'three -> four',
+    ...           'five -> six'
+    ...   nval = 1850
+    ... /
+    ...
+    ... # Hello
+    ...
+    ...   &nml2
+    ...   val2 = .false.
+    ... /
+    ... '''
+    >>> _parse_namelists(teststr.splitlines(), 'foo')
+    OrderedDict([('nml', OrderedDict([('val', "'foo'"), ('aval', ["'one'", "'two'", "'three'"]), ('maval', ["'one'", "'two'", "'three'", "'four'"]), ('dval', OrderedDict([('one', 'two'), ('three', 'four')])), ('mdval', OrderedDict([('one', 'two'), ('three', 'four'), ('five', 'six')])), ('nval', '1850')])), ('nml2', OrderedDict([('val2', '.false.')]))])
+
+    >>> teststr = '''&fire_emis_nl
+    ... fire_emis_factors_file = 'fire_emis_factors_c140116.nc'
+    ... fire_emis_specifier = 'bc_a1 = BC', 'pom_a1 = 1.4*OC', 'pom_a2 = A*B*C', 'SO2 = SO2'
+    ... /
+    ... '''
+    >>> _parse_namelists(teststr.splitlines(), 'foo')
+    OrderedDict([('fire_emis_nl', OrderedDict([('fire_emis_factors_file', "'fire_emis_factors_c140116.nc'"), ('fire_emis_specifier', ["'bc_a1 = BC'", "'pom_a1 = 1.4*OC'", "'pom_a2 = A*B*C'", "'SO2 = SO2'"])]))])
+
+    >>> _parse_namelists('blah', 'foo') # doctest: +IGNORE_EXCEPTION_DETAIL
+    Traceback (most recent call last):
+        ...
+    CIMEError: ERROR: File 'foo' does not appear to be a namelist file, skipping
+
+    >>> teststr = '''&nml
+    ... val = 'one', 'two',
+    ... val2 = 'three'
+    ... /'''
+    >>> _parse_namelists(teststr.splitlines(), 'foo') # doctest: +IGNORE_EXCEPTION_DETAIL
+    Traceback (most recent call last):
+        ...
+    CIMEError: ERROR: In file 'foo', Incomplete multiline variable: 'val'
+
+    >>> teststr = '''&nml
+    ... val = 'one', 'two',
+    ... /'''
+    >>> _parse_namelists(teststr.splitlines(), 'foo') # doctest: +IGNORE_EXCEPTION_DETAIL
+    Traceback (most recent call last):
+        ...
+    CIMEError: ERROR: In file 'foo', Incomplete multiline variable: 'val'
+
+    >>> teststr = '''&nml
+    ... val = 'one', 'two',
+    ...       'three -> four'
+    ... /'''
+    >>> _parse_namelists(teststr.splitlines(), 'foo') # doctest: +IGNORE_EXCEPTION_DETAIL
+    Traceback (most recent call last):
+        ...
+    CIMEError: ERROR: In file 'foo', multiline list variable 'val' had dict entries
+
+    >>> teststr = '''&nml
+    ... val = 2, 2*13
+    ... /'''
+    >>> _parse_namelists(teststr.splitlines(), 'foo')
+    OrderedDict([('nml', OrderedDict([('val', ['2', '13', '13'])]))])
+
+    >>> teststr = '''&nml
+    ... val = 2 2 3
+    ... /'''
+    >>> _parse_namelists(teststr.splitlines(), 'foo')
+    OrderedDict([('nml', OrderedDict([('val', ['2', '2', '3'])]))])
+
+    >>> teststr = '''&nml
+    ... val =  'a brown cow' 'a red hen'
+    ... /'''
+    >>> _parse_namelists(teststr.splitlines(), 'foo')
+    OrderedDict([('nml', OrderedDict([('val', ["'a brown cow'", "'a red hen'"])]))])
+    """
+
+    comment_re = re.compile(r"^[#!]")
+    namelist_re = re.compile(r"^&(\S+)$")
+    name_re = re.compile(r"^([^\s=']+)\s*=\s*(.+)$")
+    rcline_re = re.compile(r"^([^&\s':]+)\s*:\s*(.+)$")
+
+    rv = OrderedDict()
+    current_namelist = None
+    multiline_variable = None  # (name, value)
+    for line in namelist_lines:
+
+        line = line.strip()
+        line = line.replace('"', "'")
+
+        logger.debug("Parsing line: '{}'".format(line))
+
+        if line == "" or comment_re.match(line) is not None:
+            logger.debug("  Line was whitespace or comment, skipping.")
+            continue
+
+        rcline = rcline_re.match(line)
+        if rcline is not None:
+            # Defining a variable (AKA name)
+            name, value = rcline.groups()
+
+            logger.debug("  Parsing variable '{}' with data '{}'".format(name, value))
+
+            if "seq_maps.rc" not in rv:
+                rv["seq_maps.rc"] = OrderedDict()
+
+            expect(
+                name not in rv["seq_maps.rc"],
+                "In file '{}', Duplicate name: '{}'".format(filename, name),
+            )
+            rv["seq_maps.rc"][name] = value
+
+        elif current_namelist is None:
+            # Must start a namelist
+            expect(
+                multiline_variable is None,
+                "In file '{}', Incomplete multiline variable: '{}'".format(
+                    filename,
+                    multiline_variable[0] if multiline_variable is not None else "",
+                ),
+            )
+
+            # Unfortunately, other tools were using the old compare_namelists.pl
+            # script to compare files that are not namelist files. We need a
+            # special error to signal this case
+            if namelist_re.match(line) is None:
+                expect(
+                    rv != OrderedDict(),
+                    "File '{}' does not appear to be a namelist file, skipping".format(
+                        filename
+                    ),
+                )
+                expect(
+                    False,
+                    "In file '{}', Line '{}' did not begin a namelist as expected".format(
+                        filename, line
+                    ),
+                )
+
+            current_namelist = namelist_re.match(line).groups()[0]
+            expect(
+                current_namelist not in rv,
+                "In file '{}', Duplicate namelist '{}'".format(
+                    filename, current_namelist
+                ),
+            )
+
+            rv[current_namelist] = OrderedDict()
+
+            logger.debug("  Starting namelist '{}'".format(current_namelist))
+
+        elif line == "/":
+            # Ends a namelist
+            logger.debug("  Ending namelist '{}'".format(current_namelist))
+
+            expect(
+                multiline_variable is None,
+                "In file '{}', Incomplete multiline variable: '{}'".format(
+                    filename,
+                    multiline_variable[0] if multiline_variable is not None else "",
+                ),
+            )
+
+            current_namelist = None
+
+        elif name_re.match(line):
+            # Defining a variable (AKA name)
+            name, value_str = name_re.match(line).groups()
+
+            logger.debug(
+                "  Parsing variable '{}' with data '{}'".format(name, value_str)
+            )
+
+            expect(
+                multiline_variable is None,
+                "In file '{}', Incomplete multiline variable: '{}'".format(
+                    filename,
+                    multiline_variable[0] if multiline_variable is not None else "",
+                ),
+            )
+            expect(
+                name not in rv[current_namelist],
+                "In file '{}', Duplicate name: '{}'".format(filename, name),
+            )
+
+            real_value = _interpret_value(value_str, filename)
+
+            rv[current_namelist][name] = real_value
+            logger.debug("    Adding value: {}".format(real_value))
+
+            if line.endswith(","):
+                # Value will continue on in subsequent lines
+                multiline_variable = (name, real_value)
+
+                logger.debug("    Var is multiline...")
+
+        elif multiline_variable is not None:
+            # Continuation of list or dict variable
+            current_value = multiline_variable[1]
+            logger.debug(
+                "  Continuing multiline variable '{}' with data '{}'".format(
+                    multiline_variable[0], line
+                )
+            )
+
+            real_value = _interpret_value(line, filename)
+            if type(current_value) is list:
+                expect(
+                    type(real_value) is not OrderedDict,
+                    "In file '{}', multiline list variable '{}' had dict entries".format(
+                        filename, multiline_variable[0]
+                    ),
+                )
+                real_value = real_value if type(real_value) is list else [real_value]
+                current_value.extend(real_value)
+
+            elif type(current_value) is OrderedDict:
+                expect(
+                    type(real_value) is OrderedDict,
+                    "In file '{}', multiline dict variable '{}' had non-dict entries".format(
+                        filename, multiline_variable[0]
+                    ),
+                )
+                current_value.update(real_value)
+
+            else:
+                expect(
+                    False,
+                    "In file '{}', Continuation should have been for list or dict, instead it was: '{}'".format(
+                        filename, type(current_value)
+                    ),
+                )
+
+            logger.debug("    Adding value: {}".format(real_value))
+
+            if not line.endswith(","):
+                # Completed
+                multiline_variable = None
+
+                logger.debug("    Terminating multiline variable")
+
+        else:
+            expect(
+                False, "In file '{}', Unrecognized line: '{}'".format(filename, line)
+            )
+
+    return rv
+
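The multiline handling in `_parse_namelists` hinges on one rule: a value line ending in `,` continues on the next line. A hypothetical standalone sketch of just that rule (the helper `join_continuations` is illustrative only, not CIME code):

```python
# Hypothetical sketch of the continuation rule used by _parse_namelists:
# a value line ending in "," means the value continues on the next line.
def join_continuations(lines):
    out, buf = [], ""
    for line in lines:
        buf += line.strip()
        if not buf.endswith(","):
            # Value is complete; emit it and start a fresh buffer
            out.append(buf)
            buf = ""
    return out

print(join_continuations(["val = 'one', 'two',", "      'three'"]))
```

A trailing buffer left non-empty at the end would correspond to the "Incomplete multiline variable" error raised by the real parser.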
+
+###############################################################################
+def _normalize_string_value(name, value, case):
+    ###############################################################################
+    """
+    Some of the strings in namelists will contain data that's inherently prone
+    to diffs, like file paths, etc. This function attempts to normalize that
+    data so that it will not cause diffs.
+    """
+    # Any occurrence of the case name must be normalized because test-ids might not match
+    if case is not None:
+        case_re = re.compile(r"{}[.]([GC]+)[.]([^./\s]+)".format(case))
+        value = case_re.sub("{}.ACTION.TESTID".format(case), value)
+
+    if name in ["runid", "model_version", "username", "logfile"]:
+        # Don't even attempt to diff these, we don't care
+        return name.upper()
+    elif ":" in value:
+        items = value.split(":")
+        items = [_normalize_string_value(name, item, case) for item in items]
+        return ":".join(items)
+    elif "/" in value:
+        # Handle the special format scale*path: normalize the path and reconstruct
+        parsed = re.match(r"^([^*]+\*)(/[^/]+)*", value)
+        if parsed is not None and len(parsed.groups()) == 2:
+            items = list(parsed.groups())
+            items[1] = os.path.basename(items[1])
+            return "".join(items)
+
+        # File path, just return the basename unless it's a seq_maps.rc
+        # mapping (mapname or maptype)
+        if "mapname" not in name and "maptype" not in name:
+            return os.path.basename(value)
+        else:
+            return value
+    else:
+        return value
+
+
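The core normalization idea above can be sketched standalone. This is a hypothetical simplification (`normalize_paths` is not a CIME function, and it ignores the scale*path and mapname/maptype special cases): colon-separated fields are normalized independently, and file paths collapse to their basenames so runs with different install prefixes still compare as equal.

```python
import os

# Hypothetical simplified sketch of _normalize_string_value's path handling;
# not part of CIME.
def normalize_paths(value):
    if ":" in value:
        # Normalize each colon-separated field independently
        return ":".join(normalize_paths(item) for item in value.split(":"))
    if "/" in value:
        # File path: only the basename matters for comparison
        return os.path.basename(value)
    return value

print(normalize_paths("/projects/ccsm/inputdata/iceoptics_c080917.nc"))
```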
+###############################################################################
+def _compare_values(name, gold_value, comp_value, case):
+    ###############################################################################
+    """
+    Compare values for a specific variable in a namelist.
+
+    Returns comments
+
+    Note there will only be comments if values did not match
+    """
+    comments = ""
+    if type(gold_value) != type(comp_value):
+        comments += "  variable '{}' did not have expected type '{}', instead is type '{}'\n".format(
+            name, type(gold_value), type(comp_value)
+        )
+        return comments
+
+    if type(gold_value) is list:
+        # Note, list values remain order sensitive
+        for idx, gold_value_list_item in enumerate(gold_value):
+            if idx < len(comp_value):
+                comments += _compare_values(
+                    "{} list item {:d}".format(name, idx),
+                    gold_value_list_item,
+                    comp_value[idx],
+                    case,
+                )
+            else:
+                comments += "  list variable '{}' missing value {}\n".format(
+                    name, gold_value_list_item
+                )
+
+        if len(comp_value) > len(gold_value):
+            for comp_value_list_item in comp_value[len(gold_value) :]:
+                comments += "  list variable '{}' has extra value {}\n".format(
+                    name, comp_value_list_item
+                )
+
+    elif type(gold_value) is OrderedDict:
+        for key, gold_value_dict_item in gold_value.items():
+            if key in comp_value:
+                comments += _compare_values(
+                    "{} dict item {}".format(name, key),
+                    gold_value_dict_item,
+                    comp_value[key],
+                    case,
+                )
+            else:
+                comments += (
+                    "  dict variable '{}' missing key {} with value {}\n".format(
+                        name, key, gold_value_dict_item
+                    )
+                )
+
+        for key in comp_value:
+            if key not in gold_value:
+                comments += (
+                    "  dict variable '{}' has extra key {} with value {}\n".format(
+                        name, key, comp_value[key]
+                    )
+                )
+
+    else:
+        expect(
+            isinstance(gold_value, str),
+            "Unexpected type found: '{}'".format(type(gold_value)),
+        )
+        norm_gold_value = _normalize_string_value(name, gold_value, case)
+        norm_comp_value = _normalize_string_value(name, comp_value, case)
+
+        if norm_gold_value != norm_comp_value:
+            comments += "  BASE: {} = {}\n".format(name, norm_gold_value)
+            comments += "  COMP: {} = {}\n".format(name, norm_comp_value)
+
+    return comments
+
+
+###############################################################################
+def _compare_namelists(gold_namelists, comp_namelists, case):
+    ###############################################################################
+    """
+    Compare two namelists. Print diff information if any.
+    Returns comments
+    Note there will only be comments if the namelists were not an exact match
+
+    Expect args in form: {namelist -> {key -> value} }.
+      value can be an int, string, list, or dict
+
+    >>> teststr = '''&nml
+    ...   val = 'foo'
+    ...   aval = 'one','two', 'three'
+    ...   maval = 'one', 'two', 'three', 'four'
+    ...   dval = 'one -> two', 'three -> four'
+    ...   mdval = 'one -> two', 'three -> four', 'five -> six'
+    ...   nval = 1850
+    ... /
+    ... &nml2
+    ...   val2 = .false.
+    ... /
+    ... '''
+    >>> _compare_namelists(_parse_namelists(teststr.splitlines(), 'foo'), _parse_namelists(teststr.splitlines(), 'bar'), None)
+    ''
+    >>> teststr1 = '''&nml1
+    ...   val11 = 'foo'
+    ... /
+    ... &nml2
+    ...   val21 = 'foo'
+    ...   val22 = 'foo', 'bar', 'baz'
+    ...   val23 = 'baz'
+    ...   val24 = '1 -> 2', '2 -> 3', '3 -> 4'
+    ... /
+    ... &nml3
+    ...   val3 = .false.
+    ... /'''
+    >>> teststr2 = '''&nml01
+    ...   val11 = 'foo'
+    ... /
+    ... &nml2
+    ...   val21 = 'foo0'
+    ...   val22 = 'foo', 'bar0', 'baz'
+    ...   val230 = 'baz'
+    ...   val24 = '1 -> 20', '2 -> 3', '30 -> 4'
+    ... /
+    ... &nml3
+    ...   val3 = .false.
+    ... /'''
+    >>> comments = _compare_namelists(_parse_namelists(teststr1.splitlines(), 'foo'), _parse_namelists(teststr2.splitlines(), 'bar'), None)
+    >>> print(comments)
+    Missing namelist: nml1
+    Differences in namelist 'nml2':
+      BASE: val21 = 'foo'
+      COMP: val21 = 'foo0'
+      BASE: val22 list item 1 = 'bar'
+      COMP: val22 list item 1 = 'bar0'
+      missing variable: 'val23'
+      BASE: val24 dict item 1 = 2
+      COMP: val24 dict item 1 = 20
+      dict variable 'val24' missing key 3 with value 4
+      dict variable 'val24' has extra key 30 with value 4
+      found extra variable: 'val230'
+    Found extra namelist: nml01
+    <BLANKLINE>
+
+    >>> teststr1 = '''&rad_cnst_nl
+    ... icecldoptics           = 'mitchell'
+    ... logfile                = 'cpl.log.150514-001533'
+    ... case_name              = 'ERB.f19_g16.B1850C5.sandiatoss3_intel.C.150513-230221'
+    ... runid                  = 'FOO'
+    ... model_version          = 'cam5_3_36'
+    ... username               = 'jgfouca'
+    ... iceopticsfile          = '/projects/ccsm/inputdata/atm/cam/physprops/iceoptics_c080917.nc'
+    ... liqcldoptics           = 'gammadist'
+    ... liqopticsfile          = '/projects/ccsm/inputdata/atm/cam/physprops/F_nwvl200_mu20_lam50_res64_t298_c080428.nc'
+    ... mode_defs              = 'mam3_mode1:accum:=', 'A:num_a1:N:num_c1:num_mr:+',
+    ...   'A:so4_a1:N:so4_c1:sulfate:/projects/ccsm/inputdata/atm/cam/physprops/sulfate_rrtmg_c080918.nc:+', 'A:pom_a1:N:pom_c1:p-organic:/projects/ccsm/inputdata/atm/cam/physprops/ocpho_rrtmg_c101112.nc:+',
+    ...   'A:soa_a1:N:soa_c1:s-organic:/projects/ccsm/inputdata/atm/cam/physprops/ocphi_rrtmg_c100508.nc:+', 'A:bc_a1:N:bc_c1:black-c:/projects/ccsm/inputdata/atm/cam/physprops/bcpho_rrtmg_c100508.nc:+',
+    ...   'A:dst_a1:N:dst_c1:dust:/projects/ccsm/inputdata/atm/cam/physprops/dust4_rrtmg_c090521.nc:+', 'A:ncl_a1:N:ncl_c1:seasalt:/projects/ccsm/inputdata/atm/cam/physprops/ssam_rrtmg_c100508.nc',
+    ...   'mam3_mode2:aitken:=', 'A:num_a2:N:num_c2:num_mr:+',
+    ...   'A:so4_a2:N:so4_c2:sulfate:/projects/ccsm/inputdata/atm/cam/physprops/sulfate_rrtmg_c080918.nc:+', 'A:soa_a2:N:soa_c2:s-organic:/projects/ccsm/inputdata/atm/cam/physprops/ocphi_rrtmg_c100508.nc:+',
+    ...   'A:ncl_a2:N:ncl_c2:seasalt:/projects/ccsm/inputdata/atm/cam/physprops/ssam_rrtmg_c100508.nc', 'mam3_mode3:coarse:=',
+    ...   'A:num_a3:N:num_c3:num_mr:+', 'A:dst_a3:N:dst_c3:dust:/projects/ccsm/inputdata/atm/cam/physprops/dust4_rrtmg_c090521.nc:+',
+    ...   'A:ncl_a3:N:ncl_c3:seasalt:/projects/ccsm/inputdata/atm/cam/physprops/ssam_rrtmg_c100508.nc:+', 'A:so4_a3:N:so4_c3:sulfate:/projects/ccsm/inputdata/atm/cam/physprops/sulfate_rrtmg_c080918.nc'
+    ... rad_climate            = 'A:Q:H2O', 'N:O2:O2', 'N:CO2:CO2',
+    ...   'N:ozone:O3', 'N:N2O:N2O', 'N:CH4:CH4',
+    ...   'N:CFC11:CFC11', 'N:CFC12:CFC12', 'M:mam3_mode1:/projects/ccsm/inputdata/atm/cam/physprops/mam3_mode1_rrtmg_c110318.nc',
+    ...   'M:mam3_mode2:/projects/ccsm/inputdata/atm/cam/physprops/mam3_mode2_rrtmg_c110318.nc', 'M:mam3_mode3:/projects/ccsm/inputdata/atm/cam/physprops/mam3_mode3_rrtmg_c110318.nc'
+    ... /'''
+    >>> teststr2 = '''&rad_cnst_nl
+    ... icecldoptics           = 'mitchell'
+    ... logfile                = 'cpl.log.150514-2398745'
+    ... case_name              = 'ERB.f19_g16.B1850C5.sandiatoss3_intel.C.150513-1274213'
+    ... runid                  = 'BAR'
+    ... model_version          = 'cam5_3_36'
+    ... username               = 'hudson'
+    ... iceopticsfile          = '/something/else/inputdata/atm/cam/physprops/iceoptics_c080917.nc'
+    ... liqcldoptics           = 'gammadist'
+    ... liqopticsfile          = '/something/else/inputdata/atm/cam/physprops/F_nwvl200_mu20_lam50_res64_t298_c080428.nc'
+    ... mode_defs              = 'mam3_mode1:accum:=', 'A:num_a1:N:num_c1:num_mr:+',
+    ...   'A:so4_a1:N:so4_c1:sulfate:/something/else/inputdata/atm/cam/physprops/sulfate_rrtmg_c080918.nc:+', 'A:pom_a1:N:pom_c1:p-organic:/something/else/inputdata/atm/cam/physprops/ocpho_rrtmg_c101112.nc:+',
+    ...   'A:soa_a1:N:soa_c1:s-organic:/something/else/inputdata/atm/cam/physprops/ocphi_rrtmg_c100508.nc:+', 'A:bc_a1:N:bc_c1:black-c:/something/else/inputdata/atm/cam/physprops/bcpho_rrtmg_c100508.nc:+',
+    ...   'A:dst_a1:N:dst_c1:dust:/something/else/inputdata/atm/cam/physprops/dust4_rrtmg_c090521.nc:+', 'A:ncl_a1:N:ncl_c1:seasalt:/something/else/inputdata/atm/cam/physprops/ssam_rrtmg_c100508.nc',
+    ...   'mam3_mode2:aitken:=', 'A:num_a2:N:num_c2:num_mr:+',
+    ...   'A:so4_a2:N:so4_c2:sulfate:/something/else/inputdata/atm/cam/physprops/sulfate_rrtmg_c080918.nc:+', 'A:soa_a2:N:soa_c2:s-organic:/something/else/inputdata/atm/cam/physprops/ocphi_rrtmg_c100508.nc:+',
+    ...   'A:ncl_a2:N:ncl_c2:seasalt:/something/else/inputdata/atm/cam/physprops/ssam_rrtmg_c100508.nc', 'mam3_mode3:coarse:=',
+    ...   'A:num_a3:N:num_c3:num_mr:+', 'A:dst_a3:N:dst_c3:dust:/something/else/inputdata/atm/cam/physprops/dust4_rrtmg_c090521.nc:+',
+    ...   'A:ncl_a3:N:ncl_c3:seasalt:/something/else/inputdata/atm/cam/physprops/ssam_rrtmg_c100508.nc:+', 'A:so4_a3:N:so4_c3:sulfate:/something/else/inputdata/atm/cam/physprops/sulfate_rrtmg_c080918.nc'
+    ... rad_climate            = 'A:Q:H2O', 'N:O2:O2', 'N:CO2:CO2',
+    ...   'N:ozone:O3', 'N:N2O:N2O', 'N:CH4:CH4',
+    ...   'N:CFC11:CFC11', 'N:CFC12:CFC12', 'M:mam3_mode1:/something/else/inputdata/atm/cam/physprops/mam3_mode1_rrtmg_c110318.nc',
+    ...   'M:mam3_mode2:/something/else/inputdata/atm/cam/physprops/mam3_mode2_rrtmg_c110318.nc', 'M:mam3_mode3:/something/else/inputdata/atm/cam/physprops/mam3_mode3_rrtmg_c110318.nc'
+    ... /'''
+    >>> _compare_namelists(_parse_namelists(teststr1.splitlines(), 'foo'), _parse_namelists(teststr2.splitlines(), 'bar'), 'ERB.f19_g16.B1850C5.sandiatoss3_intel')
+    ''
+    >>> teststr1 = '''&nml
+    ... csw_specifier = 'DMS -> 1.0 * value.nc'
+    ... /'''
+    >>> _compare_namelists(_parse_namelists(teststr1.splitlines(), 'foo'),\
+    _parse_namelists(teststr1.splitlines(), 'foo'), "case")
+    ''
+    >>> teststr2 = '''&nml
+    ... csw_specifier = 'DMS -> 2.0 * value.nc'
+    ... /'''
+    >>> comments = _compare_namelists(_parse_namelists(teststr1.splitlines(), 'foo'),\
+    _parse_namelists(teststr2.splitlines(), 'foo'), "case")
+    >>> print(comments)
+      BASE: csw_specifier dict item DMS = 1.0*value.nc
+      COMP: csw_specifier dict item DMS = 2.0*value.nc
+    <BLANKLINE>
+    >>> teststr2 = '''&nml
+    ... csw_specifier = 'DMS -> 1.0 * other.nc'
+    ... /'''
+    >>> comments = _compare_namelists(_parse_namelists(teststr1.splitlines(), 'foo'),\
+    _parse_namelists(teststr2.splitlines(), 'foo'), "case")
+    >>> print(comments)
+      BASE: csw_specifier dict item DMS = 1.0*value.nc
+      COMP: csw_specifier dict item DMS = 1.0*other.nc
+    <BLANKLINE>
+    """
+    different_namelists = OrderedDict()
+    for namelist, gold_names in gold_namelists.items():
+        if namelist not in comp_namelists:
+            different_namelists[namelist] = ["Missing namelist: {}\n".format(namelist)]
+        else:
+            comp_names = comp_namelists[namelist]
+            for name, gold_value in gold_names.items():
+                if name not in comp_names:
+                    different_namelists.setdefault(namelist, []).append(
+                        "  missing variable: '{}'\n".format(name)
+                    )
+                else:
+                    comp_value = comp_names[name]
+                    comments = _compare_values(name, gold_value, comp_value, case)
+                    if comments != "":
+                        different_namelists.setdefault(namelist, []).append(comments)
+
+            for name in comp_names:
+                if name not in gold_names:
+                    different_namelists.setdefault(namelist, []).append(
+                        "  found extra variable: '{}'\n".format(name)
+                    )
+
+    for namelist in comp_namelists:
+        if namelist not in gold_namelists:
+            different_namelists[namelist] = [
+                "Found extra namelist: {}\n".format(namelist)
+            ]
+
+    comments = ""
+    for namelist, nlcomment in different_namelists.items():
+        if len(nlcomment) == 1:
+            comments += nlcomment[0]
+        else:
+            comments += "Differences in namelist '{}':\n".format(namelist)
+            comments += "".join(nlcomment)
+
+    return comments
+
+
+###############################################################################
+
+[docs] +def compare_namelist_files(gold_file, compare_file, case=None): + ############################################################################### + """ + Returns (is_match, comments) + """ + expect(os.path.exists(gold_file), "File not found: {}".format(gold_file)) + expect(os.path.exists(compare_file), "File not found: {}".format(compare_file)) + + gold_namelists = _parse_namelists(open(gold_file, "r").readlines(), gold_file) + comp_namelists = _parse_namelists(open(compare_file, "r").readlines(), compare_file) + comments = _compare_namelists(gold_namelists, comp_namelists, case) + return comments == "", comments
+ + + +############################################################################### +
+[docs] +def is_namelist_file(file_path): + ############################################################################### + try: + compare_namelist_files(file_path, file_path) + except CIMEError as e: + assert "does not appear to be a namelist file" in str(e), str(e) + return False + return True
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/compare_test_results.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/compare_test_results.html new file mode 100644 index 00000000000..a9eaefde96a --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/compare_test_results.html @@ -0,0 +1,356 @@ + + + + + + CIME.compare_test_results — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.compare_test_results

+import CIME.compare_namelists, CIME.simple_compare
+from CIME.utils import append_status, EnvironmentContext, parse_test_name
+from CIME.test_status import *
+from CIME.hist_utils import compare_baseline, get_ts_synopsis
+from CIME.case import Case
+from CIME.test_utils import get_test_status_files
+
+import os, logging
+
+###############################################################################
+
+[docs] +def append_status_cprnc_log(msg, logfile_name, test_dir): + ############################################################################### + try: + append_status(msg, logfile_name, caseroot=test_dir) + except IOError: + pass
+ + + +############################################################################### +
+[docs] +def compare_namelists(case, baseline_name, baseline_root, logfile_name): + ############################################################################### + log_lvl = logging.getLogger().getEffectiveLevel() + logging.disable(logging.CRITICAL) + success = case.case_cmpgen_namelists( + compare=True, + compare_name=baseline_name, + baseline_root=baseline_root, + logfile_name=logfile_name, + ) + logging.getLogger().setLevel(log_lvl) + return success
+ + + +############################################################################### +
+[docs] +def compare_history(case, baseline_name, baseline_root, log_id): + ############################################################################### + real_user = case.get_value("REALUSER") + with EnvironmentContext(USER=real_user): + baseline_full_dir = os.path.join( + baseline_root, baseline_name, case.get_value("CASEBASEID") + ) + + outfile_suffix = "{}.{}".format(baseline_name, log_id) + try: + result, comments = compare_baseline( + case, baseline_dir=baseline_full_dir, outfile_suffix=outfile_suffix + ) + except IOError: + result, comments = compare_baseline( + case, baseline_dir=baseline_full_dir, outfile_suffix=None + ) + + return result, comments
+ + + +############################################################################### +
+[docs] +def compare_test_results( + baseline_name, + baseline_root, + test_root, + compiler, + test_id=None, + compare_tests=None, + namelists_only=False, + hist_only=False, +): + ############################################################################### + """ + Compares with baselines for all matching tests + + Outputs results for each test to stdout (one line per test); possible status + codes are: PASS, FAIL, SKIP. (A SKIP denotes a test that did not make it to + the run phase or a test for which the run phase did not pass: we skip + baseline comparisons in this case.) + + In addition, creates files named compare.log.BASELINE_NAME.TIMESTAMP in each + test directory, which contain more detailed output. Also creates + *.cprnc.out.BASELINE_NAME.TIMESTAMP files in each run directory. + + Returns True if all tests generated either PASS or SKIP results, False if + there was at least one FAIL result. + """ + test_status_files = get_test_status_files(test_root, compiler, test_id=test_id) + + # ID to use in the log file names, to avoid file name collisions with + # earlier files that may exist. 
+ log_id = CIME.utils.get_timestamp() + + all_pass_or_skip = True + + compare_tests_counts = None + if compare_tests: + compare_tests_counts = dict( + [(compare_test, 0) for compare_test in compare_tests] + ) + + for test_status_file in test_status_files: + test_dir = os.path.dirname(test_status_file) + ts = TestStatus(test_dir=test_dir) + test_name = ts.get_name() + testopts = parse_test_name(test_name)[1] + testopts = [] if testopts is None else testopts + build_only = "B" in testopts + + if not compare_tests or CIME.utils.match_any(test_name, compare_tests_counts): + + if not hist_only: + nl_compare_result = None + nl_compare_comment = "" + nl_result = ts.get_status(SETUP_PHASE) + if nl_result is None: + nl_compare_result = "SKIP" + nl_compare_comment = "Test did not make it to setup phase" + nl_do_compare = False + else: + nl_do_compare = True + else: + nl_do_compare = False + + detailed_comments = "" + if not namelists_only and not build_only: + compare_result = None + compare_comment = "" + run_result = ts.get_status(RUN_PHASE) + if run_result is None: + compare_result = "SKIP" + compare_comment = "Test did not make it to run phase" + do_compare = False + elif run_result != TEST_PASS_STATUS: + compare_result = "SKIP" + compare_comment = "Run phase did not pass" + do_compare = False + else: + do_compare = True + else: + do_compare = False + + with Case(test_dir) as case: + if baseline_name is None: + baseline_name = case.get_value("BASELINE_NAME_CMP") + if not baseline_name: + baseline_name = CIME.utils.get_current_branch( + repo=CIME.utils.get_cime_root() + ) + + if baseline_root is None: + baseline_root = case.get_value("BASELINE_ROOT") + + logfile_name = "compare.log.{}.{}".format( + baseline_name.replace("/", "_"), log_id + ) + + append_status_cprnc_log( + "Comparing against baseline with compare_test_results:\n" + "Baseline: {}\n In baseline_root: {}".format( + baseline_name, baseline_root + ), + logfile_name, + test_dir, + ) + + if nl_do_compare or 
do_compare: + if nl_do_compare: + nl_success = compare_namelists( + case, baseline_name, baseline_root, logfile_name + ) + if nl_success: + nl_compare_result = TEST_PASS_STATUS + nl_compare_comment = "" + else: + nl_compare_result = TEST_FAIL_STATUS + nl_compare_comment = "See {}/{}".format(test_dir, logfile_name) + all_pass_or_skip = False + + if do_compare: + success, detailed_comments = compare_history( + case, baseline_name, baseline_root, log_id + ) + if success: + compare_result = TEST_PASS_STATUS + else: + compare_result = TEST_FAIL_STATUS + all_pass_or_skip = False + + compare_comment = get_ts_synopsis(detailed_comments) + + brief_result = "" + if not hist_only: + brief_result += "{} {} {} {}\n".format( + nl_compare_result, test_name, NAMELIST_PHASE, nl_compare_comment + ) + + if not namelists_only: + brief_result += "{} {} {}".format( + compare_result, test_name, BASELINE_PHASE + ) + if compare_comment: + brief_result += " {}".format(compare_comment) + brief_result += "\n" + + print(brief_result) + + append_status_cprnc_log(brief_result, logfile_name, test_dir) + + if detailed_comments: + append_status_cprnc_log( + "Detailed comments:\n" + detailed_comments, logfile_name, test_dir + ) + + # Emit a warning if items in compare_tests did not match anything + if compare_tests: + for compare_test, compare_count in compare_tests_counts.items(): + if compare_count == 0: + logging.warning( + """ +compare test arg '{}' did not match any tests in test_root {} with +compiler {} and test_id {}. It's possible that one of these arguments +had a mistake (likely compiler or testid).""".format( + compare_test, test_root, compiler, test_id + ) + ) + + return all_pass_or_skip
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/config.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/config.html new file mode 100644 index 00000000000..80e750c5a4a --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/config.html @@ -0,0 +1,446 @@ + + + + + + CIME.config — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.config

+import sys
+import glob
+import logging
+import importlib.machinery
+import importlib.util
+
+from CIME import utils
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +class ConfigBase: + def __new__(cls): + if not hasattr(cls, "_instance"): + cls._instance = super(ConfigBase, cls).__new__(cls) + + return cls._instance + + def __init__(self): + self._attribute_config = {} + + @property + def loaded(self): + return getattr(self, "_loaded", False) + +
+[docs] + @classmethod + def instance(cls): + """Access singleton. + + Explicit way to access singleton, same as calling constructor. + """ + return cls()
+ + +
+[docs] + @classmethod + def load(cls, customize_path): + obj = cls() + + logger.debug("Searching %r for files to load", customize_path) + + customize_files = glob.glob(f"{customize_path}/**/*.py", recursive=True) + + # filter out any tests + customize_files = [ + x for x in customize_files if "tests" not in x and "conftest" not in x + ] + + customize_module_spec = importlib.machinery.ModuleSpec("cime_customize", None) + + customize_module = importlib.util.module_from_spec(customize_module_spec) + + sys.modules["CIME.customize"] = customize_module + + for x in sorted(customize_files): + obj._load_file(x, customize_module) + + setattr(obj, "_loaded", True) + + return obj
+ + + def _load_file(self, file_path, customize_module): + logger.debug("Loading file %r", file_path) + + raw_config = utils.import_from_file("raw_config", file_path) + + # filter user-defined variables and functions + user_defined = [x for x in dir(raw_config) if not x.endswith("__")] + + # set values on this object, will overwrite existing + for x in user_defined: + try: + value = getattr(raw_config, x) + except AttributeError: + # should never hit this + logger.fatal("Attribute %r missing on object", x) + + sys.exit(1) + else: + setattr(customize_module, x, value) + + self._set_attribute(x, value) + + def _set_attribute(self, name, value, desc=None): + if hasattr(self, name): + logger.debug("Overwriting %r attribute", name) + + logger.debug("Setting attribute %r with value %r", name, value) + + setattr(self, name, value) + + self._attribute_config[name] = { + "desc": desc, + "default": value, + } + +
+[docs] + def print_rst_table(self): + max_variable = max([len(x) for x in self._attribute_config.keys()]) + max_default = max( + [len(str(x["default"])) for x in self._attribute_config.values()] + ) + max_type = max( + [len(type(x["default"]).__name__) for x in self._attribute_config.values()] + ) + max_desc = max([len(x["desc"]) for x in self._attribute_config.values()]) + + divider_row = ( + f"{'='*max_variable} {'='*max_default} {'='*max_type} {'='*max_desc}" + ) + + rows = [ + divider_row, + f"Variable{' '*(max_variable-8)} Default{' '*(max_default-7)} Type{' '*(max_type-4)} Description{' '*(max_desc-11)}", + divider_row, + ] + + for variable, value in sorted( + self._attribute_config.items(), key=lambda x: x[0] + ): + variable_fill = max_variable - len(variable) + default_fill = max_default - len(str(value["default"])) + type_fill = max_type - len(type(value["default"]).__name__) + + rows.append( + f"{variable}{' '*variable_fill} {value['default']}{' '*default_fill} {type(value['default']).__name__}{' '*type_fill} {value['desc']}" + ) + + rows.append(divider_row) + + print("\n".join(rows))
+
+ + + +
+[docs] +class Config(ConfigBase): + def __init__(self): + super().__init__() + + if self.loaded: + return + + self._set_attribute( + "additional_archive_components", + ("drv", "dart"), + desc="Additional components to archive.", + ) + self._set_attribute( + "verbose_run_phase", + False, + desc="If set to `True` then after a successful SystemTests run phase the elapsed time is recorded to BASELINE_ROOT; on a failure the test is checked against the previous run and potential breaking merges are listed in the testlog.", + ) + self._set_attribute( + "baseline_store_teststatus", + True, + desc="If set to `True` and GENERATE_BASELINE is set then a teststatus.log is created in the case's baseline.", + ) + self._set_attribute( + "common_sharedlibroot", + True, + desc="If set to `True` then SHAREDLIBROOT is set for the case and SystemTests will only build the shared libs once.", + ) + self._set_attribute( + "create_test_flag_mode", + "cesm", + desc="Sets the flag mode for the `create_test` script. 
When set to `cesm`, the `-c` flag will compare baselines against a given directory.", + ) + self._set_attribute( + "use_kokkos", + False, + desc="If set to `True` and CAM_TARGET is `preqx_kokkos`, `theta-l` or `theta-l_kokkos` then kokkos is built with the shared libs.", + ) + self._set_attribute( + "shared_clm_component", + True, + desc="If set to `True` then the `clm` land component is built as a shared lib.", + ) + self._set_attribute( + "ufs_alternative_config", + False, + desc="If set to `True` and UFS_DRIVER is set to `nems` then model config dir is set to `$CIMEROOT/../src/model/NEMS/cime/cime_config`.", + ) + self._set_attribute( + "enable_smp", + True, + desc="If set to `True` then `SMP=` is added to model compile command.", + ) + self._set_attribute( + "build_model_use_cmake", + False, + desc="If set to `True` the model is built using CMake, otherwise Make is used.", + ) + self._set_attribute( + "build_cime_component_lib", + True, + desc="If set to `True` then `Filepath`, `CIME_cppdefs` and `CCSM_cppdefs` directories are copied from CASEBUILD directory to BUILDROOT in order to build CIME's internal components.", + ) + self._set_attribute( + "default_short_term_archiving", + True, + desc="If set to `True` and the case is not a test then DOUT_S is set to True and TIMER_LEVEL is set to 4.", + ) + # TODO combine copy_e3sm_tools and copy_cesm_tools into a single variable + self._set_attribute( + "copy_e3sm_tools", + False, + desc="If set to `True` then E3SM specific tools are copied into the case directory.", + ) + self._set_attribute( + "copy_cesm_tools", + True, + desc="If set to `True` then CESM specific tools are copied into the case directory.", + ) + self._set_attribute( + "copy_cism_source_mods", + True, + desc="If set to `True` then `$CASEROOT/SourceMods/src.cism/source_cism` is created and a README is written to the directory.", + ) + self._set_attribute( + "make_case_run_batch_script", + False, + desc="If set to `True` and case is not a test then 
`case.run.sh` is created in case directory from `$MACHDIR/template.case.run.sh`.", + ) + self._set_attribute( + "case_setup_generate_namelist", + False, + desc="If set to `True` and case is a test then namelists are created during `case.setup`.", + ) + self._set_attribute( + "create_bless_log", + False, + desc="If set to `True` and comparing a test to baselines the most recent bless is added to comments.", + ) + self._set_attribute( + "allow_unsupported", + True, + desc="If set to `True` then unsupported compsets and resolutions are allowed.", + ) + # set for ufs + self._set_attribute( + "check_machine_name_from_test_name", + True, + desc="If set to `True` then the TestScheduler will use testlists to parse for a list of tests.", + ) + self._set_attribute( + "sort_tests", + False, + desc="If set to `True` then the TestScheduler will sort tests by runtime.", + ) + self._set_attribute( + "calculate_mode_build_cost", + False, + desc="If set to `True` then the TestScheduler will set the number of processors for building the model to min(16, (($GMAKE_J * 2) / 3) + 1), otherwise it's set to 4.", + ) + self._set_attribute( + "share_exes", + False, + desc="If set to `True` then the TestScheduler will share exes between tests.", + ) + + self._set_attribute( + "serialize_sharedlib_builds", + True, + desc="If set to `True` then the TestScheduler will use `proc_pool + 1` processors to build shared libraries, otherwise a single processor is used.", + ) + + self._set_attribute( + "use_testreporter_template", + True, + desc="If set to `True` then the TestScheduler will create `testreporter` in $CIME_OUTPUT_ROOT.", + ) + + self._set_attribute( + "check_invalid_args", + True, + desc="If set to `True` then script arguments are checked for validity.", + ) + self._set_attribute( + "test_mode", + "cesm", + desc="Sets the testing mode; this changes various configurations for CIME's unit and system tests.", + ) + self._set_attribute( + "xml_component_key", + "COMP_ROOT_DIR_{}", + desc="The 
string template used as the key to query the XML system to find a component's root directory, e.g. the template `COMP_ROOT_DIR_{}` and component `LND` becomes `COMP_ROOT_DIR_LND`.", + ) + self._set_attribute( + "set_comp_root_dir_cpl", + True, + desc="If set to `True` then COMP_ROOT_DIR_CPL is set for the case.", + ) + self._set_attribute( + "use_nems_comp_root_dir", + False, + desc="If set to `True` then COMP_ROOT_DIR_CPL is set using UFS_DRIVER if defined.", + ) + self._set_attribute( + "test_custom_project_machine", + "melvin", + desc="Sets the machine name to use when testing a machine with no PROJECT.", + ) + self._set_attribute( + "driver_default", "nuopc", desc="Sets the default driver for the model." + ) + self._set_attribute( + "driver_choices", + ("mct", "nuopc"), + desc="Sets the available driver choices for the model.", + ) + self._set_attribute( + "mct_path", + "{srcroot}/libraries/mct", + desc="Sets the path to the mct library.", + )
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/cs_status.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/cs_status.html new file mode 100644 index 00000000000..5490fbdcddb --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/cs_status.html @@ -0,0 +1,260 @@ + + + + + + CIME.cs_status — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.cs_status

+"""
+Implementation of the cs.status script, which prints the status of all
+of the tests in one or more test suites
+"""
+
+from __future__ import print_function
+from CIME.XML.standard_module_setup import *
+from CIME.XML.expected_fails_file import ExpectedFailsFile
+from CIME.test_status import TestStatus, SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS
+import os
+import sys
+from collections import defaultdict
+
+
+
+[docs] +def cs_status( + test_paths, + summary=False, + fails_only=False, + count_fails_phase_list=None, + check_throughput=False, + check_memory=False, + expected_fails_filepath=None, + force_rebuild=False, + out=sys.stdout, +): + """Print the test statuses of all tests in test_paths. The default + is to print to stdout, but this can be overridden with the 'out' + argument. + + If summary is True, then only the overall status of each test is printed + + If fails_only is True, then only test failures are printed (this + includes PENDs as well as FAILs). + + If count_fails_phase_list is provided, it should be a list of phases + (from the phases given by test_status.ALL_PHASES). For each phase in + this list: do not give line-by-line output; instead, just report the + total number of tests that have not PASSed this phase (this includes + PENDs and FAILs). (This is typically used with the fails_only + option, but it can also be used without that option.) + + If expected_fails_filepath is provided, it should be a string giving + the full path to a file listing expected failures for this test + suite. Expected failures are then labeled as such in the output. 
+ """ + expect(not (summary and fails_only), "Cannot have both summary and fails_only") + expect( + not (summary and count_fails_phase_list), + "Cannot have both summary and count_fails_phase_list", + ) + if count_fails_phase_list is None: + count_fails_phase_list = [] + non_pass_counts = dict.fromkeys(count_fails_phase_list, 0) + xfails = _get_xfails(expected_fails_filepath) + test_id_output = defaultdict(str) + test_id_counts = defaultdict(int) + for test_path in test_paths: + test_dir = os.path.dirname(test_path) + ts = TestStatus(test_dir=test_dir) + + if force_rebuild: + with ts: + ts.set_status(SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS) + + test_id = os.path.basename(test_dir).split(".")[-1] + if summary: + output = _overall_output( + ts, " {status} {test_name}\n", check_throughput, check_memory + ) + else: + if fails_only: + output = "" + else: + output = _overall_output( + ts, + " {test_name} (Overall: {status}) details:\n", + check_throughput, + check_memory, + ) + output += ts.phase_statuses_dump( + prefix=" ", + skip_passes=fails_only, + skip_phase_list=count_fails_phase_list, + xfails=xfails.get(ts.get_name()), + ) + if count_fails_phase_list: + ts.increment_non_pass_counts(non_pass_counts) + + test_id_output[test_id] += output + test_id_counts[test_id] += 1 + + for test_id in sorted(test_id_output): + count = test_id_counts[test_id] + print( + "{}: {} test{}".format(test_id, count, "s" if count > 1 else ""), file=out + ) + print(test_id_output[test_id], file=out) + print(" ", file=out) + + if count_fails_phase_list: + print(72 * "=", file=out) + print("Non-PASS results for select phases:", file=out) + for phase in count_fails_phase_list: + print("{} non-passes: {}".format(phase, non_pass_counts[phase]), file=out)
+ + + +def _get_xfails(expected_fails_filepath): + """Returns a dictionary of ExpectedFails objects, where the keys are test names + + expected_fails_filepath should be either a string giving the path to + the file containing expected failures, or None. If None, then this + returns an empty dictionary (as if expected_fails_filepath were + pointing to a file with no expected failures listed). + """ + if expected_fails_filepath is not None: + expected_fails_file = ExpectedFailsFile(expected_fails_filepath) + xfails = expected_fails_file.get_expected_fails() + else: + xfails = {} + return xfails + + +def _overall_output(ts, format_str, check_throughput, check_memory): + """Returns a string giving the overall test status + + Args: + ts: TestStatus object + format_str (string): string giving the format of the output; must + contain place-holders for status and test_name + """ + test_name = ts.get_name() + status = ts.get_overall_test_status( + check_throughput=check_throughput, + check_memory=check_memory, + )[0] + return format_str.format(status=status, test_name=test_name) +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/cs_status_creator.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/cs_status_creator.html new file mode 100644 index 00000000000..40f537adea0 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/cs_status_creator.html @@ -0,0 +1,172 @@ + + + + + + CIME.cs_status_creator — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.cs_status_creator

+"""
+Creates a test suite-specific cs.status file from a template
+"""
+
+from CIME.XML.standard_module_setup import *
+import CIME.utils
+import os
+import stat
+
+
+
+[docs] +def create_cs_status(test_root, test_id, extra_args="", filename=None): + """Create a test suite-specific cs.status file from the template + + Arguments: + test_root (string): path to test root; the file will be put here. If + this directory doesn't exist, it is created. + test_id (string): test id for this test suite. This can contain + shell wildcards if you want this one cs.status file to work + across multiple test suites. However, be careful not to make + this too general: for example, ending this with '*' will pick up + the *.ref1 directories for ERI and other tests, which is NOT + what you want. + extra_args (string): extra arguments to the cs.status command + (If there are multiple arguments, these should be in a space-delimited string.) + filename (string): name of the generated cs.status file. If not + given, this will be built from the test_id. + """ + cime_root = CIME.utils.get_cime_root() + tools_path = os.path.join(cime_root, "CIME", "Tools") + template_path = CIME.utils.get_template_path() + template_file = os.path.join(template_path, "cs.status.template") + template = open(template_file, "r").read() + template = ( + template.replace("<PATH>", tools_path) + .replace("<EXTRA_ARGS>", extra_args) + .replace("<TESTID>", test_id) + .replace("<TESTROOT>", test_root) + ) + if not os.path.exists(test_root): + os.makedirs(test_root) + if filename is None: + filename = "cs.status.{}".format(test_id) + cs_status_file = os.path.join(test_root, filename) + with open(cs_status_file, "w") as fd: + fd.write(template) + os.chmod( + cs_status_file, os.stat(cs_status_file).st_mode | stat.S_IXUSR | stat.S_IXGRP + )
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/date.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/date.html new file mode 100644 index 00000000000..899547ecab4 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/date.html @@ -0,0 +1,440 @@ + + + + + + CIME.date — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.date

+import re
+from CIME.XML.standard_module_setup import *
+
+logger = logging.getLogger(__name__)
+###############################################################################
+
+[docs] +def get_file_date(filename): + ############################################################################### + """ + Returns the date associated with the filename as a date object representing the correct date + Formats supported: + "%Y-%m-%d_%h.%M.%s" + "%Y-%m-%d_%05s" + "%Y-%m-%d-%05s" + "%Y-%m-%d" + "%Y-%m" + "%Y.%m" + + >>> get_file_date("./ne4np4_oQU240.cam.r.0001-01-06-00435.nc") + date(1, 1, 6, 0, 7, 15) + >>> get_file_date("./ne4np4_oQU240.cam.r.0010-1-06_00435.nc") + date(10, 1, 6, 0, 7, 15) + >>> get_file_date("./ne4np4_oQU240.cam.r.0010-10.nc") + date(10, 10, 1, 0, 0, 0) + >>> get_file_date("0064-3-8_10.20.30.nc") + date(64, 3, 8, 10, 20, 30) + >>> get_file_date("0140-3-5") + date(140, 3, 5, 0, 0, 0) + >>> get_file_date("0140-3") + date(140, 3, 1, 0, 0, 0) + >>> get_file_date("0140.3") + date(140, 3, 1, 0, 0, 0) + """ + + # + # TODO: Add these to config_archive.xml, instead of here + # Note these must be in order of most specific to least + # so that lesser specificities aren't used to parse greater ones + re_formats = [ + r"[0-9]*[0-9]{4}-[0-9]{1,2}-[0-9]{1,2}_[0-9]{1,2}\.[0-9]{1,2}\.[0-9]{1,2}", # [yy...]yyyy-mm-dd_hh.MM.ss + r"[0-9]*[0-9]{4}-[0-9]{1,2}-[0-9]{1,2}[\-_][0-9]{1,5}", # [yy...]yyyy-mm-dd_sssss + r"[0-9]*[0-9]{4}-[0-9]{1,2}-[0-9]{1,2}", # [yy...]yyyy-mm-dd + r"[0-9]*[0-9]{4}[\-\.][0-9]{1,2}", # [yy...]yyyy-mm + ] + + for re_str in re_formats: + match = re.search(re_str, filename) + if match is None: + continue + date_str = match.group() + date_tuple = [int(unit) for unit in re.split(r"-|_|\.", date_str)] + year = date_tuple[0] + month = date_tuple[1] + day = 1 + second = 0 + if len(date_tuple) > 2: + day = date_tuple[2] + if len(date_tuple) == 4: + second = date_tuple[3] + elif len(date_tuple) == 6: + # Create a date object with arbitrary year, month, day, but the correct time of day + # Then use _get_day_second to get the time of day in seconds + second = date.hms_to_second( + hour=date_tuple[3], minute=date_tuple[4], 
second=date_tuple[5] + ) + return date(year, month, day, 0, 0, second) + + # Not a valid filename date format + logger.debug("{} is a filename without a supported date!".format(filename)) + return None
+ + + +
+[docs] +class date: + """ + Simple struct for holding dates and the time of day and performing comparisons + + Difference in Hour, Minute, or Second + >>> date(4, 5, 6, 9) == date(4, 5, 6, 8) + False + >>> date(4, 5, 6, 9) != date(4, 5, 6, 8) + True + >>> date(4, 5, 6, 9) < date(4, 5, 6, 8) + False + >>> date(4, 5, 6, 9) <= date(4, 5, 6, 8) + False + >>> date(4, 5, 6, 9) >= date(4, 5, 6, 8) + True + >>> date(4, 5, 6, 9) > date(4, 5, 6, 8) + True + + >>> date(4, 5, 6, 4) == date(4, 5, 6, 8) + False + >>> date(4, 5, 6, 4) != date(4, 5, 6, 8) + True + >>> date(4, 5, 6, 4) < date(4, 5, 6, 8) + True + >>> date(4, 5, 6, 4) <= date(4, 5, 6, 8) + True + >>> date(4, 5, 6, 4) >= date(4, 5, 6, 8) + False + >>> date(4, 5, 6, 4) > date(4, 5, 6, 8) + False + + Difference in Day + >>> date(4, 5, 8, 8) == date(4, 5, 6, 8) + False + >>> date(4, 5, 8, 8) != date(4, 5, 6, 8) + True + >>> date(4, 5, 8, 8) < date(4, 5, 6, 8) + False + >>> date(4, 5, 8, 8) <= date(4, 5, 6, 8) + False + >>> date(4, 5, 8, 8) >= date(4, 5, 6, 8) + True + >>> date(4, 5, 8, 8) > date(4, 5, 6, 8) + True + + >>> date(4, 5, 5, 8) == date(4, 5, 6, 8) + False + >>> date(4, 5, 5, 8) != date(4, 5, 6, 8) + True + >>> date(4, 5, 5, 8) < date(4, 5, 6, 8) + True + >>> date(4, 5, 5, 8) <= date(4, 5, 6, 8) + True + >>> date(4, 5, 5, 8) >= date(4, 5, 6, 8) + False + >>> date(4, 5, 5, 8) > date(4, 5, 6, 8) + False + + Difference in Month + >>> date(4, 6, 6, 8) == date(4, 5, 6, 8) + False + >>> date(4, 6, 6, 8) != date(4, 5, 6, 8) + True + >>> date(4, 6, 6, 8) < date(4, 5, 6, 8) + False + >>> date(4, 6, 6, 8) <= date(4, 5, 6, 8) + False + >>> date(4, 6, 6, 8) >= date(4, 5, 6, 8) + True + >>> date(4, 6, 6, 8) > date(4, 5, 6, 8) + True + + >>> date(4, 4, 6, 8) == date(4, 5, 6, 8) + False + >>> date(4, 4, 6, 8) != date(4, 5, 6, 8) + True + >>> date(4, 4, 6, 8) < date(4, 5, 6, 8) + True + >>> date(4, 4, 6, 8) <= date(4, 5, 6, 8) + True + >>> date(4, 4, 6, 8) >= date(4, 5, 6, 8) + False + >>> date(4, 4, 6, 8) > date(4, 5, 6, 8) 
+ False + + Difference in Year + >>> date(5, 5, 6, 8) == date(4, 5, 6, 8) + False + >>> date(5, 5, 6, 8) != date(4, 5, 6, 8) + True + >>> date(5, 5, 6, 8) < date(4, 5, 6, 8) + False + >>> date(5, 5, 6, 8) <= date(4, 5, 6, 8) + False + >>> date(5, 5, 6, 8) >= date(4, 5, 6, 8) + True + >>> date(5, 5, 6, 8) > date(4, 5, 6, 8) + True + + >>> date(3, 5, 6, 8) == date(4, 5, 6, 8) + False + >>> date(3, 5, 6, 8) != date(4, 5, 6, 8) + True + >>> date(3, 5, 6, 8) < date(4, 5, 6, 8) + True + >>> date(3, 5, 6, 8) <= date(4, 5, 6, 8) + True + >>> date(3, 5, 6, 8) >= date(4, 5, 6, 8) + False + >>> date(3, 5, 6, 8) > date(4, 5, 6, 8) + False + """ + +
+[docs] + @staticmethod + def hms_to_second(hour, minute, second): + _SECONDS_PER_HOUR = 3600 + _SECONDS_PER_MINUTE = 60 + return hour * _SECONDS_PER_HOUR + minute * _SECONDS_PER_MINUTE + second
+ + +
+[docs] + @staticmethod + def second_to_hms(second): + _SECONDS_PER_HOUR = 3600 + _SECONDS_PER_MINUTE = 60 + return { + "hour": second // _SECONDS_PER_HOUR, + "minute": (second % _SECONDS_PER_HOUR) // _SECONDS_PER_MINUTE, + "second": second % _SECONDS_PER_MINUTE, + }
+ + + def __init__(self, year=1, month=1, day=1, hour=0, minute=0, second=0): + self._year = year + self._month = month + self._day = day + self._second = self.hms_to_second(hour, minute, second) + + def __str__(self): + """ + >>> str(date(4, 5, 7, second=64)) + 'date(4, 5, 7, 0, 1, 4)' + """ + fmt_str = "date({year:d}, {month:d}, {day:d}, {hour:d}, {minute:d}, {second:d})" + return fmt_str.format( + year=self.year(), + month=self.month(), + day=self.day(), + hour=self.hour(), + minute=self.minute(), + second=self.second(), + ) + +
+[docs] + def year(self): + return self._year
+ + +
+[docs] + def month(self): + return self._month
+ + +
+[docs] + def day(self): + return self._day
+ + +
+[docs] + def hour(self): + return self.second_to_hms(self._second)["hour"]
+ + +
+[docs] + def minute(self): + return self.second_to_hms(self._second)["minute"]
+ + +
+[docs] + def second(self): + return self.second_to_hms(self._second)["second"]
+ + +
+[docs] + def second_of_day(self): + return self._second
+ + + def __repr__(self): + return str(self) + + def __eq__(self, other): + return ( + (self.year() == other.year()) + and (self.month() == other.month()) + and (self.day() == other.day()) + and (self.second_of_day() == other.second_of_day()) + ) + + def __ne__(self, other): + return not (self == other) + + def __lt__(self, other): + if self.year() < other.year(): + return True + elif self.year() > other.year(): + return False + # self.year == other.year + if self.month() < other.month(): + return True + elif self.month() > other.month(): + return False + # self.month = other.month + if self.day() < other.day(): + return True + elif self.day() > other.day(): + return False + # self.day = other.day + if self.second_of_day() < other.second_of_day(): + return True + else: + # the dates are equal + return False + + def __le__(self, other): + return (self < other) or (self == other) + + def __ge__(self, other): + return not (self < other) + + def __gt__(self, other): + return not (self <= other)
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/expected_fails.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/expected_fails.html new file mode 100644 index 00000000000..1f3c8439899 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/expected_fails.html @@ -0,0 +1,174 @@ + + + + + + CIME.expected_fails — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.expected_fails

+"""
+Contains the definition of a class to hold information on expected failures for a single test
+"""
+
+from CIME.XML.standard_module_setup import *
+
+EXPECTED_FAILURE_COMMENT = "(EXPECTED FAILURE)"
+UNEXPECTED_FAILURE_COMMENT_START = "(UNEXPECTED"  # There will be some additional text after this, before the end parentheses
+
+
+
+[docs] +class ExpectedFails(object): + def __init__(self): + """Initialize an empty ExpectedFails object""" + self._fails = {} + + def __eq__(self, rhs): + expect(isinstance(rhs, ExpectedFails), "Wrong type") + return self._fails == rhs._fails # pylint: disable=protected-access + + def __ne__(self, rhs): + result = self.__eq__(rhs) + return not result + + def __repr__(self): + return repr(self._fails) + +
+[docs] + def add_failure(self, phase, expected_status): + """Add an expected failure to the list""" + expect( + phase not in self._fails, "Phase {} already present in list".format(phase) + ) + self._fails[phase] = expected_status
+ + +
+[docs] + def expected_fails_comment(self, phase, status): + """Returns a string giving the expected fails comment for this phase and status""" + if phase not in self._fails: + return "" + + if self._fails[phase] == status: + return EXPECTED_FAILURE_COMMENT + else: + return "{}: expected {})".format( + UNEXPECTED_FAILURE_COMMENT_START, self._fails[phase] + )
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/get_tests.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/get_tests.html new file mode 100644 index 00000000000..20a9c1721a1 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/get_tests.html @@ -0,0 +1,646 @@ + + + + + + CIME.get_tests — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.get_tests

+import CIME.utils
+from CIME.utils import expect, convert_to_seconds, parse_test_name, get_cime_root
+from CIME.XML.machines import Machines
+import sys, os
+
+# Expect that, if a model wants to use python-based test lists, they will have a file
+# $model/cime_config/tests.py , containing a test dictionary called _TESTS. Currently,
+# only E3SM is using this feature.
+
+sys.path.insert(0, os.path.join(get_cime_root(), "../cime_config"))
+_ALL_TESTS = {}
+try:
+    from tests import _TESTS  # pylint: disable=import-error
+
+    _ALL_TESTS.update(_TESTS)
+except ImportError:
+    pass
+
+# Here are the tests belonging to cime suites. Format for individual tests is
+# <test>.<grid>.<compset>[.<testmod>]
+#
+# suite_name : {
+#     "inherit" : (suite1, suite2, ...), # Optional. Suites to inherit tests from. Default is None. Tuple, list, or str.
+#     "time"    : "HH:MM:SS",            # Optional. Recommended upper-limit on test time.
+#     "share"   : True|False,            # Optional. If True, all tests in this suite share a build. Default is False.
+#     "perf"    : True|False,            # Optional. If True, all tests in this suite will do performance tracking. Default is False.
+#     "tests"   : (test1, test2, ...)    # Optional. The list of tests for this suite. See above for format. Tuple, list, or str. This is the ONLY inheritable attribute.
+# }
+
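+# For example, a hypothetical suite (illustrative only, not defined in CIME)
+# combining these fields could look like:
+#
+# "example_suite" : {
+#     "inherit" : "cime_tiny",
+#     "time"    : "0:20:00",
+#     "share"   : False,
+#     "tests"   : ("ERS.f19_g16_rx1.A",),
+# }
+#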
+_CIME_TESTS = {
+    "cime_tiny": {
+        "time": "0:10:00",
+        "tests": (
+            "ERS.f19_g16_rx1.A",
+            "NCK.f19_g16_rx1.A",
+        ),
+    },
+    "cime_test_only_pass": {
+        "time": "0:10:00",
+        "tests": (
+            "TESTRUNPASS_P1.f19_g16_rx1.A",
+            "TESTRUNPASS_P1.ne30_g16_rx1.A",
+            "TESTRUNPASS_P1.f45_g37_rx1.A",
+        ),
+    },
+    "cime_test_only_slow_pass": {
+        "time": "0:10:00",
+        "tests": (
+            "TESTRUNSLOWPASS_P1.f19_g16_rx1.A",
+            "TESTRUNSLOWPASS_P1.ne30_g16_rx1.A",
+            "TESTRUNSLOWPASS_P1.f45_g37_rx1.A",
+        ),
+    },
+    "cime_test_only": {
+        "time": "0:10:00",
+        "tests": (
+            "TESTBUILDFAIL_P1.f19_g16_rx1.A",
+            "TESTBUILDFAILEXC_P1.f19_g16_rx1.A",
+            "TESTRUNFAIL_P1.f19_g16_rx1.A",
+            "TESTRUNSTARCFAIL_P1.f19_g16_rx1.A",
+            "TESTRUNFAILEXC_P1.f19_g16_rx1.A",
+            "TESTRUNPASS_P1.f19_g16_rx1.A",
+            "TESTTESTDIFF_P1.f19_g16_rx1.A",
+            "TESTMEMLEAKFAIL_P1.f09_g16.X",
+            "TESTMEMLEAKPASS_P1.f09_g16.X",
+        ),
+    },
+    "cime_test_all": {
+        "inherit": "cime_test_only",
+        "time": "0:10:00",
+        "tests": "TESTRUNDIFF_P1.f19_g16_rx1.A",
+    },
+    "cime_test_share": {
+        "time": "0:10:00",
+        "share": True,
+        "tests": (
+            "SMS_P2.f19_g16_rx1.A",
+            "SMS_P4.f19_g16_rx1.A",
+            "SMS_P8.f19_g16_rx1.A",
+            "SMS_P16.f19_g16_rx1.A",
+        ),
+    },
+    "cime_test_share2": {
+        "time": "0:10:00",
+        "share": True,
+        "tests": (
+            "SMS_P2.f19_g16_rx1.X",
+            "SMS_P4.f19_g16_rx1.X",
+            "SMS_P8.f19_g16_rx1.X",
+            "SMS_P16.f19_g16_rx1.X",
+        ),
+    },
+    "cime_test_perf": {
+        "time": "0:10:00",
+        "perf": True,
+        "tests": (
+            "SMS_P2.T42_T42.S",
+            "SMS_P4.T42_T42.S",
+            "SMS_P8.T42_T42.S",
+            "SMS_P16.T42_T42.S",
+        ),
+    },
+    "cime_test_timing": {
+        "time": "0:10:00",
+        "tests": ("SMS_P1.T42_T42.S",),
+    },
+    "cime_test_repeat": {
+        "tests": (
+            "TESTRUNPASS_P1.f19_g16_rx1.A",
+            "TESTRUNPASS_P2.ne30_g16_rx1.A",
+            "TESTRUNPASS_P4.f45_g37_rx1.A",
+        )
+    },
+    "cime_test_time": {
+        "time": "0:13:00",
+        "tests": ("TESTRUNPASS_P69.f19_g16_rx1.A.testmod",),
+    },
+    "cime_test_multi_inherit": {
+        "inherit": ("cime_test_repeat", "cime_test_only_pass", "cime_test_all")
+    },
+    "cime_developer": {
+        "time": "0:15:00",
+        "tests": (
+            "NCK_Ld3.f45_g37_rx1.A",
+            "ERI_Ln9.f09_g16.X",
+            "ERIO_Ln11.f09_g16.X",
+            "SEQ_Ln9.f19_g16_rx1.A",
+            "ERS.ne30_g16_rx1.A.drv-y100k",
+            "IRT_N2_Vmct_Ln9.f19_g16_rx1.A",
+            "ERR_Ln9.f45_g37_rx1.A",
+            "ERP_Ln9.f45_g37_rx1.A",
+            "SMS_D_Ln9_Mmpi-serial.f19_g16_rx1.A",
+            "PET_Ln9_P4.f19_f19.A",
+            "PEM_Ln9_P4.f19_f19.A",
+            "SMS_Ln3.T42_T42.S",
+            "PRE.f19_f19.ADESP",
+            "PRE.f19_f19.ADESP_TEST",
+            "MCC_P1.f19_g16_rx1.A",
+            "LDSTA.f45_g37_rx1.A",
+        ),
+    },
+}
+
+_ALL_TESTS.update(_CIME_TESTS)
+
+###############################################################################
+def _get_key_data(raw_dict, key, the_type):
+    ###############################################################################
+    if key not in raw_dict:
+        if the_type is tuple:
+            return ()
+        elif the_type is str:
+            return None
+        elif the_type is bool:
+            return False
+        else:
+            expect(False, "Unsupported type {}".format(the_type))
+    else:
+        val = raw_dict[key]
+        if the_type is tuple and isinstance(val, str):
+            val = (val,)
+
+        expect(
+            isinstance(val, the_type),
+            "Wrong type for {}, {} is a {} but expected {}".format(
+                key, val, type(val), the_type
+            ),
+        )
+
+        return val
+
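+# For example (illustrative): _get_key_data({"tests": "ERS.f19_g16_rx1.A"}, "tests", tuple)
+# returns ("ERS.f19_g16_rx1.A",) (a lone string is promoted to a one-element tuple),
+# while a key absent from the dict yields the type's default: (), None, or False.
+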
+
+###############################################################################
+
+[docs] +def get_test_data(suite): + ############################################################################### + """ + For a given suite, returns (inherit, time, share, perf, tests) + """ + raw_dict = _ALL_TESTS[suite] + for key in raw_dict.keys(): + expect( + key in ["inherit", "time", "share", "perf", "tests"], + "Unexpected test key '{}'".format(key), + ) + + return ( + _get_key_data(raw_dict, "inherit", tuple), + _get_key_data(raw_dict, "time", str), + _get_key_data(raw_dict, "share", bool), + _get_key_data(raw_dict, "perf", bool), + _get_key_data(raw_dict, "tests", tuple), + )
+ + + +############################################################################### +
+[docs] +def get_test_suites(): + ############################################################################### + return list(_ALL_TESTS.keys())
+ + + +############################################################################### +
+[docs] +def get_test_suite( + suite, machine=None, compiler=None, skip_inherit=False, skip_tests=None +): + ############################################################################### + """ + Return a list of FULL test names for a suite. + """ + expect(suite in get_test_suites(), "Unknown test suite: '{}'".format(suite)) + machobj = Machines(machine=machine) + machine = machobj.get_machine_name() + + if compiler is None: + compiler = machobj.get_default_compiler() + expect( + machobj.is_valid_compiler(compiler), + "Compiler {} not valid for machine {}".format(compiler, machine), + ) + + inherits_from, _, _, _, tests_raw = get_test_data(suite) + tests = [] + for item in tests_raw: + expect( + isinstance(item, str), + "Bad type of test {}, expected string".format(item), + ) + + test_mods = None + test_components = item.split(".") + expect(len(test_components) in [3, 4], "Bad test name {}".format(item)) + + if len(test_components) == 4: + test_name = ".".join(test_components[:-1]) + test_mods = test_components[-1] + else: + test_name = item + if not skip_tests or not test_name in skip_tests: + tests.append( + CIME.utils.get_full_test_name( + test_name, + machine=machine, + compiler=compiler, + testmods_string=test_mods, + ) + ) + + if not skip_inherit: + for inherits in inherits_from: + inherited_tests = get_test_suite(inherits, machine, compiler) + + for inherited_test in inherited_tests: + if inherited_test not in tests: + tests.append(inherited_test) + + return tests
+ + + +############################################################################### +
+[docs] +def suite_has_test(suite, test_full_name, skip_inherit=False): + ############################################################################### + _, _, _, _, machine, compiler, _ = CIME.utils.parse_test_name(test_full_name) + expect(machine is not None, "{} is not a full test name".format(test_full_name)) + + tests = get_test_suite( + suite, machine=machine, compiler=compiler, skip_inherit=skip_inherit + ) + return test_full_name in tests
+ + + +############################################################################### +
+[docs] +def get_build_groups(tests): + ############################################################################### + """ + Given a list of tests, return a list of lists, with each list representing + a group of tests that can share executables. + + >>> tests = ["SMS_P2.f19_g16_rx1.A.melvin_gnu", "SMS_P4.f19_g16_rx1.A.melvin_gnu", "SMS_P2.f19_g16_rx1.X.melvin_gnu", "SMS_P4.f19_g16_rx1.X.melvin_gnu", "TESTRUNSLOWPASS_P1.f19_g16_rx1.A.melvin_gnu", "TESTRUNSLOWPASS_P1.ne30_g16_rx1.A.melvin_gnu"] + >>> get_build_groups(tests) + [('SMS_P2.f19_g16_rx1.A.melvin_gnu', 'SMS_P4.f19_g16_rx1.A.melvin_gnu'), ('SMS_P2.f19_g16_rx1.X.melvin_gnu', 'SMS_P4.f19_g16_rx1.X.melvin_gnu'), ('TESTRUNSLOWPASS_P1.f19_g16_rx1.A.melvin_gnu',), ('TESTRUNSLOWPASS_P1.ne30_g16_rx1.A.melvin_gnu',)] + """ + build_groups = [] # list of tuples ([tests], set(suites)) + + # Get a list of suites that share exes + suites = get_test_suites() + share_suites = [] + for suite in suites: + share = get_test_data(suite)[2] + if share: + share_suites.append(suite) + + # Divide tests up into build groups. Assumes that build-compatibility is transitive + for test in tests: + matched = False + + my_share_suites = set() + for suite in share_suites: + if suite_has_test(suite, test, skip_inherit=True): + my_share_suites.add(suite) + + # Try to match this test with an existing build group + if my_share_suites: + for build_group_tests, build_group_suites in build_groups: + overlap = build_group_suites & my_share_suites + if overlap: + matched = True + build_group_tests.append(test) + build_group_suites.update(my_share_suites) + break + + # Nothing matched, this test is in a build group of its own + if not matched: + build_groups.append(([test], my_share_suites)) + + return [tuple(item[0]) for item in build_groups]
+ + + +############################################################################### +
+[docs] +def is_perf_test(test): + ############################################################################### + """ + Is the provided test in a suite with perf=True? + + >>> is_perf_test("SMS_P2.T42_T42.S.melvin_gnu") + True + >>> is_perf_test("SMS_P2.f19_g16_rx1.X.melvin_gnu") + False + >>> is_perf_test("PFS_P2.f19_g16_rx1.X.melvin_gnu") + True + """ + # Get a list of performance suites + if test.startswith("PFS"): + return True + else: + suites = get_test_suites() + for suite in suites: + perf = get_test_data(suite)[3] + if perf and suite_has_test(suite, test, skip_inherit=True): + return True + + return False
+ + + +############################################################################### +
+[docs] +def infer_arch_from_tests(testargs): + ############################################################################### + """ + Return a tuple (machine, [compilers]) that can be inferred from the test args + + >>> infer_arch_from_tests(["NCK.f19_g16_rx1.A.melvin_gnu"]) + ('melvin', ['gnu']) + >>> infer_arch_from_tests(["NCK.f19_g16_rx1.A"]) + (None, []) + >>> infer_arch_from_tests(["NCK.f19_g16_rx1.A", "NCK.f19_g16_rx1.A.melvin_gnu"]) + ('melvin', ['gnu']) + >>> infer_arch_from_tests(["NCK.f19_g16_rx1.A.melvin_gnu", "NCK.f19_g16_rx1.A.melvin_gnu"]) + ('melvin', ['gnu']) + >>> infer_arch_from_tests(["NCK.f19_g16_rx1.A.melvin_gnu9", "NCK.f19_g16_rx1.A.melvin_gnu"]) + ('melvin', ['gnu9', 'gnu']) + >>> infer_arch_from_tests(["NCK.f19_g16_rx1.A.melvin_gnu", "NCK.f19_g16_rx1.A.mappy_gnu"]) + Traceback (most recent call last): + ... + CIME.utils.CIMEError: ERROR: Must have consistent machine 'melvin' != 'mappy' + """ + e3sm_test_suites = get_test_suites() + + machine = None + compilers = [] + for testarg in testargs: + testarg = testarg.strip() + if testarg.startswith("^"): + testarg = testarg[1:] + + if testarg not in e3sm_test_suites: + machine_for_this_test, compiler_for_this_test = parse_test_name(testarg)[ + 4:6 + ] + if machine_for_this_test is not None: + if machine is None: + machine = machine_for_this_test + else: + expect( + machine == machine_for_this_test, + "Must have consistent machine '%s' != '%s'" + % (machine, machine_for_this_test), + ) + + if ( + compiler_for_this_test is not None + and compiler_for_this_test not in compilers + ): + compilers.append(compiler_for_this_test) + + return machine, compilers
+ + + +############################################################################### +
+[docs] +def get_full_test_names(testargs, machine, compiler): + ############################################################################### + """ + Return full test names in the form: + TESTCASE.GRID.COMPSET.MACHINE_COMPILER.TESTMODS + Testmods are optional + + Testargs can be categories or test names and support the NOT symbol '^' + + >>> get_full_test_names(["cime_tiny"], "melvin", "gnu") + ['ERS.f19_g16_rx1.A.melvin_gnu', 'NCK.f19_g16_rx1.A.melvin_gnu'] + + >>> get_full_test_names(["cime_tiny", "PEA_P1_M.f45_g37_rx1.A"], "melvin", "gnu") + ['ERS.f19_g16_rx1.A.melvin_gnu', 'NCK.f19_g16_rx1.A.melvin_gnu', 'PEA_P1_M.f45_g37_rx1.A.melvin_gnu'] + + >>> get_full_test_names(['ERS.f19_g16_rx1.A', 'NCK.f19_g16_rx1.A', 'PEA_P1_M.f45_g37_rx1.A'], "melvin", "gnu") + ['ERS.f19_g16_rx1.A.melvin_gnu', 'NCK.f19_g16_rx1.A.melvin_gnu', 'PEA_P1_M.f45_g37_rx1.A.melvin_gnu'] + + >>> get_full_test_names(["cime_tiny", "^NCK.f19_g16_rx1.A"], "melvin", "gnu") + ['ERS.f19_g16_rx1.A.melvin_gnu'] + + >>> get_full_test_names(["cime_test_multi_inherit"], "melvin", "gnu") + ['TESTBUILDFAILEXC_P1.f19_g16_rx1.A.melvin_gnu', 'TESTBUILDFAIL_P1.f19_g16_rx1.A.melvin_gnu', 'TESTMEMLEAKFAIL_P1.f09_g16.X.melvin_gnu', 'TESTMEMLEAKPASS_P1.f09_g16.X.melvin_gnu', 'TESTRUNDIFF_P1.f19_g16_rx1.A.melvin_gnu', 'TESTRUNFAILEXC_P1.f19_g16_rx1.A.melvin_gnu', 'TESTRUNFAIL_P1.f19_g16_rx1.A.melvin_gnu', 'TESTRUNPASS_P1.f19_g16_rx1.A.melvin_gnu', 'TESTRUNPASS_P1.f45_g37_rx1.A.melvin_gnu', 'TESTRUNPASS_P1.ne30_g16_rx1.A.melvin_gnu', 'TESTRUNPASS_P2.ne30_g16_rx1.A.melvin_gnu', 'TESTRUNPASS_P4.f45_g37_rx1.A.melvin_gnu', 'TESTRUNSTARCFAIL_P1.f19_g16_rx1.A.melvin_gnu', 'TESTTESTDIFF_P1.f19_g16_rx1.A.melvin_gnu'] + """ + expect(machine is not None, "Must define a machine") + expect(compiler is not None, "Must define a compiler") + e3sm_test_suites = get_test_suites() + + tests_to_run = set() + negations = set() + + for testarg in testargs: + # remove any whitespace in name + testarg = testarg.strip() + if 
testarg.startswith("^"): + negations.add(testarg[1:]) + elif testarg in e3sm_test_suites: + tests_to_run.update(get_test_suite(testarg, machine, compiler)) + else: + try: + tests_to_run.add( + CIME.utils.get_full_test_name( + testarg, machine=machine, compiler=compiler + ) + ) + except Exception: + if "." not in testarg: + expect(False, "Unrecognized test suite '{}'".format(testarg)) + else: + raise + + for negation in negations: + if negation in e3sm_test_suites: + tests_to_run -= set(get_test_suite(negation, machine, compiler)) + else: + fullname = CIME.utils.get_full_test_name( + negation, machine=machine, compiler=compiler + ) + if fullname in tests_to_run: + tests_to_run.remove(fullname) + + return list(sorted(tests_to_run))
+ + + +############################################################################### + + + + +############################################################################### +
+[docs] +def key_test_time(test_full_name): + ############################################################################### + result = get_recommended_test_time(test_full_name) + return 99999999 if result is None else convert_to_seconds(result)
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/get_timing.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/get_timing.html new file mode 100644 index 00000000000..7cb2bbcdced --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/get_timing.html @@ -0,0 +1,1054 @@ + + + + + + CIME.get_timing — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.get_timing

+#!/usr/bin/env python3
+
+"""
+Library implementing the getTiming tool, which extracts timing
+information from a run.
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.utils import safe_copy
+
+import datetime, re
+
+logger = logging.getLogger(__name__)
+
+
+class _GetTimingInfo:
+    def __init__(self, name):
+        self.name = name
+        self.tmin = 0
+        self.tmax = 0
+        self.adays = 0
+
+
+class _TimingParser:
+    def __init__(self, case, lid="999999-999999"):
+        self.case = case
+        self.caseroot = case.get_value("CASEROOT")
+        self.lid = lid
+        self.finlines = None
+        self.fout = None
+        self.adays = 0
+        self._driver = case.get_value("COMP_INTERFACE")
+        self.models = {}
+        self.ncount = 0
+        self.nprocs = 0
+        self.version = -1
+
+    def write(self, text):
+        self.fout.write(text)
+
+    def prttime(self, label, offset=None, div=None, coff=-999):
+        if offset is None:
+            offset = self.models["CPL"].offset
+        if div is None:
+            div = self.adays
+        datalen = 20
+        cstr = "<---->"
+        clen = len(cstr)
+
+        minval, maxval, found = self.gettime(label)
+        if div >= 1.0:
+            mind = minval / div
+            maxd = maxval / div
+        else:
+            mind = minval
+            maxd = maxval
+
+        pstrlen = 25
+        if mind >= 0 and maxd >= 0 and found:
+            if coff >= 0:
+                zoff = pstrlen + coff + int((datalen - clen) / 2)
+                csp = offset - coff - int((datalen - clen) / 2)
+                self.write(
+                    " {label:<{width1}}{cstr:<{width2}} {minv:8.3f}:{maxv:8.3f} \n".format(
+                        label=label,
+                        width1=zoff,
+                        cstr=cstr,
+                        width2=csp,
+                        minv=mind,
+                        maxv=maxd,
+                    )
+                )
+            else:
+                zoff = pstrlen + offset
+                self.write(
+                    " {label:<{width1}} {minv:8.3f}:{maxv:8.3f} \n".format(
+                        label=label, width1=zoff, minv=mind, maxv=maxd
+                    )
+                )
+
+    def gettime2(self, heading_padded):
+        if self._driver == "mct" or self._driver == "moab":
+            return self._gettime2_mct(heading_padded)
+        elif self._driver == "nuopc":
+            if self.version < 0:
+                self._get_esmf_profile_version()
+            return self._gettime2_nuopc()
+
+    def _gettime2_mct(self, heading_padded):
+        nprocs = 0
+        ncount = 0
+
+        heading = '"' + heading_padded.strip() + '"'
+        for line in self.finlines:
+            m = re.match(r"\s*{}\s+\S\s+(\d+)\s*\d+\s*(\S+)".format(heading), line)
+            if m:
+                nprocs = int(float(m.groups()[0]))
+                ncount = int(float(m.groups()[1]))
+                return (nprocs, ncount)
+            else:
+                m = re.match(r"\s*{}\s+\S\s+(\d+)\s".format(heading), line)
+                if m:
+                    nprocs = 1
+                    ncount = int(float(m.groups()[0]))
+                    return (nprocs, ncount)
+        return (0, 0)
+
+    def _gettime2_nuopc(self):
+        self.nprocs = 0
+        self.ncount = 0
+        if self.version < 0:
+            self._get_esmf_profile_version()
+        if self.version == 0:
+            expression = re.compile(r"\s*\[ATM]\s*RunPhase1\s+(\d+)\s+(\d+)")
+        else:
+            expression = re.compile(r"\s*\[ATM]\s*RunPhase1\s+\d+\s+(\d+)\s+(\d+)")
+
+        for line in self.finlines:
+            match = expression.match(line)
+            if match:
+                self.nprocs = int(match.group(1))
+                self.ncount = int(match.group(2))
+                return (self.nprocs, self.ncount)
+
+        return (0, 0)
+
+    def gettime(self, heading_padded):
+        if self._driver == "mct" or self._driver == "moab":
+            return self._gettime_mct(heading_padded)
+        elif self._driver == "nuopc":
+            if self.version < 0:
+                self._get_esmf_profile_version()
+            return self._gettime_nuopc(heading_padded)
+
+    def _gettime_mct(self, heading_padded):
+        found = False
+        heading = '"' + heading_padded.strip() + '"'
+        minval = 0
+        maxval = 0
+        for line in self.finlines:
+            m = re.match(
+                r"\s*{}\s+\S\s+\d+\s*\d+\s*\S+\s*\S+\s*(\d*\.\d+)\s*\(.*\)\s*(\d*\.\d+)\s*\(.*\)".format(
+                    heading
+                ),
+                line,
+            )
+            if m:
+                maxval = float(m.groups()[0])
+                minval = float(m.groups()[1])
+                found = True
+                return (minval, maxval, found)
+        return (0, 0, False)
+
+    def _get_esmf_profile_version(self):
+        """
+        Prior to ESMF8_3_0_beta_snapshot_04, the PEs column was not present in ESMF_Profile.summary.
+        This routine looks for that column in the header line to determine whether the file was
+        produced by a newer (version 1) or older (version 0) ESMF library.
+        """
+        expect(self.finlines, " No ESMF_Profile.summary file found")
+        for line in self.finlines:
+            if line.startswith("Region"):
+                if "PEs" in line:
+                    self.version = 1
+                else:
+                    self.version = 0
+
+    def _gettime_nuopc(self, heading, instance="0001"):
+        if instance == "":
+            instance = "0001"
+        minval = 0
+        maxval = 0
+        m = None
+        timeline = []
+        #  PETs   Count    Mean (s)    Min (s)     Min PET Max (s)     Max PET
+        timeline.append(
+            re.compile(
+                r"\s*{}\s+\d+\s+\d+\s+(\d*\.\d+)\s+(\d*\.\d+)\s+\d+\s+(\d*\.\d+)\s+\d+".format(
+                    re.escape(heading)
+                )
+            )
+        )
+        #  PETs   PEs  Count    Mean (s)    Min (s)     Min PET Max (s)     Max PET
+        timeline.append(
+            re.compile(
+                r"\s*{}\s+\d+\s+\d+\s+\d+\s+(\d*\.\d+)\s+(\d*\.\d+)\s+\d+\s+(\d*\.\d+)\s+\d+".format(
+                    re.escape(heading)
+                )
+            )
+        )
+        phase = None
+        for line in self.finlines:
+            phase = self._get_nuopc_phase(line, instance, phase)
+            if phase != "run" and not "[ensemble]" in heading:
+                continue
+            if heading in line:
+                m = timeline[self.version].match(line)
+                if m:
+                    minval = float(m.group(2))
+                    maxval = float(m.group(3))
+                    return (minval, maxval, True)
+                else:
+                    expect(False, "Parsing error in ESMF_Profile.summary file")
+
+        return (0, 0, False)
+
+    @staticmethod
+    def _get_nuopc_phase(line, instance, phase):
+        if "[ensemble] Init 1" in line:
+            phase = "init"
+        elif "[ESM" + instance + "] RunPhase1" in line:
+            phase = "run"
+        elif "[ESM" + instance + "] Finalize" in line:
+            phase = "finalize"
+        elif "[ESM" in line and "RunPhase1" in line:
+            phase = "other"
+        return phase
+
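+    # _get_nuopc_phase maps profile lines to phases, e.g. (illustrative, instance="0001"):
+    #   a line containing "[ensemble] Init 1"   -> "init"
+    #   a line containing "[ESM0001] RunPhase1" -> "run"
+    #   a line containing "[ESM0001] Finalize"  -> "finalize"
+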
+    def getMEDtime(self, instance):
+        if instance == "":
+            instance = "0001"
+
+        med_phase_line = []
+        med_connector_line = []
+        med_fraction_line = []
+        med_phase_line.append(
+            re.compile(r"\s*(\[MED\] med_phases\S+)\s+\d+\s+\d+\s+(\d*\.\d+)\s+")
+        )
+        med_connector_line.append(
+            re.compile(r"\s*(\[MED\] med_connectors\S+)\s+\d+\s+\d+\s+(\d*\.\d+)\s+")
+        )
+        med_fraction_line.append(
+            re.compile(r"\s*(\[MED\] med_fraction\S+)\s+\d+\s+\d+\s+(\d*\.\d+)\s+")
+        )
+        med_phase_line.append(
+            re.compile(r"\s*(\[MED\] med_phases\S+)\s+\d+\s+\d+\s+\d+\s+(\d*\.\d+)\s+")
+        )
+        med_connector_line.append(
+            re.compile(
+                r"\s*(\[MED\] med_connectors\S+)\s+\d+\s+\d+\s+\d+\s+(\d*\.\d+)\s+"
+            )
+        )
+        med_fraction_line.append(
+            re.compile(
+                r"\s*(\[MED\] med_fraction\S+)\s+\d+\s+\d+\s+\d+\s+(\d*\.\d+)\s+"
+            )
+        )
+
+        m = None
+        minval = 0
+        maxval = 0
+        phase = None
+        for line in self.finlines:
+            phase = self._get_nuopc_phase(line, instance, phase)
+            if phase != "run":
+                continue
+            m = med_phase_line[self.version].match(line)
+            if not m:
+                m = med_connector_line[self.version].match(line)
+            if not m:
+                m = med_fraction_line[self.version].match(line)
+            if m:
+                minval += float(m.group(2))
+                maxval += float(m.group(2))
+
+        return (minval, maxval)
+
+    def getCOMMtime(self, instance):
+        if instance == "":
+            instance = "0001"
+        comm_line = []
+        comm_line.append(
+            re.compile(r"\s*(\[\S+-TO-\S+\] RunPhase1)\s+\d+\s+\d+\s+(\d*\.\d+)\s+")
+        )
+        comm_line.append(
+            re.compile(
+                r"\s*(\[\S+-TO-\S+\] RunPhase1)\s+\d+\s+\d+\s+\d+\s+(\d*\.\d+)\s+"
+            )
+        )
+        m = None
+        maxval = 0
+        phase = None
+        for line in self.finlines:
+            phase = self._get_nuopc_phase(line, instance, phase)
+            if phase != "run":
+                continue
+            m = comm_line[self.version].match(line)
+            if m:
+                heading = m.group(1)
+                maxv = float(m.group(2))
+                maxval += maxv
+                logger.debug("{} time={} sum={}".format(heading, maxv, maxval))
+        return maxval
+
+    def getTiming(self):
+        ninst = 1
+        multi_driver = self.case.get_value("MULTI_DRIVER")
+        if multi_driver:
+            ninst = self.case.get_value("NINST_MAX")
+
+        if ninst > 1:
+            for inst in range(ninst):
+                self._getTiming(inst + 1)
+        else:
+            self._getTiming()
+
+    def _getTiming(self, inst=0):
+        components = self.case.get_values("COMP_CLASSES")
+        for s in components:
+            self.models[s] = _GetTimingInfo(s)
+        atm = None
+        lnd = None
+        rof = None
+        ice = None
+        ocn = None
+        glc = None
+        cpl = None
+        if "ATM" in self.models:
+            atm = self.models["ATM"]
+        if "LND" in self.models:
+            lnd = self.models["LND"]
+        if "ROF" in self.models:
+            rof = self.models["ROF"]
+        if "ICE" in self.models:
+            ice = self.models["ICE"]
+        if "OCN" in self.models:
+            ocn = self.models["OCN"]
+        if "GLC" in self.models:
+            glc = self.models["GLC"]
+        if "CPL" in self.models:
+            cpl = self.models["CPL"]
+
+        cime_model = self.case.get_value("MODEL")
+        caseid = self.case.get_value("CASE")
+        mach = self.case.get_value("MACH")
+        user = self.case.get_value("USER")
+        continue_run = self.case.get_value("CONTINUE_RUN")
+        rundir = self.case.get_value("RUNDIR")
+        run_type = self.case.get_value("RUN_TYPE")
+        ncpl_base_period = self.case.get_value("NCPL_BASE_PERIOD")
+        ncpl = 0
+        ocn_ncpl = None
+        for compclass in self.case.get_values("COMP_CLASSES"):
+            comp_ncpl = self.case.get_value("{}_NCPL".format(compclass))
+            if compclass == "OCN":
+                ocn_ncpl = comp_ncpl
+            if comp_ncpl is not None:
+                ncpl = max(ncpl, comp_ncpl)
+
+        compset = self.case.get_value("COMPSET")
+        if compset is None:
+            compset = ""
+        grid = self.case.get_value("GRID")
+        run_type = self.case.get_value("RUN_TYPE")
+        stop_option = self.case.get_value("STOP_OPTION")
+        stop_n = self.case.get_value("STOP_N")
+
+        cost_pes = self.case.get_value("COST_PES")
+        costpes_per_node = self.case.get_value("COSTPES_PER_NODE")
+
+        totalpes = self.case.get_value("TOTALPES")
+        max_mpitasks_per_node = self.case.get_value("MAX_MPITASKS_PER_NODE")
+        smt_factor = max(
+            1, int(self.case.get_value("MAX_TASKS_PER_NODE") / max_mpitasks_per_node)
+        )
+
+        if cost_pes > 0:
+            pecost = cost_pes
+        elif costpes_per_node:
+            pecost = self.case.num_nodes * costpes_per_node
+        else:
+            pecost = totalpes
+
+        for m in self.models.values():
+            for key in ["NTASKS", "ROOTPE", "PSTRID", "NTHRDS", "NINST"]:
+                if key == "NINST" and m.name == "CPL":
+                    m.ninst = 1
+                else:
+                    setattr(
+                        m,
+                        key.lower(),
+                        int(self.case.get_value("{}_{}".format(key, m.name))),
+                    )
+
+            m.comp = self.case.get_value("COMP_{}".format(m.name))
+            m.pemax = m.rootpe + m.ntasks * m.pstrid - 1
+
+        now = datetime.datetime.ctime(datetime.datetime.now())
+        inittype = "FALSE"
+        if (run_type == "startup" or run_type == "hybrid") and not continue_run:
+            inittype = "TRUE"
+
+        if inst > 0:
+            inst_label = "_{:04d}".format(inst)
+        else:
+            inst_label = ""
+        if self._driver == "mct" or self._driver == "moab":
+            binfilename = os.path.join(
+                rundir, "timing", "model_timing{}_stats".format(inst_label)
+            )
+            finfilename = os.path.join(
+                self.caseroot,
+                "timing",
+                "{}_timing{}_stats.{}".format(cime_model, inst_label, self.lid),
+            )
+        elif self._driver == "nuopc":
+            binfilename = os.path.join(rundir, "ESMF_Profile.summary")
+            finfilename = os.path.join(
+                self.caseroot,
+                "timing",
+                "{}.ESMF_Profile.summary.{}".format(cime_model, self.lid),
+            )
+
+        foutfilename = os.path.join(
+            self.caseroot,
+            "timing",
+            "{}_timing{}.{}.{}".format(cime_model, inst_label, caseid, self.lid),
+        )
+
+        timingDir = os.path.join(self.caseroot, "timing")
+        if not os.path.isfile(binfilename):
+            logger.warning("No timing file found in run directory")
+            return
+
+        if not os.path.isdir(timingDir):
+            os.makedirs(timingDir)
+
+        safe_copy(binfilename, finfilename)
+
+        os.chdir(self.caseroot)
+        try:
+            fin = open(finfilename, "r")
+            self.finlines = fin.readlines()
+            fin.close()
+        except Exception as e:
+            logger.critical("Unable to open file {}".format(finfilename))
+            raise e
+
+        tlen = 1.0
+        if ncpl_base_period == "decade":
+            tlen = 3650.0
+        elif ncpl_base_period == "year":
+            tlen = 365.0
+        elif ncpl_base_period == "day":
+            tlen = 1.0
+        elif ncpl_base_period == "hour":
+            tlen = 1.0 / 24.0
+        else:
+            logger.warning("Unknown NCPL_BASE_PERIOD={}".format(ncpl_base_period))
+
+        # at this point the routine becomes driver specific
+        if self._driver == "mct" or self._driver == "moab":
+            nprocs, ncount = self.gettime2("CPL:CLOCK_ADVANCE ")
+            nsteps = ncount / nprocs
+        elif self._driver == "nuopc":
+            nprocs, nsteps = self.gettime2("")
+        adays = nsteps * tlen / ncpl
+        odays = nsteps * tlen / ncpl
+        if ocn_ncpl and inittype == "TRUE":
+            odays = odays - (tlen / ocn_ncpl)
+
+        peminmax = max([m.rootpe for m in self.models.values()]) + 1
+        if ncpl_base_period in ["decade", "year", "day"] and int(adays) > 0:
+            adays = int(adays)
+            if ocn_ncpl and tlen % ocn_ncpl == 0:
+                odays = int(odays)
+        self.adays = adays
+        maxoffset = 40
+        extraoff = 20
+        for m in self.models.values():
+            m.offset = int((maxoffset * m.rootpe) / peminmax) + extraoff
+        if cpl:
+            cpl.offset = 0
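The loop above staggers each component's display column by its root PE so the min:max timer listing lines up under the right component. A minimal sketch of that arithmetic (the root PEs and `peminmax=8` here are hypothetical):

```python
maxoffset = 40
extraoff = 20
peminmax = 8  # one more than the largest root PE, as computed above

def display_offset(rootpe):
    # components rooted on higher PEs are shifted further right
    return int((maxoffset * rootpe) / peminmax) + extraoff

offsets = [display_offset(rootpe) for rootpe in (0, 3, 7)]
```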
+        try:
+            self.fout = open(foutfilename, "w")
+        except Exception as e:
+            logger.critical("Could not open file for writing: {}".format(foutfilename))
+            raise e
+
+        self.write("---------------- TIMING PROFILE ---------------------\n")
+
+        self.write("  Case        : {}\n".format(caseid))
+        self.write("  LID         : {}\n".format(self.lid))
+        self.write("  Machine     : {}\n".format(mach))
+        self.write("  Caseroot    : {}\n".format(self.caseroot))
+        self.write("  Timeroot    : {}/Tools\n".format(self.caseroot))
+        self.write("  User        : {}\n".format(user))
+        self.write("  Curr Date   : {}\n".format(now))
+        if self._driver == "nuopc":
+            self.write("  Driver      : CMEPS\n")
+        elif self._driver == "mct" or self._driver == "moab":
+            self.write("  Driver      : CPL7\n")
+
+        self.write("  grid        : {}\n".format(grid))
+        self.write("  compset     : {}\n".format(compset))
+        self.write(
+            "  run type    : {}, continue_run = {} (inittype = {})\n".format(
+                run_type, str(continue_run).upper(), inittype
+            )
+        )
+        self.write("  stop option : {}, stop_n = {}\n".format(stop_option, stop_n))
+        self.write("  run length  : {} days ({} for ocean)\n\n".format(adays, odays))
+
+        self.write(
+            "  component       comp_pes    root_pe   tasks  "
+            "x threads"
+            " instances (stride) \n"
+        )
+        self.write(
+            "  ---------        ------     -------   ------   "
+            "------  ---------  ------  \n"
+        )
+        maxthrds = 0
+        xmax = 0
+        for k in self.case.get_values("COMP_CLASSES"):
+            m = self.models[k]
+            if m.comp == "cpl":
+                comp_label = m.comp + inst_label
+            else:
+                comp_label = m.comp
+            self.write(
+                "  {} = {:<8s}   {:<6d}      {:<6d}   {:<6d} x {:<6d}  {:<6d} ({:<6d}) \n".format(
+                    m.name.lower(),
+                    comp_label,
+                    (m.ntasks * m.nthrds),
+                    m.rootpe,
+                    m.ntasks,
+                    m.nthrds,
+                    m.ninst,
+                    m.pstrid,
+                )
+            )
+            if m.nthrds > maxthrds:
+                maxthrds = m.nthrds
+        if self._driver == "nuopc":
+            for k in components:
+                m = self.models[k]
+                if k != "CPL":
+                    m.tmin, m.tmax, _ = self._gettime_nuopc(
+                        " [{}] RunPhase1 ".format(m.name), inst_label[1:]
+                    )
+                else:
+                    m.tmin, m.tmax = self.getMEDtime(inst_label[1:])
+            nmax = self.gettime("[ensemble] Init 1")[1]
+            tmax = self.gettime("[ensemble] RunPhase1")[1]
+            fmax = self.gettime("[ensemble] FinalizePhase1")[1]
+            xmax = self.getCOMMtime(inst_label[1:])
+
+        if self._driver == "mct" or self._driver == "moab":
+            for k in components:
+                if k != "CPL":
+                    m = self.models[k]
+                    m.tmin, m.tmax, _ = self.gettime(" CPL:{}_RUN ".format(m.name))
+            nmax = self.gettime(" CPL:INIT ")[1]
+            tmax = self.gettime(" CPL:RUN_LOOP ")[1]
+            wtmin = self.gettime(" CPL:TPROF_WRITE ")[0]
+            fmax = self.gettime(" CPL:FINAL ")[1]
+            otmin, otmax, _ = self.gettime(" CPL:OCNT_RUN ")
+
+            # pick OCNT_RUN for tight coupling
+            if otmax > ocn.tmax:
+                ocn.tmin = otmin
+                ocn.tmax = otmax
+
+            cpl.tmin, cpl.tmax, _ = self.gettime(" CPL:RUN ")
+            xmax = self.gettime(" CPL:COMM ")[1]
+            ocnwaittime = self.gettime(" CPL:C2O_INITWAIT")[0]
+
+            if odays != 0:
+                ocnrunitime = ocn.tmax * (adays / odays - 1.0)
+            else:
+                ocnrunitime = 0.0
+
+            correction = max(0, ocnrunitime - ocnwaittime)
+
+            tmax = tmax + wtmin + correction
+            ocn.tmax += ocnrunitime
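The correction above estimates ocean work hidden by lagged coupling: when the ocean simulates fewer days than the rest of the model, its measured run time is scaled up by the day deficit and any already-measured initial wait is subtracted. A small numeric sketch with made-up timings:

```python
adays = 5.0         # simulated days for the atmosphere/coupler (hypothetical)
odays = 4.75        # simulated days for the lagged ocean (hypothetical)
ocn_tmax = 950.0    # measured ocean run time in seconds (hypothetical)
ocnwaittime = 30.0  # measured initial ocean wait in seconds (hypothetical)

# extra ocean run time implied by the day deficit, as in the routine above
ocnrunitime = ocn_tmax * (adays / odays - 1.0)
correction = max(0, ocnrunitime - ocnwaittime)
```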
+
+        for m in self.models.values():
+            m.tmaxr = 0
+            if m.tmax > 0:
+                m.tmaxr = adays * 86400.0 / (m.tmax * 365.0)
+        xmaxr = 0
+        if xmax > 0:
+            xmaxr = adays * 86400.0 / (xmax * 365.0)
+        tmaxr = 0
+        if tmax > 0:
+            tmaxr = adays * 86400.0 / (tmax * 365.0)
+
+        self.write("\n")
+        self.write("  total pes active           : {} \n".format(totalpes * smt_factor))
+        self.write("  mpi tasks per node         : {} \n".format(max_mpitasks_per_node))
+        self.write("  pe count for cost estimate : {} \n".format(pecost))
+        self.write("\n")
+
+        self.write("  Overall Metrics: \n")
+        if adays > 0:
+            self.write(
+                "    Model Cost:         {:10.2f}   pe-hrs/simulated_year \n".format(
+                    (tmax * 365.0 * pecost) / (3600.0 * adays)
+                )
+            )
+        if tmax > 0:
+            self.write(
+                "    Model Throughput:   {:10.2f}   simulated_years/day \n".format(
+                    (86400.0 * adays) / (tmax * 365.0)
+                )
+            )
+
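The two overall metrics are simple rescalings of the total run time: cost in PE-hours per simulated year and throughput in simulated years per wall-clock day. A sketch with hypothetical numbers:

```python
adays = 5.0    # simulated days in this run (hypothetical)
tmax = 2000.0  # total run time in seconds (hypothetical)
pecost = 1024  # PE count used for the cost estimate (hypothetical)

# same formulas as the Overall Metrics section above
model_cost = (tmax * 365.0 * pecost) / (3600.0 * adays)  # pe-hrs/simulated_year
throughput = (86400.0 * adays) / (tmax * 365.0)          # simulated_years/day
```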
+        self.write("\n")
+
+        self.write("    Init Time   :  {:10.3f} seconds \n".format(nmax))
+        if adays > 0:
+            self.write(
+                "    Run Time    :  {:10.3f} seconds   {:10.3f} seconds/day \n".format(
+                    tmax, tmax / adays
+                )
+            )
+        self.write("    Final Time  :  {:10.3f} seconds \n".format(fmax))
+
+        self.write("\n")
+        if self._driver == "mct" or self._driver == "moab":
+            self.write(
+                "    Actual Ocn Init Wait Time     :  {:10.3f} seconds \n".format(
+                    ocnwaittime
+                )
+            )
+            self.write(
+                "    Estimated Ocn Init Run Time   :  {:10.3f} seconds \n".format(
+                    ocnrunitime
+                )
+            )
+            self.write(
+                "    Estimated Run Time Correction :  {:10.3f} seconds \n".format(
+                    correction
+                )
+            )
+            self.write(
+                "      (This correction has been applied to the ocean and"
+                " total run times) \n"
+            )
+
+        self.write("\n")
+        self.write(
+            "Run times in total seconds, seconds/model-day, and"
+            " model-years/wall-day \n"
+        )
+        self.write(
+            "CPL Run Time represents time in CPL pes alone, "
+            "not including time associated with data exchange "
+            "with other components \n"
+        )
+        self.write("\n")
+
+        if adays > 0:
+            self.write(
+                "    TOT Run Time:  {:10.3f} seconds   {:10.3f} seconds/mday   {:10.2f} myears/wday \n".format(
+                    tmax, tmax / adays, tmaxr
+                )
+            )
+            for k in self.case.get_values("COMP_CLASSES"):
+                m = self.models[k]
+                self.write(
+                    "    {} Run Time:  {:10.3f} seconds   {:10.3f} seconds/mday   {:10.2f} myears/wday \n".format(
+                        k, m.tmax, m.tmax / adays, m.tmaxr
+                    )
+                )
+            self.write(
+                "    CPL COMM Time: {:10.3f} seconds   {:10.3f} seconds/mday   {:10.2f} myears/wday \n".format(
+                    xmax, xmax / adays, xmaxr
+                )
+            )
+
+            pstrlen = 25
+            hoffset = 1
+            self.write("   NOTE: min:max driver timers (seconds/day):   \n")
+
+            for k in self.case.get_values("COMP_CLASSES"):
+                m = self.models[k]
+                xspace = (pstrlen + hoffset + m.offset) * " "
+                self.write(
+                    " {} {} (pes {:d} to {:d}) \n".format(xspace, k, m.rootpe, m.pemax)
+                )
+            self.write("\n")
+
+            self.prttime(" CPL:CLOCK_ADVANCE ")
+            self.prttime(" CPL:OCNPRE1_BARRIER ")
+            self.prttime(" CPL:OCNPRE1 ")
+            self.prttime(" CPL:ATMOCN1_BARRIER ")
+            self.prttime(" CPL:ATMOCN1 ")
+            self.prttime(" CPL:OCNPREP_BARRIER ")
+            self.prttime(" CPL:OCNPREP ")
+            self.prttime(
+                " CPL:C2O_BARRIER ", offset=ocn.offset, div=odays, coff=cpl.offset
+            )
+            self.prttime(" CPL:C2O ", offset=ocn.offset, div=odays, coff=cpl.offset)
+            self.prttime(" CPL:LNDPREP_BARRIER ")
+            self.prttime(" CPL:LNDPREP ")
+            self.prttime(" CPL:C2L_BARRIER ", offset=lnd.offset, coff=cpl.offset)
+            self.prttime(" CPL:C2L ", offset=lnd.offset, coff=cpl.offset)
+            self.prttime(" CPL:ICEPREP_BARRIER ")
+            self.prttime(" CPL:ICEPREP ")
+            self.prttime(" CPL:C2I_BARRIER ", offset=ice.offset, coff=cpl.offset)
+            self.prttime(" CPL:C2I ", offset=ice.offset, coff=cpl.offset)
+            self.prttime(" CPL:WAVPREP_BARRIER ")
+            self.prttime(" CPL:WAVPREP ")
+            self.prttime(" CPL:C2W_BARRIER ", offset=ice.offset, coff=cpl.offset)
+            self.prttime(" CPL:C2W ", offset=ice.offset, coff=cpl.offset)
+            self.prttime(" CPL:ROFPREP_BARRIER ")
+            self.prttime(" CPL:ROFPREP ")
+            self.prttime(" CPL:C2R_BARRIER ", offset=rof.offset, coff=cpl.offset)
+            self.prttime(" CPL:C2R ", offset=rof.offset, coff=cpl.offset)
+            self.prttime(" CPL:ICE_RUN_BARRIER ", offset=ice.offset)
+            self.prttime(" CPL:ICE_RUN ", offset=ice.offset)
+            self.prttime(" CPL:LND_RUN_BARRIER ", offset=lnd.offset)
+            self.prttime(" CPL:LND_RUN ", offset=lnd.offset)
+            self.prttime(" CPL:ROF_RUN_BARRIER ", offset=rof.offset)
+            self.prttime(" CPL:ROF_RUN ", offset=rof.offset)
+            self.prttime(" CPL:WAV_RUN_BARRIER ", offset=rof.offset)
+            self.prttime(" CPL:WAV_RUN ", offset=rof.offset)
+            self.prttime(" CPL:OCNT_RUN_BARRIER ", offset=ocn.offset, div=odays)
+            self.prttime(" CPL:OCNT_RUN ", offset=ocn.offset, div=odays)
+            self.prttime(
+                " CPL:O2CT_BARRIER ", offset=ocn.offset, div=odays, coff=cpl.offset
+            )
+            self.prttime(" CPL:O2CT ", offset=ocn.offset, div=odays, coff=cpl.offset)
+            self.prttime(" CPL:OCNPOSTT_BARRIER ")
+            self.prttime(" CPL:OCNPOSTT ")
+            self.prttime(" CPL:ATMOCNP_BARRIER ")
+            self.prttime(" CPL:ATMOCNP ")
+            self.prttime(" CPL:L2C_BARRIER ", offset=lnd.offset, coff=cpl.offset)
+            self.prttime(" CPL:L2C ", offset=lnd.offset, coff=cpl.offset)
+            self.prttime(" CPL:LNDPOST_BARRIER ")
+            self.prttime(" CPL:LNDPOST ")
+            self.prttime(" CPL:GLCPREP_BARRIER ")
+            self.prttime(" CPL:GLCPREP ")
+            self.prttime(" CPL:C2G_BARRIER ", offset=glc.offset, coff=cpl.offset)
+            self.prttime(" CPL:C2G ", offset=glc.offset, coff=cpl.offset)
+            self.prttime(" CPL:R2C_BARRIER ", offset=rof.offset, coff=cpl.offset)
+            self.prttime(" CPL:R2C ", offset=rof.offset, coff=cpl.offset)
+            self.prttime(" CPL:ROFPOST_BARRIER ")
+            self.prttime(" CPL:ROFPOST ")
+            self.prttime(" CPL:BUDGET1_BARRIER ")
+            self.prttime(" CPL:BUDGET1 ")
+            self.prttime(" CPL:I2C_BARRIER ", offset=ice.offset, coff=cpl.offset)
+            self.prttime(" CPL:I2C ", offset=ice.offset, coff=cpl.offset)
+            self.prttime(" CPL:ICEPOST_BARRIER ")
+            self.prttime(" CPL:ICEPOST ")
+            self.prttime(" CPL:FRACSET_BARRIER ")
+            self.prttime(" CPL:FRACSET ")
+            self.prttime(" CPL:ATMOCN2_BARRIER ")
+            self.prttime(" CPL:ATMOCN2 ")
+            self.prttime(" CPL:OCNPRE2_BARRIER ")
+            self.prttime(" CPL:OCNPRE2 ")
+            self.prttime(
+                " CPL:C2O2_BARRIER ", offset=ocn.offset, div=odays, coff=cpl.offset
+            )
+            self.prttime(" CPL:C2O2 ", offset=ocn.offset, div=odays, coff=cpl.offset)
+            self.prttime(" CPL:ATMOCNQ_BARRIER")
+            self.prttime(" CPL:ATMOCNQ ")
+            self.prttime(" CPL:ATMPREP_BARRIER ")
+            self.prttime(" CPL:ATMPREP ")
+            self.prttime(" CPL:C2A_BARRIER ", offset=atm.offset, coff=cpl.offset)
+            self.prttime(" CPL:C2A ", offset=atm.offset, coff=cpl.offset)
+            self.prttime(" CPL:OCN_RUN_BARRIER ", offset=ocn.offset, div=odays)
+            self.prttime(" CPL:OCN_RUN ", offset=ocn.offset, div=odays)
+            self.prttime(" CPL:ATM_RUN_BARRIER ", offset=atm.offset)
+            self.prttime(" CPL:ATM_RUN ", offset=atm.offset)
+            self.prttime(" CPL:GLC_RUN_BARRIER ", offset=glc.offset)
+            self.prttime(" CPL:GLC_RUN ", offset=glc.offset)
+            self.prttime(" CPL:W2C_BARRIER ", offset=glc.offset, coff=cpl.offset)
+            self.prttime(" CPL:W2C ", offset=glc.offset, coff=cpl.offset)
+            self.prttime(" CPL:WAVPOST_BARRIER ")
+            self.prttime(" CPL:WAVPOST ")
+            self.prttime(" CPL:G2C_BARRIER ", offset=glc.offset, coff=cpl.offset)
+            self.prttime(" CPL:G2C ", offset=glc.offset, coff=cpl.offset)
+            self.prttime(" CPL:GLCPOST_BARRIER ")
+            self.prttime(" CPL:GLCPOST ")
+            self.prttime(" CPL:A2C_BARRIER ", offset=atm.offset, coff=cpl.offset)
+            self.prttime(" CPL:A2C ", offset=atm.offset, coff=cpl.offset)
+            self.prttime(" CPL:ATMPOST_BARRIER ")
+            self.prttime(" CPL:ATMPOST ")
+            self.prttime(" CPL:BUDGET2_BARRIER ")
+            self.prttime(" CPL:BUDGET2 ")
+            self.prttime(" CPL:BUDGET3_BARRIER ")
+            self.prttime(" CPL:BUDGET3 ")
+            self.prttime(" CPL:BUDGETF_BARRIER ")
+            self.prttime(" CPL:BUDGETF ")
+            self.prttime(
+                " CPL:O2C_BARRIER ", offset=ocn.offset, div=odays, coff=cpl.offset
+            )
+            self.prttime(" CPL:O2C ", offset=ocn.offset, div=odays, coff=cpl.offset)
+            self.prttime(" CPL:OCNPOST_BARRIER ")
+            self.prttime(" CPL:OCNPOST ")
+            self.prttime(" CPL:RESTART_BARRIER ")
+            self.prttime(" CPL:RESTART")
+            self.prttime(" CPL:HISTORY_BARRIER ")
+            self.prttime(" CPL:HISTORY ")
+            self.prttime(" CPL:TSTAMP_WRITE ")
+            self.prttime(" CPL:TPROF_WRITE ")
+            self.prttime(" CPL:RUN_LOOP_BSTOP ")
+
+            self.write("\n\n")
+            self.write("More info on coupler timing:\n")
+
+            self.write("\n")
+            self.prttime(" CPL:OCNPRE1 ")
+            self.prttime(" CPL:ocnpre1_atm2ocn ")
+
+            self.write("\n")
+            self.prttime(" CPL:OCNPREP ")
+            self.prttime(" CPL:OCNPRE2 ")
+            self.prttime(" CPL:ocnprep_avg ")
+            self.prttime(" CPL:ocnprep_diagav ")
+
+            self.write("\n")
+            self.prttime(" CPL:LNDPREP ")
+            self.prttime(" CPL:lndprep_atm2lnd ")
+            self.prttime(" CPL:lndprep_mrgx2l ")
+            self.prttime(" CPL:lndprep_diagav ")
+
+            self.write("\n")
+            self.prttime(" CPL:ICEPREP ")
+            self.prttime(" CPL:iceprep_ocn2ice ")
+            self.prttime(" CPL:iceprep_atm2ice ")
+            self.prttime(" CPL:iceprep_mrgx2i ")
+            self.prttime(" CPL:iceprep_diagav ")
+
+            self.write("\n")
+            self.prttime(" CPL:WAVPREP ")
+            self.prttime(" CPL:wavprep_atm2wav ")
+            self.prttime(" CPL:wavprep_ocn2wav ")
+            self.prttime(" CPL:wavprep_ice2wav ")
+            self.prttime(" CPL:wavprep_mrgx2w ")
+            self.prttime(" CPL:wavprep_diagav ")
+
+            self.write("\n")
+            self.prttime(" CPL:ROFPREP ")
+            self.prttime(" CPL:rofprep_l2xavg ")
+            self.prttime(" CPL:rofprep_lnd2rof ")
+            self.prttime(" CPL:rofprep_mrgx2r ")
+            self.prttime(" CPL:rofprep_diagav ")
+
+            self.write("\n")
+            self.prttime(" CPL:GLCPREP ")
+            self.prttime(" CPL:glcprep_avg ")
+            self.prttime(" CPL:glcprep_lnd2glc ")
+            self.prttime(" CPL:glcprep_mrgx2g ")
+            self.prttime(" CPL:glcprep_diagav ")
+
+            self.write("\n")
+            self.prttime(" CPL:ATMPREP ")
+            self.prttime(" CPL:atmprep_xao2atm ")
+            self.prttime(" CPL:atmprep_ocn2atm ")
+            self.prttime(" CPL:atmprep_alb2atm ")
+            self.prttime(" CPL:atmprep_ice2atm ")
+            self.prttime(" CPL:atmprep_lnd2atm ")
+            self.prttime(" CPL:atmprep_mrgx2a ")
+            self.prttime(" CPL:atmprep_diagav ")
+
+            self.write("\n")
+            self.prttime(" CPL:ATMOCNP ")
+            self.prttime(" CPL:ATMOCN1 ")
+            self.prttime(" CPL:ATMOCN2 ")
+            self.prttime(" CPL:atmocnp_ice2ocn ")
+            self.prttime(" CPL:atmocnp_wav2ocn ")
+            self.prttime(" CPL:atmocnp_fluxo ")
+            self.prttime(" CPL:atmocnp_fluxe ")
+            self.prttime(" CPL:atmocnp_mrgx2o ")
+            self.prttime(" CPL:atmocnp_accum ")
+            self.prttime(" CPL:atmocnp_ocnalb ")
+
+            self.write("\n")
+            self.prttime(" CPL:ATMOCNQ ")
+            self.prttime(" CPL:atmocnq_ocn2atm ")
+            self.prttime(" CPL:atmocnq_fluxa ")
+            self.prttime(" CPL:atmocnq_atm2ocnf ")
+
+            self.write("\n")
+            self.prttime(" CPL:OCNPOSTT ")
+            self.prttime(" CPL:OCNPOST ")
+            self.prttime(" CPL:ocnpost_diagav ")
+
+            self.write("\n")
+            self.prttime(" CPL:LNDPOST ")
+            self.prttime(" CPL:lndpost_diagav ")
+            self.prttime(" CPL:lndpost_acc2lr ")
+            self.prttime(" CPL:lndpost_acc2lg ")
+
+            self.write("\n")
+            self.prttime(" CPL:ROFPOST ")
+            self.prttime(" CPL:rofpost_diagav ")
+            self.prttime(" CPL:rofpost_histaux ")
+            self.prttime(" CPL:rofpost_rof2lnd ")
+            self.prttime(" CPL:rofpost_rof2ice ")
+            self.prttime(" CPL:rofpost_rof2ocn ")
+
+            self.write("\n")
+            self.prttime(" CPL:ICEPOST ")
+            self.prttime(" CPL:icepost_diagav ")
+
+            self.write("\n")
+            self.prttime(" CPL:WAVPOST ")
+            self.prttime(" CPL:wavpost_diagav ")
+
+            self.write("\n")
+            self.prttime(" CPL:GLCPOST ")
+            self.prttime(" CPL:glcpost_diagav ")
+            self.prttime(" CPL:glcpost_glc2lnd ")
+            self.prttime(" CPL:glcpost_glc2ice ")
+            self.prttime(" CPL:glcpost_glc2ocn ")
+
+            self.write("\n")
+            self.prttime(" CPL:ATMPOST ")
+            self.prttime(" CPL:atmpost_diagav ")
+
+            self.write("\n")
+            self.prttime(" CPL:BUDGET ")
+            self.prttime(" CPL:BUDGET1 ")
+            self.prttime(" CPL:BUDGET2 ")
+            self.prttime(" CPL:BUDGET3 ")
+            self.prttime(" CPL:BUDGETF ")
+            self.write("\n\n")
+
+        self.fout.close()
+
+
+
+[docs]
+def get_timing(case, lid):
+    parser = _TimingParser(case, lid)
+    parser.getTiming()
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/hist_utils.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/hist_utils.html new file mode 100644 index 00000000000..ba8535f3b72 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/hist_utils.html @@ -0,0 +1,916 @@ + + + + + + CIME.hist_utils — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.hist_utils

+"""
+Functions for actions pertaining to history files.
+"""
+from CIME.XML.standard_module_setup import *
+from CIME.config import Config
+from CIME.test_status import TEST_NO_BASELINES_COMMENT, TEST_STATUS_FILENAME
+from CIME.utils import (
+    get_current_commit,
+    get_timestamp,
+    safe_copy,
+    SharedArea,
+    parse_test_name,
+)
+
+import logging, os, re, filecmp
+
+logger = logging.getLogger(__name__)
+
+BLESS_LOG_NAME = "bless_log"
+
+# ------------------------------------------------------------------------
+# Strings used in the comments generated by cprnc
+# ------------------------------------------------------------------------
+
+CPRNC_FIELDLISTS_DIFFER = "files differ only in their field lists"
+
+# ------------------------------------------------------------------------
+# Strings used in the comments generated by _compare_hists
+# ------------------------------------------------------------------------
+
+NO_COMPARE = "had no compare counterpart"
+NO_ORIGINAL = "had no original counterpart"
+FIELDLISTS_DIFFER = "had a different field list from"
+DIFF_COMMENT = "did NOT match"
+# COMPARISON_COMMENT_OPTIONS should include all of the above: these are any of the special
+# comment strings that describe the reason for a comparison failure
+COMPARISON_COMMENT_OPTIONS = set(
+    [NO_COMPARE, NO_ORIGINAL, FIELDLISTS_DIFFER, DIFF_COMMENT]
+)
+# Comments that indicate a true baseline comparison failure
+COMPARISON_FAILURE_COMMENT_OPTIONS = COMPARISON_COMMENT_OPTIONS - set(
+    [NO_COMPARE, FIELDLISTS_DIFFER]
+)
+
+NO_HIST_TESTS = ["IRT", "PFS", "TSC"]
+
+
+def _iter_model_file_substrs(case):
+    models = case.get_compset_components()
+    models.append("cpl")
+    for model in models:
+        yield model
+
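`_iter_model_file_substrs` simply yields every compset component and then the coupler. A standalone sketch of the same logic (the real function takes a case object; the component list here is hypothetical):

```python
def iter_model_file_substrs(components):
    # mirror of _iter_model_file_substrs: each component, then the coupler
    for model in components + ["cpl"]:
        yield model

models = list(iter_model_file_substrs(["cam", "clm", "cice"]))
```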
+
+
+[docs]
+def copy_histfiles(case, suffix, match_suffix=None):
+    """Copy the most recent batch of hist files in a case, adding the given suffix.
+
+    This can allow you to temporarily "save" these files so they won't be blown
+    away if you re-run the case.
+
+    case - The case containing the files you want to save
+    suffix - The string suffix you want to add to saved files, this can be used to find them later.
+
+    returns (comments, num_copied)
+    """
+    rundir = case.get_value("RUNDIR")
+    ref_case = case.get_value("RUN_REFCASE")
+    casename = case.get_value("CASE")
+    # Loop over models
+    archive = case.get_env("archive")
+    comments = "Copying hist files to suffix '{}'\n".format(suffix)
+    num_copied = 0
+    for model in _iter_model_file_substrs(case):
+        if case.get_value("TEST") and archive.exclude_testing(model):
+            logger.info(
+                "Case is a test and component %r is excluded from comparison", model
+            )
+
+            continue
+        comments += "  Copying hist files for model '{}'\n".format(model)
+        test_hists = archive.get_latest_hist_files(
+            casename, model, rundir, suffix=match_suffix, ref_case=ref_case
+        )
+        num_copied += len(test_hists)
+        for test_hist in test_hists:
+            test_hist = os.path.join(rundir, test_hist)
+            if not test_hist.endswith(".nc") or "once" in os.path.basename(test_hist):
+                logger.info("Will not compare non-netcdf file {}".format(test_hist))
+                continue
+            new_file = "{}.{}".format(test_hist, suffix)
+            if os.path.exists(new_file):
+                os.remove(new_file)
+
+            comments += "    Copying '{}' to '{}'\n".format(test_hist, new_file)
+
+            # Need to copy rather than move in case there are some history files
+            # that will need to continue to be filled on the next phase; this
+            # can be the case for a restart run.
+            #
+            # (If it weren't for that possibility, a move/rename would be more
+            # robust here: The problem with a copy is that there can be
+            # confusion after the second run as to which files were created by
+            # the first run and which by the second. For example, if the second
+            # run fails to output any history files, the test will still pass,
+            # because the test system will think that run1's files were output
+            # by run2. But we live with that downside for the sake of the reason
+            # noted above.)
+            safe_copy(test_hist, new_file)
+
+    expect(
+        num_copied > 0,
+        "copy_histfiles failed: no hist files found in rundir '{}'".format(rundir),
+    )
+
+    return comments, num_copied
+ + + +
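`copy_histfiles` tags each saved file by appending the suffix to the full history file name, removing any stale copy first. A sketch of the naming scheme (the case name and path here are made up):

```python
import os

test_hist = "/scratch/run/mycase.cpl.hi.0001-01-02-00000.nc"  # hypothetical path
suffix = "base"

# same naming scheme as copy_histfiles: "<hist file>.<suffix>"
new_file = "{}.{}".format(test_hist, suffix)
```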
+[docs]
+def rename_all_hist_files(case, suffix):
+    """Rename all hist files in a case, adding the given suffix.
+
+    case - The case containing the files you want to save
+    suffix - The string suffix you want to add to saved files, this can be used to find them later.
+    """
+    rundir = case.get_value("RUNDIR")
+    ref_case = case.get_value("RUN_REFCASE")
+    # Loop over models
+    archive = case.get_env("archive")
+    comments = "Renaming hist files by adding suffix '{}'\n".format(suffix)
+    num_renamed = 0
+    for model in _iter_model_file_substrs(case):
+        comments += "  Renaming hist files for model '{}'\n".format(model)
+
+        if model == "cpl":
+            mname = "drv"
+        else:
+            mname = model
+        test_hists = archive.get_all_hist_files(
+            case.get_value("CASE"), mname, rundir, ref_case=ref_case
+        )
+        num_renamed += len(test_hists)
+        for test_hist in test_hists:
+            test_hist = os.path.join(rundir, test_hist)
+            new_file = "{}.{}".format(test_hist, suffix)
+            if os.path.exists(new_file):
+                os.remove(new_file)
+
+            comments += "    Renaming '{}' to '{}'\n".format(test_hist, new_file)
+
+            os.rename(test_hist, new_file)
+
+    expect(
+        num_renamed > 0,
+        "renaming failed: no hist files found in rundir '{}'".format(rundir),
+    )
+
+    return comments
+
+
+def _hists_match(model, hists1, hists2, suffix1="", suffix2=""):
+    """
+    return (num in set 1 but not 2 , num in set 2 but not 1, matchups)
+
+    >>> hists1 = ['FOO.G.cpl.h1.nc', 'FOO.G.cpl.h2.nc', 'FOO.G.cpl.h3.nc']
+    >>> hists2 = ['cpl.h2.nc', 'cpl.h3.nc', 'cpl.h4.nc']
+    >>> _hists_match('cpl', hists1, hists2)
+    (['FOO.G.cpl.h1.nc'], ['cpl.h4.nc'], [('FOO.G.cpl.h2.nc', 'cpl.h2.nc'), ('FOO.G.cpl.h3.nc', 'cpl.h3.nc')])
+    >>> hists1 = ['FOO.G.cpl.h1.nc.SUF1', 'FOO.G.cpl.h2.nc.SUF1', 'FOO.G.cpl.h3.nc.SUF1']
+    >>> hists2 = ['cpl.h2.nc.SUF2', 'cpl.h3.nc.SUF2', 'cpl.h4.nc.SUF2']
+    >>> _hists_match('cpl', hists1, hists2, 'SUF1', 'SUF2')
+    (['FOO.G.cpl.h1.nc.SUF1'], ['cpl.h4.nc.SUF2'], [('FOO.G.cpl.h2.nc.SUF1', 'cpl.h2.nc.SUF2'), ('FOO.G.cpl.h3.nc.SUF1', 'cpl.h3.nc.SUF2')])
+    >>> hists1 = ['cam.h0.1850-01-08-00000.nc']
+    >>> hists2 = ['cam_0001.h0.1850-01-08-00000.nc','cam_0002.h0.1850-01-08-00000.nc']
+    >>> _hists_match('cam', hists1, hists2, '', '')
+    ([], [], [('cam.h0.1850-01-08-00000.nc', 'cam_0001.h0.1850-01-08-00000.nc'), ('cam.h0.1850-01-08-00000.nc', 'cam_0002.h0.1850-01-08-00000.nc')])
+    >>> hists1 = ['cam_0001.h0.1850-01-08-00000.nc.base','cam_0002.h0.1850-01-08-00000.nc.base']
+    >>> hists2 = ['cam_0001.h0.1850-01-08-00000.nc.rest','cam_0002.h0.1850-01-08-00000.nc.rest']
+    >>> _hists_match('cam', hists1, hists2, 'base', 'rest')
+    ([], [], [('cam_0001.h0.1850-01-08-00000.nc.base', 'cam_0001.h0.1850-01-08-00000.nc.rest'), ('cam_0002.h0.1850-01-08-00000.nc.base', 'cam_0002.h0.1850-01-08-00000.nc.rest')])
+    """
+    normalized1, normalized2 = [], []
+    multi_normalized1, multi_normalized2 = [], []
+    multiinst = False
+
+    if model == "ww3dev":
+        model = "ww3"
+
+    for hists, suffix, normalized, multi_normalized in [
+        (hists1, suffix1, normalized1, multi_normalized1),
+        (hists2, suffix2, normalized2, multi_normalized2),
+    ]:
+        for hist in hists:
+            hist_basename = os.path.basename(hist)
+            offset = hist_basename.rfind(model)
+            expect(
+                offset >= 0,
+                "ERROR: cant find model name {} in {}".format(model, hist_basename),
+            )
+            normalized_name = os.path.basename(hist_basename[offset:])
+            if suffix != "":
+                expect(
+                    normalized_name.endswith(suffix),
+                    "How did '{}' not have suffix '{}'".format(hist, suffix),
+                )
+                normalized_name = normalized_name[
+                    : len(normalized_name) - len(suffix) - 1
+                ]
+
+            m = re.search("(.+)_[0-9]{4}(.+.nc)", normalized_name)
+            if m is not None:
+                multiinst = True
+                multi_normalized.append(m.group(1) + m.group(2))
+
+            normalized.append(normalized_name)
+
+    set_of_1_not_2 = set(normalized1) - set(normalized2)
+    set_of_2_not_1 = set(normalized2) - set(normalized1)
+
+    one_not_two = sorted([hists1[normalized1.index(item)] for item in set_of_1_not_2])
+    two_not_one = sorted([hists2[normalized2.index(item)] for item in set_of_2_not_1])
+
+    both = set(normalized1) & set(normalized2)
+
+    match_ups = sorted(
+        [
+            (hists1[normalized1.index(item)], hists2[normalized2.index(item)])
+            for item in both
+        ]
+    )
+
+    # Special case - comparing multiinstance to single instance files
+
+    if multi_normalized1 != multi_normalized2:
+        # in this case hists1 contains multiinstance hists2 does not
+        if set(multi_normalized1) == set(normalized2):
+            for idx, norm_hist1 in enumerate(multi_normalized1):
+                for idx1, hist2 in enumerate(hists2):
+                    norm_hist2 = normalized2[idx1]
+                    if norm_hist1 == norm_hist2:
+                        match_ups.append((hists1[idx], hist2))
+                        if hist2 in two_not_one:
+                            two_not_one.remove(hist2)
+                        if hists1[idx] in one_not_two:
+                            one_not_two.remove(hists1[idx])
+        # in this case hists2 contains multiinstance hists1 does not
+        if set(multi_normalized2) == set(normalized1):
+            for idx, norm_hist2 in enumerate(multi_normalized2):
+                for idx1, hist1 in enumerate(hists1):
+                    norm_hist1 = normalized1[idx1]
+                    if norm_hist2 == norm_hist1:
+                        match_ups.append((hist1, hists2[idx]))
+                        if hist1 in one_not_two:
+                            one_not_two.remove(hist1)
+                        if hists2[idx] in two_not_one:
+                            two_not_one.remove(hists2[idx])
+
+    if not multiinst:
+        expect(
+            len(match_ups) + len(set_of_1_not_2) == len(hists1), "Programming error1"
+        )
+        expect(
+            len(match_ups) + len(set_of_2_not_1) == len(hists2), "Programming error2"
+        )
+
+    return one_not_two, two_not_one, match_ups
+
+
+def _compare_hists(
+    case,
+    from_dir1,
+    from_dir2,
+    suffix1="",
+    suffix2="",
+    outfile_suffix="",
+    ignore_fieldlist_diffs=False,
+):
+    """
+    Compares two sets of history files
+
+    Returns (success (True if all matched), comments, num_compared)
+    """
+    if from_dir1 == from_dir2:
+        expect(suffix1 != suffix2, "Comparing files to themselves?")
+
+    casename = case.get_value("CASE")
+    testcase = case.get_value("TESTCASE")
+    casedir = case.get_value("CASEROOT")
+    all_success = True
+    num_compared = 0
+    comments = "Comparing hists for case '{}' dir1='{}', suffix1='{}', dir2='{}' suffix2='{}'\n".format(
+        casename, from_dir1, suffix1, from_dir2, suffix2
+    )
+    multiinst_driver_compare = False
+    archive = case.get_env("archive")
+    ref_case = case.get_value("RUN_REFCASE")
+    for model in _iter_model_file_substrs(case):
+        if case.get_value("TEST") and archive.exclude_testing(model):
+            logger.info(
+                "Case is a test and component %r is excluded from comparison", model
+            )
+
+            continue
+        if model == "cpl" and suffix2 == "multiinst":
+            multiinst_driver_compare = True
+        comments += "  comparing model '{}'\n".format(model)
+        hists1 = archive.get_latest_hist_files(
+            casename, model, from_dir1, suffix=suffix1, ref_case=ref_case
+        )
+        hists2 = archive.get_latest_hist_files(
+            casename, model, from_dir2, suffix=suffix2, ref_case=ref_case
+        )
+
+        if len(hists1) == 0 and len(hists2) == 0:
+            comments += "    no hist files found for model {}\n".format(model)
+            continue
+
+        one_not_two, two_not_one, match_ups = _hists_match(
+            model, hists1, hists2, suffix1, suffix2
+        )
+        for item in one_not_two:
+            if "initial" in item:
+                continue
+            comments += "    File '{}' {} in '{}' with suffix '{}'\n".format(
+                item, NO_COMPARE, from_dir2, suffix2
+            )
+            all_success = False
+
+        for item in two_not_one:
+            if "initial" in item:
+                continue
+            comments += "    File '{}' {} in '{}' with suffix '{}'\n".format(
+                item, NO_ORIGINAL, from_dir1, suffix1
+            )
+            all_success = False
+
+        num_compared += len(match_ups)
+
+        for hist1, hist2 in match_ups:
+            if ".nc" not in hist1:
+                logger.info("Ignoring non-netcdf file {}".format(hist1))
+                continue
+            try:
+                success, cprnc_log_file, cprnc_comment = cprnc(
+                    model,
+                    os.path.join(from_dir1, hist1),
+                    os.path.join(from_dir2, hist2),
+                    case,
+                    from_dir1,
+                    multiinst_driver_compare=multiinst_driver_compare,
+                    outfile_suffix=outfile_suffix,
+                    ignore_fieldlist_diffs=ignore_fieldlist_diffs,
+                )
+            except Exception:
+                cprnc_comment = "CPRNC executable not found"
+                cprnc_log_file = None
+                success = False
+
+            if success:
+                comments += "    {} matched {}\n".format(hist1, hist2)
+            else:
+                if not cprnc_log_file:
+                    comments += cprnc_comment
+                    all_success = False
+                    return all_success, comments, 0
+                elif cprnc_comment == CPRNC_FIELDLISTS_DIFFER:
+                    comments += "    {} {} {}\n".format(hist1, FIELDLISTS_DIFFER, hist2)
+                else:
+                    comments += "    {} {} {}\n".format(hist1, DIFF_COMMENT, hist2)
+                    comments += "    cat " + cprnc_log_file + "\n"
+                    expected_log_file = os.path.join(
+                        casedir, os.path.basename(cprnc_log_file)
+                    )
+                    if not (
+                        os.path.exists(expected_log_file)
+                        and filecmp.cmp(cprnc_log_file, expected_log_file)
+                    ):
+                        try:
+                            safe_copy(cprnc_log_file, casedir)
+                        except (OSError, IOError) as _:
+                            logger.warning(
+                                "Could not copy {} to {}".format(cprnc_log_file, casedir)
+                            )
+
+                all_success = False
+
+    # Some tests don't save history files.
+    if num_compared == 0 and testcase not in NO_HIST_TESTS:
+        all_success = False
+        comments += "Did not compare any hist files! Missing baselines?\n"
+
+    comments += "PASS" if all_success else "FAIL"
+
+    return all_success, comments, num_compared
+
+
+
[docs]
+def compare_test(case, suffix1, suffix2, ignore_fieldlist_diffs=False):
+    """
+    Compares two sets of component history files in the testcase directory
+
+    case - The case containing the hist files to compare
+    suffix1 - The suffix that identifies the first batch of hist files
+    suffix2 - The suffix that identifies the second batch of hist files
+    ignore_fieldlist_diffs (bool): If True, then: If the two cases differ only in their
+        field lists (i.e., all shared fields are bit-for-bit, but one case has some
+        diagnostic fields that are missing from the other case), treat the two cases as
+        identical.
+
+    returns (SUCCESS, comments, num_compared)
+    """
+    rundir = case.get_value("RUNDIR")
+
+    return _compare_hists(
+        case,
+        rundir,
+        rundir,
+        suffix1,
+        suffix2,
+        ignore_fieldlist_diffs=ignore_fieldlist_diffs,
+    )
+ + + +
+[docs] +def cprnc( + model, + file1, + file2, + case, + rundir, + multiinst_driver_compare=False, + outfile_suffix="", + ignore_fieldlist_diffs=False, + cprnc_exe=None, +): + """ + Run cprnc to compare two individual nc files + + file1 - the full or relative path of the first file + file2 - the full or relative path of the second file + case - the case containing the files + rundir - the rundir for the case + outfile_suffix - if non-blank, then the output file name ends with this + suffix (with a '.' added before the given suffix). + Use None to avoid permissions issues in the case dir. + ignore_fieldlist_diffs (bool): If True, then: If the two cases differ only in their + field lists (i.e., all shared fields are bit-for-bit, but one case has some + diagnostic fields that are missing from the other case), treat the two cases as + identical. + + returns (True if the files matched, log_name, comment) + where 'comment' is either an empty string or one of the module-level constants + beginning with CPRNC_ (e.g., CPRNC_FIELDLISTS_DIFFER) + """ + if not cprnc_exe: + cprnc_exe = case.get_value("CCSM_CPRNC") + expect( + os.path.isfile(cprnc_exe) and os.access(cprnc_exe, os.X_OK), + f"cprnc {cprnc_exe} does not exist or is not executable", + ) + + basename = os.path.basename(file1) + multiinst_regex = re.compile(r".*%s[^_]*(_[0-9]{4})[.]h.?[.][^.]+?[.]nc" % model) + mstr = "" + mstr1 = "" + mstr2 = "" + # If one is a multiinstance file but the other is not add an instance string + m1 = multiinst_regex.match(file1) + m2 = multiinst_regex.match(file2) + if m1 is not None: + mstr1 = m1.group(1) + if m2 is not None: + mstr2 = m2.group(1) + if mstr1 != mstr2: + mstr = mstr1 + mstr2 + + output_filename = os.path.join(rundir, "{}{}.cprnc.out".format(basename, mstr)) + if outfile_suffix: + output_filename += ".{}".format(outfile_suffix) + + if outfile_suffix is None: + cpr_stat, out, _ = run_cmd( + "{} -m {} {}".format(cprnc_exe, file1, file2), combine_output=True + ) + else: + # 
Remove existing output file if it exists + if os.path.exists(output_filename): + os.remove(output_filename) + + cpr_stat = run_cmd( + "{} -m {} {}".format(cprnc_exe, file1, file2), + combine_output=True, + arg_stdout=output_filename, + )[0] + with open(output_filename, "r", encoding="utf-8") as fd: + out = fd.read() + + comment = "" + if cpr_stat == 0: + # Successful exit from cprnc + if multiinst_driver_compare: + # In a multiinstance test the cpl hist file will have a different number of + # dimensions and so cprnc will indicate that the files seem to be DIFFERENT + # in this case we only want to check that the fields we are able to compare + # have no differences. + files_match = " 0 had non-zero differences" in out + else: + if "the two files seem to be DIFFERENT" in out: + files_match = False + elif "the two files DIFFER only in their field lists" in out: + if ignore_fieldlist_diffs: + files_match = True + else: + files_match = False + comment = CPRNC_FIELDLISTS_DIFFER + elif "files seem to be IDENTICAL" in out: + files_match = True + else: + expect( + False, + "Did not find an expected summary string in cprnc output:\n{}".format( + out + ), + ) + else: + # If there is an error in cprnc, we do the safe thing of saying the comparison failed + files_match = False + + return (files_match, output_filename, comment)
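The multi-instance detection above hinges on one regular expression over history filenames. A minimal, self-contained sketch of that pattern (the filenames below are hypothetical examples, not taken from a real case):

```python
import re


def instance_suffix(model, filename):
    """Return the _NNNN instance suffix of a history filename, or "" if absent.

    Mirrors the multiinst_regex built in cprnc(): model name, optional
    instance number, history stream (h, h0, hi, ...), timestamp, ".nc".
    """
    pattern = re.compile(r".*%s[^_]*(_[0-9]{4})[.]h.?[.][^.]+?[.]nc" % model)
    m = pattern.match(filename)
    return m.group(1) if m else ""


# Hypothetical example filenames:
print(instance_suffix("cpl", "mycase.cpl_0001.hi.2000-01-01-00000.nc"))  # _0001
print(instance_suffix("cpl", "mycase.cpl.hi.2000-01-01-00000.nc"))       # "" (single-instance)
```

When only one of the two files carries an instance suffix, cprnc() appends the non-empty suffix to the output filename so comparisons of mixed single/multi-instance runs do not clobber each other's logs.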
+ + + +
+[docs] +def compare_baseline(case, baseline_dir=None, outfile_suffix=""): + """ + compare the current test output to a baseline result + + case - The case containing the hist files to be compared against baselines + baseline_dir - Optionally, specify a specific baseline dir, otherwise it will be computed from case config + outfile_suffix - if non-blank, then the cprnc output file name ends with + this suffix (with a '.' added before the given suffix). if None, no output file saved. + + returns (SUCCESS, comments) + SUCCESS means all hist files matched their corresponding baseline + """ + rundir = case.get_value("RUNDIR") + if baseline_dir is None: + baselineroot = case.get_value("BASELINE_ROOT") + basecmp_dir = os.path.join(baselineroot, case.get_value("BASECMP_CASE")) + dirs_to_check = (baselineroot, basecmp_dir) + else: + basecmp_dir = baseline_dir + dirs_to_check = (basecmp_dir,) + + for bdir in dirs_to_check: + if not os.path.isdir(bdir): + return False, "ERROR {} baseline directory '{}' does not exist".format( + TEST_NO_BASELINES_COMMENT, bdir + ) + + success, comments, _ = _compare_hists( + case, rundir, basecmp_dir, outfile_suffix=outfile_suffix + ) + if Config.instance().create_bless_log: + bless_log = os.path.join(basecmp_dir, BLESS_LOG_NAME) + if os.path.exists(bless_log): + lines = open(bless_log, "r", encoding="utf-8").readlines() + if lines: + last_line = lines[-1] + comments += "\n Most recent bless: {}".format(last_line) + + return success, comments
+ + + +
+
[docs]
+def generate_teststatus(testdir, baseline_dir):
+    """
+    CESM stores its TestStatus file in baselines. Do not let exceptions
+    escape from this function.
+    """
+    try:
+        with SharedArea():
+            if not os.path.isdir(baseline_dir):
+                os.makedirs(baseline_dir)
+
+            safe_copy(
+                os.path.join(testdir, TEST_STATUS_FILENAME),
+                baseline_dir,
+                preserve_meta=False,
+            )
+    except Exception as e:
+        logger.warning(
+            "Could not copy {} to baselines, {}".format(
+                os.path.join(testdir, TEST_STATUS_FILENAME), str(e)
+            )
+        )
+
+
+
+def _generate_baseline_impl(case, baseline_dir=None, allow_baseline_overwrite=False):
+    """
+    Copy the current test output to the baseline result
+
+    case - The case containing the hist files to be copied into baselines
+    baseline_dir - Optionally, specify a specific baseline dir, otherwise it will be computed from case config
+    allow_baseline_overwrite must be true to generate baselines to an existing directory.
+
+    returns (SUCCESS, comments)
+    """
+    rundir = case.get_value("RUNDIR")
+    ref_case = case.get_value("RUN_REFCASE")
+    if baseline_dir is None:
+        baselineroot = case.get_value("BASELINE_ROOT")
+        basegen_dir = os.path.join(baselineroot, case.get_value("BASEGEN_CASE"))
+    else:
+        basegen_dir = baseline_dir
+    testcase = case.get_value("CASE")
+    archive = case.get_env("archive")
+
+    if not os.path.isdir(basegen_dir):
+        os.makedirs(basegen_dir)
+
+    if (
+        os.path.isdir(os.path.join(basegen_dir, testcase))
+        and not allow_baseline_overwrite
+    ):
+        expect(False, " Cowardly refusing to overwrite existing baseline directory")
+
+    comments = "Generating baselines into '{}'\n".format(basegen_dir)
+    num_gen = 0
+    for model in _iter_model_file_substrs(case):
+
+        comments += " generating for model '{}'\n".format(model)
+
+        hists = archive.get_latest_hist_files(
+            testcase, model, rundir, ref_case=ref_case
+        )
+        logger.debug("latest_files: {}".format(hists))
+        num_gen += len(hists)
+
+        if model == "ww3dev":
+            model = "ww3"
+
+        for hist in hists:
+            offset = hist.rfind(model)
+            expect(
+                offset >= 0, "ERROR: can't find model name {} in {}".format(model, hist)
+            )
+            baseline = os.path.join(basegen_dir, hist[offset:])
+            if os.path.exists(baseline):
+                os.remove(baseline)
+
+            safe_copy(os.path.join(rundir, hist), baseline, preserve_meta=False)
+            comments += " generating baseline '{}' from file {}\n".format(
+                baseline, hist
+            )
+
+    # copy latest cpl log to baseline
+    # drop the date so that the name is generic
+    if case.get_value("COMP_INTERFACE") == "nuopc":
+        cplname = 
"med" + else: + cplname = "cpl" + + newestcpllogfile = case.get_latest_cpl_log( + coupler_log_path=case.get_value("RUNDIR"), cplname=cplname + ) + if newestcpllogfile is None: + logger.warning( + "No {}.log file found in directory {}".format( + cplname, case.get_value("RUNDIR") + ) + ) + else: + safe_copy( + newestcpllogfile, + os.path.join(basegen_dir, "{}.log.gz".format(cplname)), + preserve_meta=False, + ) + + testname = case.get_value("TESTCASE") + testopts = parse_test_name(case.get_value("CASEBASEID"))[1] + testopts = [] if testopts is None else testopts + expect( + num_gen > 0 or (testname in NO_HIST_TESTS or "B" in testopts), + "Could not generate any hist files for case '{}', something is seriously wrong".format( + os.path.join(rundir, testcase) + ), + ) + + if Config.instance().create_bless_log: + bless_log = os.path.join(basegen_dir, BLESS_LOG_NAME) + with open(bless_log, "a", encoding="utf-8") as fd: + fd.write( + "sha:{} date:{}\n".format( + get_current_commit(repo=case.get_value("SRCROOT")), + get_timestamp(timestamp_format="%Y-%m-%d_%H:%M:%S"), + ) + ) + + return True, comments + + +
+[docs] +def generate_baseline(case, baseline_dir=None, allow_baseline_overwrite=False): + with SharedArea(): + return _generate_baseline_impl( + case, + baseline_dir=baseline_dir, + allow_baseline_overwrite=allow_baseline_overwrite, + )
+ + + +
+[docs] +def get_ts_synopsis(comments): + r""" + Reduce case diff comments down to a single line synopsis so that we can put + something in the TestStatus file. It's expected that the comments provided + to this function came from compare_baseline, not compare_tests. + + >>> get_ts_synopsis('') + '' + >>> get_ts_synopsis('big error') + 'big error' + >>> get_ts_synopsis('big error\n') + 'big error' + >>> get_ts_synopsis('stuff\n File foo had a different field list from bar with suffix baz\nPass\n') + 'FIELDLIST field lists differ (otherwise bit-for-bit)' + >>> get_ts_synopsis('stuff\n File foo had no compare counterpart in bar with suffix baz\nPass\n') + 'ERROR BFAIL some baseline files were missing' + >>> get_ts_synopsis('stuff\n File foo had a different field list from bar with suffix baz\n File foo had no compare counterpart in bar with suffix baz\nPass\n') + 'MULTIPLE ISSUES: field lists differ and some baseline files were missing' + >>> get_ts_synopsis('stuff\n File foo did NOT match bar with suffix baz\nPass\n') + 'DIFF' + >>> get_ts_synopsis('stuff\n File foo did NOT match bar with suffix baz\n File foo had a different field list from bar with suffix baz\nPass\n') + 'DIFF' + >>> get_ts_synopsis('stuff\n File foo did NOT match bar with suffix baz\n File foo had no compare counterpart in bar with suffix baz\nPass\n') + 'DIFF' + >>> get_ts_synopsis('File foo had no compare counterpart in bar with suffix baz\n File foo had no original counterpart in bar with suffix baz\n') + 'DIFF' + """ + if not comments: + return "" + elif "\n" not in comments.strip(): + return comments.strip() + else: + has_fieldlist_differences = False + has_bfails = False + has_real_fails = False + for line in comments.splitlines(): + if FIELDLISTS_DIFFER in line: + has_fieldlist_differences = True + if NO_COMPARE in line: + has_bfails = True + for comparison_failure_comment in COMPARISON_FAILURE_COMMENT_OPTIONS: + if comparison_failure_comment in line: + has_real_fails = True + + if 
has_real_fails: + # If there are any real differences, we just report that: we assume that the + # user cares much more about those real differences than fieldlist or bfail + # issues, and we don't want to complicate the matter by trying to report all + # issues in this case. + return "DIFF" + else: + if has_fieldlist_differences and has_bfails: + # It's not clear which of these (if either) the user would care more + # about, so we report both. We deliberately avoid printing the keywords + # 'FIELDLIST' or TEST_NO_BASELINES_COMMENT (i.e., 'BFAIL'): if we printed + # those, then (e.g.) a 'grep -v FIELDLIST' (which the user might do if + # (s)he was expecting fieldlist differences) would also filter out this + # line, which we don't want. + return "MULTIPLE ISSUES: field lists differ and some baseline files were missing" + elif has_fieldlist_differences: + return "FIELDLIST field lists differ (otherwise bit-for-bit)" + elif has_bfails: + return "ERROR {} some baseline files were missing".format( + TEST_NO_BASELINES_COMMENT + ) + else: + return ""
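The precedence described in the comments above (real diffs trump everything; fieldlist and missing-baseline issues are only reported when no real diff exists) reduces to a small decision table. A self-contained sketch of that triage, with the module-level constants inlined as literal strings for illustration:

```python
def synopsis(has_real_fails, has_fieldlist_differences, has_bfails):
    """Mirror of get_ts_synopsis' decision order: real diffs win outright."""
    if has_real_fails:
        # Any genuine data difference masks the other issue categories
        return "DIFF"
    if has_fieldlist_differences and has_bfails:
        # Deliberately avoids the FIELDLIST/BFAIL keywords (grep-ability)
        return "MULTIPLE ISSUES: field lists differ and some baseline files were missing"
    if has_fieldlist_differences:
        return "FIELDLIST field lists differ (otherwise bit-for-bit)"
    if has_bfails:
        # "BFAIL" stands in for TEST_NO_BASELINES_COMMENT
        return "ERROR BFAIL some baseline files were missing"
    return ""
```

The boolean flags correspond to scanning each comment line for FIELDLISTS_DIFFER, NO_COMPARE, and the COMPARISON_FAILURE_COMMENT_OPTIONS markers, as done in the loop above.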
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/jenkins_generic_job.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/jenkins_generic_job.html new file mode 100644 index 00000000000..0cfc576a9b1 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/jenkins_generic_job.html @@ -0,0 +1,580 @@ + + + + + + CIME.jenkins_generic_job — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.jenkins_generic_job

+import CIME.wait_for_tests
+from CIME.utils import expect, run_cmd_no_fail
+from CIME.case import Case
+
+import os, shutil, glob, signal, logging, threading, sys, re, tarfile, time
+
+##############################################################################
+
+[docs]
+def cleanup_queue(test_root, test_id):
+    ###############################################################################
+    """
+    Delete all jobs left in the queue
+    """
+    for teststatus_file in glob.iglob("{}/*{}*/TestStatus".format(test_root, test_id)):
+        case_dir = os.path.dirname(teststatus_file)
+        with Case(case_dir, read_only=True) as case:
+            jobmap = case.get_job_info()
+            jobkills = []
+            for jobname, jobid in jobmap.items():
+                logging.warning(
+                    "Found leftover batch job {} ({}) that needs to be deleted".format(
+                        jobid, jobname
+                    )
+                )
+                jobkills.append(jobid)
+
+            case.cancel_batch_jobs(jobkills)
+ + + +############################################################################### +
+[docs] +def delete_old_test_data( + mach_comp, + test_id_root, + scratch_root, + test_root, + run_area, + build_area, + archive_area, + avoid_test_id, +): + ############################################################################### + # Remove old dirs + for clutter_area in [scratch_root, test_root, run_area, build_area, archive_area]: + for old_file in glob.glob( + "{}/*{}*{}*".format(clutter_area, mach_comp, test_id_root) + ): + if avoid_test_id not in old_file: + logging.info("TEST ARCHIVER: removing {}".format(old_file)) + if os.path.isdir(old_file): + shutil.rmtree(old_file) + else: + os.remove(old_file) + + else: + logging.info( + "TEST ARCHIVER: leaving case {} due to avoiding test id {}".format( + old_file, avoid_test_id + ) + )
+ + + +############################################################################### +
+[docs] +def scan_for_test_ids(old_test_archive, mach_comp, test_id_root): + ############################################################################### + results = set([]) + test_id_re = re.compile(".+[.]([^.]+)") + for item in glob.glob( + "{}/{}/*{}*{}*".format(old_test_archive, "old_cases", mach_comp, test_id_root) + ): + filename = os.path.basename(item) + the_match = test_id_re.match(filename) + if the_match: + test_id = the_match.groups()[0] + results.add(test_id) + + return list(results)
+ + + +############################################################################### +
+[docs] +def archive_old_test_data( + machine, + mach_comp, + test_id_root, + test_root, + old_test_archive, + avoid_test_id, +): + ############################################################################### + + gb_allowed = machine.get_value("MAX_GB_OLD_TEST_DATA") + gb_allowed = 500 if gb_allowed is None else gb_allowed + bytes_allowed = gb_allowed * 1000000000 + expect( + bytes_allowed > 0, + "Machine {} does not support test archiving".format(machine.get_machine_name()), + ) + + # Remove old cs.status, cs.submit. I don't think there's any value to leaving these around + # or archiving them + for old_cs_file in glob.glob("{}/cs.*.{}[0-9]*".format(test_root, test_id_root)): + if avoid_test_id not in old_cs_file: + logging.info("TEST ARCHIVER: Removing {}".format(old_cs_file)) + os.remove(old_cs_file) + + # Remove the old CTest XML, same reason as above + if os.path.isdir("Testing"): + logging.info( + "TEST ARCHIVER: Removing {}".format(os.path.join(os.getcwd(), "Testing")) + ) + shutil.rmtree("Testing") + + if not os.path.exists(old_test_archive): + os.mkdir(old_test_archive) + + # Archive old data by looking at old test cases + for old_case in glob.glob( + "{}/*{}*{}[0-9]*".format(test_root, mach_comp, test_id_root) + ): + if avoid_test_id not in old_case: + logging.info("TEST ARCHIVER: archiving case {}".format(old_case)) + exeroot, rundir, archdir = run_cmd_no_fail( + "./xmlquery EXEROOT RUNDIR DOUT_S_ROOT --value", from_dir=old_case + ).split(",") + + for the_dir, target_area in [ + (exeroot, "old_builds"), + (rundir, "old_runs"), + (archdir, "old_archives"), + (old_case, "old_cases"), + ]: + if os.path.exists(the_dir): + start_time = time.time() + logging.info( + "TEST ARCHIVER: archiving {} to {}".format( + the_dir, os.path.join(old_test_archive, target_area) + ) + ) + if not os.path.exists(os.path.join(old_test_archive, target_area)): + os.mkdir(os.path.join(old_test_archive, target_area)) + + old_case_name = os.path.basename(old_case) + with 
tarfile.open( + os.path.join( + old_test_archive, + target_area, + "{}.tar.gz".format(old_case_name), + ), + "w:gz", + ) as tfd: + tfd.add(the_dir, arcname=old_case_name) + + shutil.rmtree(the_dir) + + # Remove parent dir if it's empty + parent_dir = os.path.dirname(the_dir) + if not os.listdir(parent_dir) or os.listdir(parent_dir) == [ + "case2_output_root" + ]: + shutil.rmtree(parent_dir) + + end_time = time.time() + logging.info( + "TEST ARCHIVER: archiving {} took {} seconds".format( + the_dir, int(end_time - start_time) + ) + ) + + else: + logging.info( + "TEST ARCHIVER: leaving case {} due to avoiding test id {}".format( + old_case, avoid_test_id + ) + ) + + # Check size of archive + bytes_of_old_test_data = int( + run_cmd_no_fail("du -sb {}".format(old_test_archive)).split()[0] + ) + if bytes_of_old_test_data > bytes_allowed: + logging.info( + "TEST ARCHIVER: Too much test data, {}GB (actual) > {}GB (limit)".format( + bytes_of_old_test_data / 1000000000, bytes_allowed / 1000000000 + ) + ) + old_test_ids = scan_for_test_ids(old_test_archive, mach_comp, test_id_root) + for old_test_id in sorted(old_test_ids): + logging.info( + "TEST ARCHIVER: Removing old data for test {}".format(old_test_id) + ) + for item in ["old_cases", "old_builds", "old_runs", "old_archives"]: + for dir_to_rm in glob.glob( + "{}/{}/*{}*{}*".format( + old_test_archive, item, mach_comp, old_test_id + ) + ): + logging.info("TEST ARCHIVER: Removing {}".format(dir_to_rm)) + if os.path.isdir(dir_to_rm): + shutil.rmtree(dir_to_rm) + else: + os.remove(dir_to_rm) + + bytes_of_old_test_data = int( + run_cmd_no_fail("du -sb {}".format(old_test_archive)).split()[0] + ) + if bytes_of_old_test_data < bytes_allowed: + break + + else: + logging.info( + "TEST ARCHIVER: Test data is within accepted bounds, {}GB (actual) < {}GB (limit)".format( + bytes_of_old_test_data / 1000000000, bytes_allowed / 1000000000 + ) + )
+ + + +############################################################################### +
+[docs] +def handle_old_test_data( + machine, compiler, test_id_root, scratch_root, test_root, avoid_test_id +): + ############################################################################### + run_area = os.path.dirname( + os.path.dirname(machine.get_value("RUNDIR")) + ) # Assumes XXX/$CASE/run + build_area = os.path.dirname( + os.path.dirname(machine.get_value("EXEROOT")) + ) # Assumes XXX/$CASE/build + archive_area = os.path.dirname( + machine.get_value("DOUT_S_ROOT") + ) # Assumes XXX/archive/$CASE + old_test_archive = os.path.join(scratch_root, "old_test_archive") + + mach_comp = "{}_{}".format(machine.get_machine_name(), compiler) + + try: + archive_old_test_data( + machine, + mach_comp, + test_id_root, + test_root, + old_test_archive, + avoid_test_id, + ) + except Exception: + logging.warning( + "TEST ARCHIVER: Archiving of old test data FAILED: {}\nDeleting data instead".format( + sys.exc_info()[1] + ) + ) + delete_old_test_data( + mach_comp, + test_id_root, + scratch_root, + test_root, + run_area, + build_area, + archive_area, + avoid_test_id, + )
+ + + +############################################################################### +
+[docs] +def jenkins_generic_job( + generate_baselines, + submit_to_cdash, + no_batch, + baseline_name, + arg_cdash_build_name, + cdash_project, + arg_test_suite, + cdash_build_group, + baseline_compare, + scratch_root, + parallel_jobs, + walltime, + machine, + compiler, + real_baseline_name, + baseline_root, + update_success, + check_throughput, + check_memory, + ignore_memleak, + ignore_namelists, + save_timing, + pes_file, + jenkins_id, + queue, +): + ############################################################################### + """ + Return True if all tests passed + """ + use_batch = machine.has_batch_system() and not no_batch + test_suite = machine.get_value("TESTS") + proxy = machine.get_value("PROXY") + test_suite = test_suite if arg_test_suite is None else arg_test_suite + test_root = os.path.join(scratch_root, "J") + + if use_batch: + batch_system = machine.get_value("BATCH_SYSTEM") + expect( + batch_system is not None, + "Bad XML. Batch machine has no batch_system configuration.", + ) + + # + # Env changes + # + + if submit_to_cdash and proxy is not None: + os.environ["http_proxy"] = proxy + + if not os.path.isdir(scratch_root): + os.makedirs(scratch_root) + + # Important, need to set up signal handlers before we officially + # kick off tests. We don't want this process getting killed outright + # since it's critical that the cleanup in the finally block gets run + CIME.wait_for_tests.set_up_signal_handlers() + + # + # Clean up leftovers from previous run of jenkins_generic_job. This will + # break the previous run of jenkins_generic_job if it's still running. Set up + # the Jenkins jobs with timeouts to avoid this. 
+ # + + if jenkins_id is not None: + test_id_root = jenkins_id + test_id = "%s%s" % (test_id_root, CIME.utils.get_timestamp("%y%m%d_%H%M%S")) + else: + test_id_root = "J{}{}".format( + baseline_name.capitalize(), test_suite.replace("e3sm_", "").capitalize() + ) + test_id = "%s%s" % (test_id_root, CIME.utils.get_timestamp()) + archiver_thread = threading.Thread( + target=handle_old_test_data, + args=(machine, compiler, test_id_root, scratch_root, test_root, test_id), + ) + archiver_thread.start() + + # + # Set up create_test command and run it + # + + create_test_args = [ + test_suite, + "--test-root %s" % test_root, + "-t %s" % test_id, + "--machine %s" % machine.get_machine_name(), + "--compiler %s" % compiler, + ] + if generate_baselines: + create_test_args.append("-g -b " + real_baseline_name) + elif baseline_compare: + create_test_args.append("-c -b " + real_baseline_name) + + if scratch_root != machine.get_value("CIME_OUTPUT_ROOT"): + create_test_args.append("--output-root=" + scratch_root) + + if no_batch: + create_test_args.append("--no-batch") + + if parallel_jobs is not None: + create_test_args.append("-j {:d}".format(parallel_jobs)) + + if walltime is not None: + create_test_args.append("--walltime " + walltime) + + if baseline_root is not None: + create_test_args.append("--baseline-root " + baseline_root) + + if pes_file is not None: + create_test_args.append("--pesfile " + pes_file) + + if queue is not None: + create_test_args.append("--queue " + queue) + + if save_timing: + create_test_args.append("--save-timing") + + create_test_cmd = "./create_test " + " ".join(create_test_args) + + if not CIME.wait_for_tests.SIGNAL_RECEIVED: + create_test_stat = CIME.utils.run_cmd( + create_test_cmd, + from_dir=CIME.utils.get_scripts_root(), + verbose=True, + arg_stdout=None, + arg_stderr=None, + )[0] + # Create_test should have either passed, detected failing tests, or timed out + expect( + create_test_stat in [0, CIME.utils.TESTS_FAILED_ERR_CODE, -signal.SIGTERM], 
+ "Create_test script FAILED with error code '{:d}'!".format( + create_test_stat + ), + ) + + # + # Wait for tests + # + + if submit_to_cdash: + cdash_build_name = ( + "_".join([test_suite, baseline_name, compiler]) + if arg_cdash_build_name is None + else arg_cdash_build_name + ) + else: + cdash_build_name = None + + os.environ["CIME_MACHINE"] = machine.get_machine_name() + + if submit_to_cdash: + logging.info( + "To resubmit to dashboard: wait_for_tests {}/*{}/TestStatus --no-wait -b {}".format( + test_root, test_id, cdash_build_name + ) + ) + + tests_passed = CIME.wait_for_tests.wait_for_tests( + glob.glob("{}/*{}/TestStatus".format(test_root, test_id)), + no_wait=not use_batch, # wait if using queue + check_throughput=check_throughput, + check_memory=check_memory, + ignore_namelists=ignore_namelists, + ignore_memleak=ignore_memleak, + cdash_build_name=cdash_build_name, + cdash_project=cdash_project, + cdash_build_group=cdash_build_group, + update_success=update_success, + ) + + logging.info("TEST ARCHIVER: Waiting for archiver thread") + archiver_thread.join() + logging.info("TEST ARCHIVER: Waiting for archiver finished") + + if use_batch and CIME.wait_for_tests.SIGNAL_RECEIVED: + # Cleanup + cleanup_queue(test_root, test_id) + + return tests_passed
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/locked_files.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/locked_files.html new file mode 100644 index 00000000000..81683ab3c4e --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/locked_files.html @@ -0,0 +1,172 @@ + + + + + + CIME.locked_files — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.locked_files

+from CIME.XML.standard_module_setup import *
+from CIME.utils import safe_copy
+from CIME.XML.generic_xml import GenericXML
+
+logger = logging.getLogger(__name__)
+
+LOCKED_DIR = "LockedFiles"
+
+
+
+[docs] +def lock_file(filename, caseroot=None, newname=None): + expect("/" not in filename, "Please just provide basename of locked file") + caseroot = os.getcwd() if caseroot is None else caseroot + newname = filename if newname is None else newname + fulllockdir = os.path.join(caseroot, LOCKED_DIR) + if not os.path.exists(fulllockdir): + os.mkdir(fulllockdir) + + logging.debug("Locking file {}".format(filename)) + + # JGF: It is extremely dangerous to alter our database (xml files) without + # going through the standard API. The copy below invalidates all existing + # GenericXML instances that represent this file and all caching that may + # have involved this file. We should probably seek a safer way of locking + # files. + safe_copy(os.path.join(caseroot, filename), os.path.join(fulllockdir, newname)) + GenericXML.invalidate(os.path.join(fulllockdir, newname))
+ + + +
+[docs] +def unlock_file(filename, caseroot=None): + expect("/" not in filename, "Please just provide basename of locked file") + caseroot = os.getcwd() if caseroot is None else caseroot + locked_path = os.path.join(caseroot, LOCKED_DIR, filename) + if os.path.exists(locked_path): + os.remove(locked_path) + + logging.debug("Unlocking file {}".format(filename))
+ + + +
+[docs] +def is_locked(filename, caseroot=None): + expect("/" not in filename, "Please just provide basename of locked file") + caseroot = os.getcwd() if caseroot is None else caseroot + return os.path.exists(os.path.join(caseroot, LOCKED_DIR, filename))
+ +
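The lock/unlock cycle above is just "snapshot a file into CASEROOT/LockedFiles and check for its presence later". A stdlib-only stand-in for the same pattern (these helpers are illustrative mirrors, not the CIME functions, and the temporary directory stands in for a real CASEROOT):

```python
import os
import shutil
import tempfile

LOCKED_DIR = "LockedFiles"


def lock(filename, caseroot):
    # Snapshot the file into CASEROOT/LockedFiles, like lock_file() above
    lockdir = os.path.join(caseroot, LOCKED_DIR)
    os.makedirs(lockdir, exist_ok=True)
    shutil.copy2(os.path.join(caseroot, filename), os.path.join(lockdir, filename))


def is_locked(filename, caseroot):
    # A file is "locked" iff its snapshot exists, like is_locked() above
    return os.path.exists(os.path.join(caseroot, LOCKED_DIR, filename))


caseroot = tempfile.mkdtemp()
with open(os.path.join(caseroot, "env_build.xml"), "w") as fd:
    fd.write("<xml/>")

lock("env_build.xml", caseroot)
print(is_locked("env_build.xml", caseroot))  # True
```

Note the caveat in lock_file()'s comment: in the real module the copy bypasses the XML API, so GenericXML.invalidate must be called on the snapshot path to keep caches consistent.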
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/namelist.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/namelist.html new file mode 100644 index 00000000000..98750a5d2f6 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/namelist.html @@ -0,0 +1,2506 @@ + + + + + + CIME.namelist — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.namelist

+"""Module containing tools for dealing with Fortran namelists.
+
+The public interface consists of the following functions:
+- `character_literal_to_string`
+- `compress_literal_list`
+- `expand_literal_list`
+- `fortran_namelist_base_value`
+- `is_valid_fortran_name`
+- `is_valid_fortran_namelist_literal`
+- `literal_to_python_value`
+- `merge_literal_lists`
+- `parse`
+- `string_to_character_literal`
+
+In addition, the `Namelist` class represents a namelist held in memory.
+
+For the moment, only a subset of namelist syntax is supported; specifically, we
+assume that only variables of intrinsic type are used, and indexing/co-indexing
+of arrays to set a portion of a variable is not supported. (However, null values
+and repeated values may be used to set or fill a variable as indexing would.)
+
+We also always assume that a period (".") is the decimal separator, not a comma
+(","). We also assume that the file encoding is UTF-8 or some compatible format
+(e.g. ASCII).
+
+Otherwise, most Fortran syntax rules implemented here are compatible with
+Fortran 2008 (which is largely the same as previous standards, and will be
+similar to Fortran 2015). The only exceptions should be cases where (a) we
+deliberately prohibit "troublesome" behavior that would be allowed by the
+standard, or (b) we rely on conventions shared by all major compilers.
+
+One important convention is that newline characters can be used to denote the
+end of a record. This makes them equivalent to spaces at most locations in a
+Fortran namelist, except that newlines also end comments, and they are ignored
+entirely within strings.
+
+While the treatment of comments in this module is standard, it may be somewhat
+surprising. Namelist comments are only allowed in two situations:
+
+(1) As the only thing on a line (aside from optional indentation with spaces).
+(2) Immediately after a "value separator" (the space, newline, comma, or slash
+after a value).
+
+This implies that, in the following example, every line except the last is a
+syntax error:
+
+```
+&group_name! This is not a valid comment because it's after the group name.
+foo ! Neither is this, because it's between a name and an equals sign.
+= 2 ! Nor this, because it comes between the value and the following comma.
+, bar = ! Nor this, because it's between an equals sign and a value.
+2! Nor this, because it should be separated from the value by a comma or space.
+bazz = 3 ! Nor this, because it comes between the value and the following slash.
+/! This is fine, but technically it is outside the namelist, not a comment.
+```
+
+However, the above would actually be valid if all the "comments" were removed.
+The Fortran standard is not clear about whether whitespace is allowed after
+inline comments and before subsequent non-whitespace text (!), but this module
+allows such whitespace, to preserve the sanity of both implementors and users.
+
+The Fortran standard only applies to the interior of namelist groups, and not to
+text between one namelist group and the next. This module assumes that namelist
+groups are separated by (optional) whitespace and comments, and nothing else.
+"""
+
+###############################################################################
+#
+# Lexer/parser design notes
+#
+# The bulk of the complexity of this module is in the `_NamelistParser` object.
+# Lexing, parsing, and translation of namelist data is all performed in a single
+# pass (though it would be possible to use separate stages if needed). The style
+# is that of a recursive descent parser, i.e. the functions correspond roughly
+# to concepts in the Fortran namelist grammar, and top-down parsing is used.
+# Parsing is done left-to-right with no backtracking.
+#
+# The most important attributes of a `_NamelistParser` are the input text
+# itself (`_text`), and the current position in the text (`_pos`). The position
+# is only changed via the `_advance` method, which also maintains line and
+# column numbers for error-reporting purposes. The `_settings` attribute
+# holds the final output, i.e. the variable name-value pairs.
+#
+# Parsing errors are signaled by one of two exceptions. The first is
+# `_NamelistParseError`, which always signals an unrecoverable error. This is
+# caught and translated to a user-visible error in `parse`. The second is
+# `_NamelistEOF`, which may or may not represent a true error. During parsing of
+# a standard namelist, it is treated in the same manner as
+# `_NamelistParseError`, unless it occurs outside of any namelist group, in
+# which case the `parse_namelist` method will catch it and return normally.
+#
+# The non-standard "groupless" format complicates things significantly by
+# allowing an end-of-file at any location where a '/' would normally be. This is
+# the reason for most of the `allow_eof` flags and related logic, since any
+# `_NamelistEOF` exceptions raised must be caught and dealt with.
+#
+###############################################################################
+
+# Disable these because of doctest, and because we don't typically follow the
+# (rather specific) pylint naming conventions.
+# pylint: disable=line-too-long,too-many-lines,invalid-name
+
+import re
+import collections
+
+# Disable these because this is our standard setup
+# pylint: disable=wildcard-import,unused-wildcard-import
+
+from CIME.XML.standard_module_setup import *
+from CIME.utils import expect, string_in_list
+
+logger = logging.getLogger(__name__)
+
+# Fortran syntax regular expressions.
+# Variable names.
+# FORTRAN_NAME_REGEX = re.compile(r"(^[a-z][a-z0-9_]{0,62})(\([+-]?\d*:?[+-]?\d*:?[+-]?\d*\))?$", re.IGNORECASE)
+FORTRAN_NAME_REGEX = re.compile(
+    r"""(^[a-z][a-z0-9_@]{0,62})                            #  The variable name
+                                  (\(                                                   # begin optional index expression
+                                  (([+-]?\d+)                                           # Single valued index
+                                  |                                                     # or
+                                  (([+-]?\d+)?:([+-]?\d+)?:?([+-]?\d+)?))               # colon-separated triplet
+                                  \))?\s*$""",  # end optional index expression
+    re.IGNORECASE | re.VERBOSE,
+)
+
+FORTRAN_LITERAL_REGEXES = {}
+# Integer literals.
+_int_re_string = r"(\+|-)?[0-9]+"
+FORTRAN_LITERAL_REGEXES["integer"] = re.compile("^" + _int_re_string + "$")
+# Real/complex literals.
+_ieee_exceptional_re_string = r"inf(inity)?|nan(\([^)]+\))?"
+_float_re_string = r"((\+|-)?([0-9]+(\.[0-9]*)?|\.[0-9]+)([ed]?{})?|{})".format(
+    _int_re_string, _ieee_exceptional_re_string
+)
+FORTRAN_LITERAL_REGEXES["real"] = re.compile(
+    "^" + _float_re_string + "$", re.IGNORECASE
+)
+FORTRAN_LITERAL_REGEXES["complex"] = re.compile(
+    r"^\([ \n]*"
+    + _float_re_string
+    + r"[ \n]*,[ \n]*"
+    + _float_re_string
+    + r"[ \n]*\)$",
+    re.IGNORECASE,
+)
+# Character literals.
+_char_single_re_string = r"'[^']*(''[^']*)*'"
+_char_double_re_string = r'"[^"]*(""[^"]*)*"'
+FORTRAN_LITERAL_REGEXES["character"] = re.compile(
+    "^(" + _char_single_re_string + "|" + _char_double_re_string + ")$"
+)
+# Logical literals.
+FORTRAN_LITERAL_REGEXES["logical"] = re.compile(r"^\.?[tf][^=/ \n]*$", re.IGNORECASE)
+# Repeated value prefix.
+FORTRAN_REPEAT_PREFIX_REGEX = re.compile(r"^[0-9]*[1-9]+[0-9]*\*")
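As a quick illustration of the patterns above, the integer and repeat-prefix expressions can be re-created standalone. This is a sketch for illustration only; the module's own compiled regexes are authoritative.

```python
import re

# Standalone copies of two of the patterns defined above.
int_re = re.compile(r"^(\+|-)?[0-9]+$")
repeat_re = re.compile(r"^[0-9]*[1-9]+[0-9]*\*")

assert int_re.search("+22") is not None   # signed integers are accepted
assert int_re.search("2.1") is None       # reals are not integers
# A repeated value like "5*1" carries the repeat prefix "5*":
m = repeat_re.search("5*1")
print(m.group(0))  # -> "5*"
# A zero repeat count is rejected (at least one nonzero digit required):
assert repeat_re.search("0*1") is None
```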
+
+
+
+[docs] +def is_valid_fortran_name(string): + """Check that a variable name is allowed in Fortran. + + The rules are: + 1. The name must start with a letter. + 2. All characters in a name must be alphanumeric (or underscores). + 3. The maximum name length is 63 characters. + 4. We only handle a single dimension !!! + + >>> is_valid_fortran_name("") + False + >>> is_valid_fortran_name("a") + True + >>> is_valid_fortran_name("A") + True + >>> is_valid_fortran_name("A(4)") + True + >>> is_valid_fortran_name("A(::)") + True + >>> is_valid_fortran_name("A(1:2:3)") + True + >>> is_valid_fortran_name("A(1::)") + True + >>> is_valid_fortran_name("A(:-2:)") + True + >>> is_valid_fortran_name("A(1::+3)") + True + >>> is_valid_fortran_name("A(1,3)") + False + >>> is_valid_fortran_name("2") + False + >>> is_valid_fortran_name("_") + False + >>> is_valid_fortran_name("abc#123") + False + >>> is_valid_fortran_name("aLiBi_123") + True + >>> is_valid_fortran_name("A" * 64) + False + >>> is_valid_fortran_name("A" * 63) + True + """ + return FORTRAN_NAME_REGEX.search(string) is not None
+ + + +
+[docs] +def get_fortran_name_only(full_var): + """remove array section if any and return only the variable name + >>> get_fortran_name_only('foo') + 'foo' + >>> get_fortran_name_only('foo(3)') + 'foo' + >>> get_fortran_name_only('foo(::)') + 'foo' + >>> get_fortran_name_only('foo(1::)') + 'foo' + >>> get_fortran_name_only('foo(:+2:)') + 'foo' + >>> get_fortran_name_only('foo(::-3)') + 'foo' + >>> get_fortran_name_only('foo(::)') + 'foo' + """ + m = FORTRAN_NAME_REGEX.search(full_var) + return m.group(1)
+ + + +
+[docs] +def get_fortran_variable_indices(varname, varlen=1, allow_any_len=False): + """get indices from a fortran namelist variable as a triplet of minindex, maxindex and step + + >>> get_fortran_variable_indices('foo(3)') + (3, 3, 1) + >>> get_fortran_variable_indices('foo(1:2:3)') + (1, 2, 3) + >>> get_fortran_variable_indices('foo(::)', varlen=4) + (1, 4, 1) + >>> get_fortran_variable_indices('foo(::2)', varlen=4) + (1, 4, 2) + >>> get_fortran_variable_indices('foo(::)', allow_any_len=True) + (1, -1, 1) + """ + m = FORTRAN_NAME_REGEX.search(varname) + (minindex, maxindex, step) = (1, varlen, 1) + + if m.group(4) is not None: + minindex = int(m.group(4)) + maxindex = minindex + step = 1 + + elif m.group(5) is not None: + if m.group(6) is not None: + minindex = int(m.group(6)) + if m.group(7) is not None: + maxindex = int(m.group(7)) + if m.group(8) is not None: + step = int(m.group(8)) + + if allow_any_len and maxindex == minindex: + maxindex = -1 + + expect(step != 0, "Step size 0 not allowed") + + return (minindex, maxindex, step)
+ + + +
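The triplet extraction above can be sketched with a simplified standalone pattern. This is illustrative only: it handles colon triplets but not the single-index form, which the module's `FORTRAN_NAME_REGEX` also covers, and the names here are hypothetical.

```python
import re

# Simplified index-triplet pattern: "(lo:hi:step)" with each part optional.
TRIPLET = re.compile(r"\(([+-]?\d+)?:([+-]?\d+)?:?([+-]?\d+)?\)")

def indices(varname, varlen=1):
    """Return (minindex, maxindex, step), defaulting to (1, varlen, 1)."""
    lo, hi, step = 1, varlen, 1
    m = TRIPLET.search(varname)
    if m:
        if m.group(1):
            lo = int(m.group(1))
        if m.group(2):
            hi = int(m.group(2))
        if m.group(3):
            step = int(m.group(3))
    return (lo, hi, step)

print(indices("foo(::2)", varlen=4))  # -> (1, 4, 2)
print(indices("foo(1:2:3)"))          # -> (1, 2, 3)
```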
+[docs] +def fortran_namelist_base_value(string): + r"""Strip off whitespace and repetition syntax from a namelist value. + + >>> fortran_namelist_base_value("") + '' + >>> fortran_namelist_base_value("f") + 'f' + >>> fortran_namelist_base_value("6*") + '' + >>> fortran_namelist_base_value("6*f") + 'f' + >>> fortran_namelist_base_value(" \n6* \n") + '' + >>> fortran_namelist_base_value("\n 6*f\n ") + 'f' + """ + # Strip leading/trailing whitespace. + string = string.strip(" \n") + # Strip off repeated value prefix. + if FORTRAN_REPEAT_PREFIX_REGEX.search(string) is not None: + string = string[string.find("*") + 1 :] + return string
+ + + +
+[docs] +def character_literal_to_string(literal): + """Convert a Fortran character literal to a Python string. + + This function assumes (without checking) that `literal` is a valid literal. + + >>> character_literal_to_string("'blah'") + 'blah' + >>> character_literal_to_string('"blah"') + 'blah' + >>> character_literal_to_string("'don''t'") + "don't" + >>> character_literal_to_string('"' + '""Hello!""' + '"') + '"Hello!"' + """ + # Figure out whether a quote or apostrophe is the delimiter. + delimiter = None + for char in literal: + if char in ("'", '"'): + delimiter = char + # Find left and right edges of the string, extract middle. + left_pos = literal.find(delimiter) + right_pos = literal.rfind(delimiter) + new_literal = literal[left_pos + 1 : right_pos] + # Replace escaped quote and apostrophe characters. + return new_literal.replace(delimiter * 2, delimiter)
+ + + +
+[docs] +def string_to_character_literal(string): + r"""Convert a Python string to a Fortran character literal. + + This function always uses double quotes (") as the delimiter. + + >>> string_to_character_literal('blah') + '"blah"' + >>> string_to_character_literal("'blah'") + '"\'blah\'"' + >>> string_to_character_literal('She said "Hi!".') + '"She said ""Hi!""."' + """ + string = string.replace('"', '""') + return '"' + string + '"'
+ + + +
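The delimiter-doubling rule implemented by the two functions above can be condensed into a standalone sketch: doubling the delimiter is how Fortran escapes it inside a character literal. These helpers are illustrative re-implementations, not the module's API.

```python
def to_literal(s):
    # Always use double quotes as the delimiter, doubling any embedded ones.
    return '"' + s.replace('"', '""') + '"'

def from_literal(lit):
    # The first character is the delimiter; strip it and undo the doubling.
    delim = lit[0]
    return lit[1:-1].replace(delim * 2, delim)

print(to_literal('She said "Hi!".'))   # -> "She said ""Hi!""."
print(from_literal("'don''t'"))        # -> don't
```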
+[docs] +def is_valid_fortran_namelist_literal(type_, string): + r"""Determine whether a literal is valid in a Fortran namelist. + + Note that kind parameters are *not* allowed in namelists, which simplifies + this check a bit. Internal whitespace is allowed for complex and character + literals only. BOZ literals and compiler extensions (e.g. backslash escapes) + are not allowed. + + Null values, however, are allowed for all types. This means that passing in + a string containing nothing but spaces and newlines will always cause + `True` to be returned. Repetition (e.g. `5*'a'`) is also allowed, including + repetition of null values. + + Detailed rules and examples follow. + + Integers: Must be a sequence of one or more digits, with an optional sign. + + >>> is_valid_fortran_namelist_literal("integer", "") + True + >>> is_valid_fortran_namelist_literal("integer", " ") + True + >>> is_valid_fortran_namelist_literal("integer", "\n") + True + >>> is_valid_fortran_namelist_literal("integer", "5*") + True + >>> is_valid_fortran_namelist_literal("integer", "1") + True + >>> is_valid_fortran_namelist_literal("integer", "5*1") + True + >>> is_valid_fortran_namelist_literal("integer", " 5*1") + True + >>> is_valid_fortran_namelist_literal("integer", "5* 1") + False + >>> is_valid_fortran_namelist_literal("integer", "5 *1") + False + >>> is_valid_fortran_namelist_literal("integer", "a") + False + >>> is_valid_fortran_namelist_literal("integer", " 1") + True + >>> is_valid_fortran_namelist_literal("integer", "1 ") + True + >>> is_valid_fortran_namelist_literal("integer", "1 2") + False + >>> is_valid_fortran_namelist_literal("integer", "0123456789") + True + >>> is_valid_fortran_namelist_literal("integer", "+22") + True + >>> is_valid_fortran_namelist_literal("integer", "-26") + True + >>> is_valid_fortran_namelist_literal("integer", "2A") + False + >>> is_valid_fortran_namelist_literal("integer", "1_8") + False + >>> is_valid_fortran_namelist_literal("integer", "2.1") + 
False + >>> is_valid_fortran_namelist_literal("integer", "2e6") + False + + Reals: + - For fixed-point format, there is an optional sign, followed by an integer + part, or a decimal point followed by a fractional part, or both. + - Scientific notation is allowed, with an optional, case-insensitive "e" or + "d" followed by an optionally-signed integer exponent. (Either the "e"/"d" + or a sign must be present to separate the number from the exponent.) + - The (case-insensitive) strings "inf", "infinity", and "nan" are allowed. + NaN values can also contain additional information in parentheses, e.g. + "NaN(x1234ABCD)". + + >>> is_valid_fortran_namelist_literal("real", "") + True + >>> is_valid_fortran_namelist_literal("real", "a") + False + >>> is_valid_fortran_namelist_literal("real", "1") + True + >>> is_valid_fortran_namelist_literal("real", " 1") + True + >>> is_valid_fortran_namelist_literal("real", "1 ") + True + >>> is_valid_fortran_namelist_literal("real", "1 2") + False + >>> is_valid_fortran_namelist_literal("real", "+1") + True + >>> is_valid_fortran_namelist_literal("real", "-1") + True + >>> is_valid_fortran_namelist_literal("real", "1.") + True + >>> is_valid_fortran_namelist_literal("real", "1.5") + True + >>> is_valid_fortran_namelist_literal("real", ".5") + True + >>> is_valid_fortran_namelist_literal("real", "+.5") + True + >>> is_valid_fortran_namelist_literal("real", ".") + False + >>> is_valid_fortran_namelist_literal("real", "+") + False + >>> is_valid_fortran_namelist_literal("real", "1e6") + True + >>> is_valid_fortran_namelist_literal("real", "1e-6") + True + >>> is_valid_fortran_namelist_literal("real", "1e+6") + True + >>> is_valid_fortran_namelist_literal("real", ".5e6") + True + >>> is_valid_fortran_namelist_literal("real", "1e") + False + >>> is_valid_fortran_namelist_literal("real", "1D6") + True + >>> is_valid_fortran_namelist_literal("real", "1q6") + False + >>> is_valid_fortran_namelist_literal("real", "1+6") + True + >>> 
is_valid_fortran_namelist_literal("real", "1.6.5") + False + >>> is_valid_fortran_namelist_literal("real", "1._8") + False + >>> is_valid_fortran_namelist_literal("real", "1,5") + False + >>> is_valid_fortran_namelist_literal("real", "inf") + True + >>> is_valid_fortran_namelist_literal("real", "INFINITY") + True + >>> is_valid_fortran_namelist_literal("real", "NaN") + True + >>> is_valid_fortran_namelist_literal("real", "nan(x56)") + True + >>> is_valid_fortran_namelist_literal("real", "nan())") + False + + Complex numbers: + - A pair of real numbers enclosed by parentheses, and separated by a comma. + - Any number of spaces or newlines may be placed before or after each real. + + >>> is_valid_fortran_namelist_literal("complex", "") + True + >>> is_valid_fortran_namelist_literal("complex", "()") + False + >>> is_valid_fortran_namelist_literal("complex", "(,)") + False + >>> is_valid_fortran_namelist_literal("complex", "( ,\n)") + False + >>> is_valid_fortran_namelist_literal("complex", "(a,2.)") + False + >>> is_valid_fortran_namelist_literal("complex", "(1.,b)") + False + >>> is_valid_fortran_namelist_literal("complex", "(1,2)") + True + >>> is_valid_fortran_namelist_literal("complex", "(-1.e+06,+2.d-5)") + True + >>> is_valid_fortran_namelist_literal("complex", "(inf,NaN)") + True + >>> is_valid_fortran_namelist_literal("complex", "( 1. , 2. )") + True + >>> is_valid_fortran_namelist_literal("complex", "( \n \n 1. \n,\n 2.\n)") + True + >>> is_valid_fortran_namelist_literal("complex", " (1.,2.)") + True + >>> is_valid_fortran_namelist_literal("complex", "(1.,2.) ") + True + + Character sequences (strings): + - Must begin and end with the same delimiter character, either a single + quote (apostrophe), or a double quote (quotation mark). + - Whichever character is used as a delimiter must not appear in the + string itself, unless it appears in doubled pairs (e.g. '''' or "'" are the + two ways of representing a string containing a single apostrophe). 
+ - Note that newlines cannot be represented in a namelist character literal + since they are interpreted as an "end of record", but they are allowed as + long as they don't come between one of the aforementioned double pairs of + characters. + + >>> is_valid_fortran_namelist_literal("character", "") + True + >>> is_valid_fortran_namelist_literal("character", "''") + True + >>> is_valid_fortran_namelist_literal("character", " ''") + True + >>> is_valid_fortran_namelist_literal("character", "'\n'") + True + >>> is_valid_fortran_namelist_literal("character", "''\n''") + False + >>> is_valid_fortran_namelist_literal("character", "'''") + False + >>> is_valid_fortran_namelist_literal("character", "''''") + True + >>> is_valid_fortran_namelist_literal("character", "'''Cookie'''") + True + >>> is_valid_fortran_namelist_literal("character", "'''Cookie''") + False + >>> is_valid_fortran_namelist_literal("character", "'\"'") + True + >>> is_valid_fortran_namelist_literal("character", "'\"\"'") + True + >>> is_valid_fortran_namelist_literal("character", '""') + True + >>> is_valid_fortran_namelist_literal("character", '"" ') + True + >>> is_valid_fortran_namelist_literal("character", '"\n"') + True + >>> is_valid_fortran_namelist_literal("character", '""\n""') + False + >>> is_valid_fortran_namelist_literal("character", '""' + '"') + False + >>> is_valid_fortran_namelist_literal("character", '""' + '""') + True + >>> is_valid_fortran_namelist_literal("character", '"' + '""Cookie""' + '"') + True + >>> is_valid_fortran_namelist_literal("character", '""Cookie""' + '"') + False + >>> is_valid_fortran_namelist_literal("character", '"\'"') + True + >>> is_valid_fortran_namelist_literal("character", '"\'\'"') + True + + Logicals: + - Must contain a (case-insensitive) "t" or "f". + - This must be either the first nonblank character, or the second following + a period. 
+ - The rest of the string is ignored, but cannot contain a comma, newline, + equals sign, slash, or space (except that trailing spaces are allowed and + ignored). + + >>> is_valid_fortran_namelist_literal("logical", "") + True + >>> is_valid_fortran_namelist_literal("logical", "t") + True + >>> is_valid_fortran_namelist_literal("logical", "F") + True + >>> is_valid_fortran_namelist_literal("logical", ".T") + True + >>> is_valid_fortran_namelist_literal("logical", ".f") + True + >>> is_valid_fortran_namelist_literal("logical", " f") + True + >>> is_valid_fortran_namelist_literal("logical", " .t") + True + >>> is_valid_fortran_namelist_literal("logical", "at") + False + >>> is_valid_fortran_namelist_literal("logical", ".TRUE.") + True + >>> is_valid_fortran_namelist_literal("logical", ".false.") + True + >>> is_valid_fortran_namelist_literal("logical", ".TEXAS$") + True + >>> is_valid_fortran_namelist_literal("logical", ".f=") + False + >>> is_valid_fortran_namelist_literal("logical", ".f/1") + False + >>> is_valid_fortran_namelist_literal("logical", ".t\nted") + False + >>> is_valid_fortran_namelist_literal("logical", ".Fant astic") + False + >>> is_valid_fortran_namelist_literal("logical", ".t2 ") + True + """ + expect( + type_ in FORTRAN_LITERAL_REGEXES, + "Invalid Fortran type for a namelist: {!r}".format(str(type_)), + ) + # Strip off whitespace and repetition. + string = fortran_namelist_base_value(string) + # Null values are always allowed. + if string == "": + return True + return FORTRAN_LITERAL_REGEXES[type_].search(string) is not None
+ + + +
+[docs] +def literal_to_python_value(literal, type_=None): + r"""Convert a Fortran literal string to a Python value. + + This function assumes that the input contains a single value, i.e. + repetition syntax is not used. The type can be specified by passing a string + as the `type_` argument, or if this option is not provided, this function + will attempt to autodetect the variable type. + + Note that it is not possible to be certain whether a literal like "123" is + intended to represent an integer or a floating-point value, however, nor can + we be certain of the precision that will be used to hold this value in + actual Fortran code. We also cannot use the optional information in a NaN + float, so this will cause the function to throw an error if that information + is present (e.g. a string like "NAN(1234)" will cause an error). + + The Python type of the return value is as follows for different `type_` + arguments: + "character" - `str` + "complex" - `complex` + "integer" - `int` + "logical" - `bool` + "real" - `float` + + If a null value is input (i.e. the empty string), `None` will be returned. + + >>> literal_to_python_value("'She''s a winner!'") + "She's a winner!" + >>> literal_to_python_value("1") + 1 + >>> literal_to_python_value("1.") + 1.0 + >>> literal_to_python_value(" (\n 1. , 2. )\n ") + (1+2j) + >>> literal_to_python_value(".true.") + True + >>> literal_to_python_value("Fortune") + False + >>> literal_to_python_value("bacon") # doctest: +IGNORE_EXCEPTION_DETAIL + Traceback (most recent call last): + ... + CIMEError: ERROR: 'bacon' is not a valid literal for any Fortran type. + >>> literal_to_python_value("1", type_="real") + 1.0 + >>> literal_to_python_value("bacon", type_="logical") # doctest: +IGNORE_EXCEPTION_DETAIL + Traceback (most recent call last): + ... + CIMEError: ERROR: 'bacon' is not a valid literal of type 'logical'. 
+ >>> literal_to_python_value("1", type_="booga") # doctest: +IGNORE_EXCEPTION_DETAIL + Traceback (most recent call last): + ... + CIMEError: ERROR: Invalid Fortran type for a namelist: 'booga' + >>> literal_to_python_value("2*1") # doctest: +IGNORE_EXCEPTION_DETAIL + Traceback (most recent call last): + ... + CIMEError: ERROR: Cannot use repetition syntax in literal_to_python_value + >>> literal_to_python_value("") + >>> literal_to_python_value("-1.D+10") + -10000000000.0 + >>> shouldRaise(ValueError, literal_to_python_value, "nan(1234)") + """ + expect( + FORTRAN_REPEAT_PREFIX_REGEX.search(literal) is None, + "Cannot use repetition syntax in literal_to_python_value", + ) + # Handle null value. + if fortran_namelist_base_value(literal) == "": + return None + if type_ is None: + # Autodetect type. + for test_type in ("character", "complex", "integer", "logical", "real"): + if is_valid_fortran_namelist_literal(test_type, literal): + type_ = test_type + break + expect( + type_ is not None, + "{!r} is not a valid literal for any Fortran type.".format(str(literal)), + ) + else: + # Check that type is valid. + expect( + is_valid_fortran_namelist_literal(type_, literal), + "{!r} is not a valid literal of type {!r}.".format( + str(literal), str(type_) + ), + ) + # Conversion for each type. + if type_ == "character": + return character_literal_to_string(literal) + elif type_ == "complex": + literal = literal.lstrip(" \n(").rstrip(" \n)") + real_part, _, imag_part = literal.partition(",") + return complex(float(real_part), float(imag_part)) + elif type_ == "integer": + return int(literal) + elif type_ == "logical": + literal = literal.lstrip(" \n.") + return literal[0] in "tT" + elif type_ == "real": + literal = literal.lower().replace("d", "e") + return float(literal)
+ + + +
+[docs] +def expand_literal_list(literals): + """Expands a list of literal values to get rid of repetition syntax. + + >>> expand_literal_list([]) + [] + >>> expand_literal_list(['true']) + ['true'] + >>> expand_literal_list(['1', '2', 'f*', '3*3', '5']) + ['1', '2', 'f*', '3', '3', '3', '5'] + >>> expand_literal_list(['2*f*']) + ['f*', 'f*'] + """ + expanded = [] + for literal in literals: + if FORTRAN_REPEAT_PREFIX_REGEX.search(literal) is not None: + num, _, value = literal.partition("*") + expanded += int(num) * [value] + else: + expanded.append(literal) + + return expanded
+ + + +
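The expansion logic above can be reduced to a few lines in a standalone sketch, assuming the same "N*value" repetition convention (the names here are illustrative, not part of the module):

```python
import re

REPEAT = re.compile(r"^[0-9]*[1-9]+[0-9]*\*")

def expand(literals):
    out = []
    for lit in literals:
        if REPEAT.search(lit):
            n, _, val = lit.partition("*")
            out.extend([val] * int(n))
        else:
            out.append(lit)
    return out

print(expand(["1", "3*3", "5"]))  # -> ['1', '3', '3', '3', '5']
```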
+[docs] +def compress_literal_list(literals): + """Uses repetition syntax to shorten a literal list. + + >>> compress_literal_list([]) + [] + >>> compress_literal_list(['true']) + ['true'] + >>> compress_literal_list(['1', '2', 'f*', '3', '3', '3', '5']) + ['1', '2', 'f*', '3', '3', '3', '5'] + >>> compress_literal_list(['f*', 'f*']) + ['f*', 'f*'] + """ + compressed = [] + if len(literals) == 0: + return compressed + # for right now do not compress + do_compression = False + if do_compression: + # Start with the first literal. + old_literal = literals[0] + num_reps = 1 + for literal in literals[1:]: + if literal == old_literal: + # For each new literal, if it matches the old one, it increases the + # number of repetitions by one. + num_reps += 1 + else: + # Otherwise, write out the previous literal and start tracking the + # new one. + rep_str = str(num_reps) + "*" if num_reps > 1 else "" + if isinstance(old_literal, str): + compressed.append(rep_str + old_literal) + else: + compressed.append(rep_str + str(old_literal)) + old_literal = literal + num_reps = 1 + rep_str = str(num_reps) + "*" if num_reps > 1 else "" + if isinstance(old_literal, str): + compressed.append(rep_str + old_literal) + else: + compressed.append(rep_str + str(old_literal)) + return compressed + else: + for literal in literals: + if isinstance(literal, str): + compressed.append(literal) + else: + compressed.append(str(literal)) + return compressed
+ + + +
+[docs] +def merge_literal_lists(default, overwrite): + """Merge two lists of literal value strings. + + The `overwrite` values have higher precedence, so will overwrite the + `default` values. However, if `overwrite` contains null values, or is + shorter than `default` (and thus implicitly ends in null values), the + elements of `default` will be used where `overwrite` is null. + + >>> merge_literal_lists([], []) + [] + >>> merge_literal_lists(['true'], ['false']) + ['false'] + >>> merge_literal_lists([], ['false']) + ['false'] + >>> merge_literal_lists(['true'], ['']) + ['true'] + >>> merge_literal_lists([], ['']) + [''] + >>> merge_literal_lists(['true'], []) + ['true'] + >>> merge_literal_lists(['true'], []) + ['true'] + >>> merge_literal_lists(['3*false', '3*true'], ['true', '4*', 'false']) + ['true', 'false', 'false', 'true', 'true', 'false'] + """ + merged = [] + default = expand_literal_list(default) + overwrite = expand_literal_list(overwrite) + + for default_elem, elem in zip(default, overwrite): + if elem == "": + merged.append(default_elem) + else: + merged.append(elem) + def_len = len(default) + ovw_len = len(overwrite) + if ovw_len < def_len: + merged[ovw_len:def_len] = default[ovw_len:def_len] + else: + merged[def_len:ovw_len] = overwrite[def_len:ovw_len] + return compress_literal_list(merged)
+ + + +
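The merge rule above boils down to: `overwrite` wins element-by-element except where it is null (""), and whichever list is longer supplies the trailing elements. A minimal standalone sketch (which, unlike the module's function, skips repetition expansion and compression):

```python
def merge(default, overwrite):
    # Element-wise: a null ("") overwrite falls back to the default value.
    merged = [d if o == "" else o for d, o in zip(default, overwrite)]
    # The longer list contributes any elements past the common length.
    longer = default if len(default) > len(overwrite) else overwrite
    merged.extend(longer[len(merged):])
    return merged

print(merge(["true"], [""]))          # -> ['true']
print(merge(["1", "2", "3"], ["9"]))  # -> ['9', '2', '3']
```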
+[docs] +def parse(in_file=None, text=None, groupless=False, convert_tab_to_space=True): + """Parse a Fortran namelist. + + The `in_file` argument must be either a `str` or `unicode` object containing + a file name, or a text I/O object with a `read` method that returns the text + of the namelist. + + Alternatively, the `text` argument can be provided, in which case it must be + the text of the namelist itself. + + The `groupless` argument changes namelist parsing in two ways: + + 1. `parse` allows an alternate file format where no group names or slashes + are present. In effect, the file is parsed as if an invisible, arbitrary + group name was prepended, and an invisible slash was appended. However, + if any group names actually are present, the file is parsed normally. + 2. The return value of this function is not a `Namelist` object. Instead a + single, flattened dictionary of name-value pairs is returned. + + The `convert_tab_to_space` option can be used to force all tabs in the file + to be converted to spaces, and is on by default. Note that this will usually + allow files that use tabs as whitespace to be parsed. However, the + implementation of this option is crude; it converts *all* tabs in the file, + including those in character literals. (Note that there are many characters + that cannot be passed in via namelist in any standard way, including '\n', + so it is already a bad idea to assume that the namelist will preserve + whitespace in strings, aside from simple spaces.) + + The return value, if `groupless=False`, is a `Namelist` object. + + All names and values returned are ultimately unicode strings. E.g. a value + of "6*2" is returned as that string; it is not converted to 6 copies of the + Python integer `2`. Null values are returned as the empty string (""). 
+ """ + expect( + in_file is not None or text is not None, + "Must specify an input file or text to the namelist parser.", + ) + expect( + in_file is None or text is None, + "Cannot specify both input file and text to the namelist parser.", + ) + if isinstance(in_file, str): + logger.debug("Reading namelist at: {}".format(in_file)) + with open(in_file) as in_file_obj: + text = in_file_obj.read() + elif in_file is not None: + logger.debug("Reading namelist from file object") + text = in_file.read() + if convert_tab_to_space: + text = text.replace("\t", " ") + try: + namelist_dict = _NamelistParser(text, groupless).parse_namelist() + except (_NamelistEOF, _NamelistParseError) as error: + # Deal with unexpected EOF or other parsing errors. + expect(False, str(error)) + if groupless: + return namelist_dict + else: + return Namelist(namelist_dict)
+ + + +
+[docs] +def shouldRaise(eclass, method, *args, **kw): + """ + A helper function to make doctests py3 compatible + http://python3porting.com/problems.html#running-doctests + """ + try: + method(*args, **kw) + except BaseException: + e = sys.exc_info()[1] + if not isinstance(e, eclass): + raise + return + raise Exception("Expected exception %s not raised" % str(eclass))
+ + + +
+[docs] +class Namelist(object): + + """Class representing a Fortran namelist. + + Public methods: + __init__ + delete_variable + get_group_names + get_value + get_variable_names + get_variable_value + merge_nl + set_variable_value + write + """ + + def __init__(self, groups=None): + """Construct a new `Namelist` object. + + The `groups` argument is a dictionary associating group names to + dictionaries of name/value pairs. If omitted, an empty namelist object + is created. + + Unless you are deliberately creating an empty `Namelist`, it is easier/ + safer to use `parse` than to directly call this constructor. + """ + self._groups = {} + if groups is not None: + for group_name in groups: + expect(group_name is not None, " Got None in groups {}".format(groups)) + self._groups[group_name] = collections.OrderedDict() + for variable_name in groups[group_name]: + self._groups[group_name][variable_name] = groups[group_name][ + variable_name + ] + +
+[docs] + def clean_groups(self): + self._groups = collections.OrderedDict()
+ + +
+[docs] + def get_group_names(self): + """Return a list of all groups in the namelist. + + >>> Namelist().get_group_names() + [] + >>> sorted(parse(text='&foo / &bar /').get_group_names()) + ['bar', 'foo'] + """ + return list(self._groups.keys())
+ + +
+[docs] + def get_variable_names(self, group_name): + """Return a list of all variables in the given namelist group. + + If the specified group is not in the namelist, returns an empty list. + + >>> Namelist().get_variable_names('foo') + [] + >>> x = parse(text='&foo bar=,bazz=true,bazz(2)=fred,bang=6*""/') + >>> sorted(x.get_variable_names('fOo')) + ['bang', 'bar', 'bazz', 'bazz(2)'] + >>> x = parse(text='&foo bar=,bazz=true,bang=6*""/') + >>> sorted(x.get_variable_names('fOo')) + ['bang', 'bar', 'bazz'] + >>> x = parse(text='&foo bar(::)=,bazz=false,bazz(2)=true,bazz(:2:)=6*""/') + >>> sorted(x.get_variable_names('fOo')) + ['bar(::)', 'bazz', 'bazz(2)', 'bazz(:2:)'] + """ + gn = string_in_list(group_name, self._groups) + if not gn: + return [] + return list(self._groups[gn].keys())
+ + +
+[docs] + def get_variable_value(self, group_name, variable_name): + """Return the value of the specified variable. + + This function always returns a non-empty list containing strings. If the + specified `group_name` or `variable_name` is not present, `['']` is + returned. + + >>> Namelist().get_variable_value('foo', 'bar') + [''] + >>> parse(text='&foo bar=1,2 /').get_variable_value('foo', 'bazz') + [''] + >>> parse(text='&foo bar=1,2 /').get_variable_value('foO', 'Bar') + ['1', '2'] + """ + gn = string_in_list(group_name, self._groups) + if gn: + vn = string_in_list(variable_name, self._groups[gn]) + if vn: + # Make a copy of the list so that any modifications done by the caller + # don't modify the internal values. + return self._groups[gn][vn][:] + return [""]
+ + +
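The accessors above (`get_variable_names`, `get_variable_value`) resolve group and variable names case-insensitively through `string_in_list`, a helper defined elsewhere in CIME and not shown on this page. A minimal standalone sketch, assuming the helper simply returns the stored spelling on a case-insensitive match:

```python
# Hypothetical stand-in for CIME's string_in_list helper (the real one
# lives elsewhere in the module): return the stored spelling of `needle`
# if any key in `haystack` matches it case-insensitively, else None.
def string_in_list_sketch(needle, haystack):
    for item in haystack:
        if item.lower() == needle.lower():
            return item
    return None

groups = {"foo": {"bar": ["1", "2"]}}
gn = string_in_list_sketch("fOo", groups)
# Return a copy, as get_variable_value does, so that callers cannot
# mutate the namelist's internal value list.
value = groups[gn]["bar"][:] if gn else [""]
```

This case-insensitive lookup is why the doctests above can query group `'fOo'` after parsing `'&foo ... /'`.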
+[docs]
+ def get_value(self, variable_name):
+ """Return the value of a uniquely-named variable.
+
+ This function is similar to `get_variable_value`, except that it does
+ not require a `group_name`, and it requires that the `variable_name` be
+ unique across all groups.
+
+ >>> parse(text='&foo bar=1 / &bazz bar=1 /').get_value('bar') # doctest: +ELLIPSIS +IGNORE_EXCEPTION_DETAIL
+ Traceback (most recent call last):
+ ...
+ CIMEError: ERROR: Namelist.get_value: Variable bar is present in multiple groups: ...
+ >>> parse(text='&foo bar=1 / &bazz /').get_value('Bar')
+ ['1']
+ >>> parse(text='&foo bar(2)=1 / &bazz /').get_value('Bar(2)')
+ ['1']
+ >>> parse(text='&foo / &bazz /').get_value('bar')
+ ['']
+ """
+ possible_groups = []
+ vn = None
+ for group_name in self._groups:
+ vnt = string_in_list(variable_name, self._groups[group_name])
+ if vnt:
+ vn = vnt
+ possible_groups.append(group_name)
+ expect(
+ len(possible_groups) <= 1,
+ "Namelist.get_value: Variable {} is present in multiple groups: {}".format(
+ variable_name, possible_groups
+ ),
+ )
+ if possible_groups:
+ return self._groups[possible_groups[0]][vn]
+ else:
+ return [""]
+ + +
+[docs] + def set_variable_value(self, group_name, variable_name, value, var_size=1): + """Set the value of the specified variable. + + >>> x = parse(text='&foo bar=1 /') + >>> x.get_variable_value('foo', 'bar') + ['1'] + >>> x.set_variable_value('foo', 'bar(2)', ['3'], var_size=4) + >>> x.get_variable_value('foo', 'bar') + ['1', '3'] + >>> x.set_variable_value('foo', 'bar(1)', ['2']) + >>> x.get_variable_value('foo', 'bar') + ['2', '3'] + >>> x.set_variable_value('foo', 'bar', ['1']) + >>> x.get_variable_value('foo', 'bar') + ['1', '3'] + >>> x.set_variable_value('foo', 'bazz', ['3']) + >>> x.set_variable_value('Brack', 'baR', ['4']) + >>> x.get_variable_value('foo', 'bazz') + ['3'] + >>> x.get_variable_value('brack', 'bar') + ['4'] + >>> x.set_variable_value('foo', 'red(2:6:2)', ['2', '4', '6'], var_size=12) + >>> x.get_variable_value('foo', 'red') + ['', '2', '', '4', '', '6'] + """ + minindex, maxindex, step = get_fortran_variable_indices(variable_name, var_size) + variable_name = get_fortran_name_only(variable_name) + + expect( + minindex > 0, + "Indices < 1 not supported in CIME interface to fortran namelists... lower bound={}".format( + minindex + ), + ) + gn = string_in_list(group_name, self._groups) + if not gn: + gn = group_name + self._groups[gn] = {} + + tlen = 1 + vn = string_in_list(variable_name, self._groups[gn]) + if vn: + tlen = len(self._groups[gn][vn]) + else: + vn = variable_name + tlen = 1 + self._groups[gn][vn] = [""] + + if minindex > tlen: + self._groups[gn][vn].extend([""] * (minindex - tlen - 1)) + + for i in range(minindex, maxindex + 2 * step, step): + while len(self._groups[gn][vn]) < i: + self._groups[gn][vn].append("") + self._groups[gn][vn][i - 1] = value.pop(0) + if len(value) == 0: + break
+ + +
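The index bookkeeping in `set_variable_value` can be illustrated with a small standalone sketch (hypothetical names; the real method first parses the `red(2:6:2)` syntax via `get_fortran_variable_indices`): 1-based Fortran indices are mapped onto a 0-based Python list, with unset slots padded by the null value `''`.

```python
# Sketch of the padding/assignment loop above: write new_vals into the
# 1-based slots minindex, minindex+step, ... up to maxindex, growing
# the list with '' (null) placeholders as needed.
def assign_slice(values, minindex, maxindex, step, new_vals):
    new_vals = list(new_vals)
    for i in range(minindex, maxindex + 1, step):
        while len(values) < i:
            values.append("")
        values[i - 1] = new_vals.pop(0)
        if not new_vals:
            break
    return values

# Mirrors the docstring example red(2:6:2) = 2, 4, 6:
red = assign_slice([], 2, 6, 2, ["2", "4", "6"])
```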
+[docs] + def delete_variable(self, group_name, variable_name): + """Delete a variable from a specified group. + + If the specified group or variable does not exist, this is a no-op. + + >>> x = parse(text='&foo bar=1 /') + >>> x.delete_variable('FOO', 'BAR') + >>> x.delete_variable('foo', 'bazz') + >>> x.delete_variable('brack', 'bazz') + >>> x.get_variable_names('foo') + [] + >>> x.get_variable_names('brack') + [] + """ + gn = string_in_list(group_name, self._groups) + if gn: + vn = string_in_list(variable_name, self._groups[gn]) + if vn: + del self._groups[gn][vn]
+ + +
+[docs] + def merge_nl(self, other, overwrite=False): + """Merge this namelist object with another. + + Values in the invoking (`self`) `Namelist` will take precedence over + values in the `other` `Namelist`, unless `overwrite=True` is passed in, + in which case `other` values take precedence. + + >>> x = parse(text='&foo bar=1 bazz=,2 brat=3/') + >>> y = parse(text='&foo bar=2 bazz=3*1 baker=4 / &foo2 barter=5 /') + >>> y.get_value('bazz') + ['1', '1', '1'] + >>> x.merge_nl(y) + >>> sorted(x.get_group_names()) + ['foo', 'foo2'] + >>> sorted(x.get_variable_names('foo')) + ['baker', 'bar', 'bazz', 'brat'] + >>> sorted(x.get_variable_names('foo2')) + ['barter'] + >>> x.get_value('bar') + ['1'] + >>> x.get_value('bazz') + ['1', '2', '1'] + >>> x.get_value('brat') + ['3'] + >>> x.get_value('baker') + ['4'] + >>> x.get_value('barter') + ['5'] + >>> x = parse(text='&foo bar=1 bazz=,2 brat=3/') + >>> y = parse(text='&foo bar=2 bazz=3*1 baker=4 / &foo2 barter=5 /') + >>> x.merge_nl(y, overwrite=True) + >>> sorted(x.get_group_names()) + ['foo', 'foo2'] + >>> sorted(x.get_variable_names('foo')) + ['baker', 'bar', 'bazz', 'brat'] + >>> sorted(x.get_variable_names('foo2')) + ['barter'] + >>> x.get_value('bar') + ['2'] + >>> x.get_value('bazz') + ['1', '1', '1'] + >>> x.get_value('brat') + ['3'] + >>> x.get_value('baker') + ['4'] + >>> x.get_value('barter') + ['5'] + """ + # Pretty simple strategy: go through the entire other namelist, and + # merge all values with this one's. + for group_name in other.get_group_names(): + for variable_name in other.get_variable_names(group_name): + self_val = self.get_variable_value(group_name, variable_name) + other_val = other.get_variable_value(group_name, variable_name) + if overwrite: + merged_val = merge_literal_lists(self_val, other_val) + else: + merged_val = merge_literal_lists(other_val, self_val) + self.set_variable_value( + group_name, variable_name, merged_val, var_size=len(merged_val) + )
+ + +
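`merge_nl` delegates the element-wise combination to `merge_literal_lists`, which is defined elsewhere in this module. Judging from the doctests above, an empty string marks a null value that does not override the other list; a minimal sketch under that assumption (the real helper also expands repeat prefixes such as `3*1` before merging):

```python
# Sketch of the merge semantics implied by the doctests: the second
# (higher-precedence) list wins wherever it has a non-null entry, while
# '' marks a null that falls through to the other list.
def merge_sketch(lower, higher):
    merged = []
    for i in range(max(len(lower), len(higher))):
        lo = lower[i] if i < len(lower) else ""
        hi = higher[i] if i < len(higher) else ""
        merged.append(hi if hi != "" else lo)
    return merged

# x has bazz=,2 and y has bazz=3*1; x.merge_nl(y) keeps x's values:
bazz = merge_sketch(["1", "1", "1"], ["", "2"])
```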
+[docs] + def get_group_variables(self, group_name): + group_variables = {} + group = self._groups[group_name] + for name in sorted(group.keys()): + value = group[name][0] + group_variables[name] = value + return group_variables
+ + +
+[docs]
+ def write(
+ self, out_file, groups=None, append=False, format_="nml", sorted_groups=True
+ ):
+
+ """Write the output data (normally a Fortran namelist) to the out_file
+
+ As with `parse`, the `out_file` argument can be either a file name, or a
+ file object with a `write` method that accepts unicode. If specified,
+ the `groups` argument specifies a subset of all groups to write out.
+
+ If `out_file` is a file name, and `append=True` is passed in, the
+ namelist will be appended to the named file instead of overwriting it.
+ The `append` option has no effect if a file object is passed in.
+
+ The `format_` option can be 'nml' (namelist), 'rc', or 'nmlcontents',
+ and specifies the file format. Formats other than 'nml' may not support
+ all possible output values.
+ """
+ expect(
+ format_ in ("nml", "rc", "nmlcontents"),
+ "Namelist.write: unexpected output format {!r}".format(str(format_)),
+ )
+ if isinstance(out_file, str):
+ logger.debug("Writing namelist to: {}".format(out_file))
+ flag = "a" if append else "w"
+ with open(out_file, flag) as file_obj:
+ self._write(file_obj, groups, format_, sorted_groups=sorted_groups)
+ else:
+ logger.debug("Writing namelist to file object")
+ self._write(out_file, groups, format_, sorted_groups=sorted_groups)
+ + + def _write(self, out_file, groups, format_, sorted_groups): + """Unwrapped version of `write` assuming that a file object is input.""" + if groups is None: + groups = list(self._groups.keys()) + if format_ == "nml" or format_ == "nmlcontents": + equals = " =" + elif format_ == "rc": + equals = ":" + if sorted_groups: + group_names = sorted(group for group in groups) + else: + group_names = groups + for group_name in group_names: + if format_ == "nml": + out_file.write("&{}\n".format(group_name)) + # allow empty group + if group_name in self._groups: + group = self._groups[group_name] + for name in sorted(group.keys()): + values = group[name] + + # @ is used in a namelist to put the same namelist variable in multiple groups + # in the write phase, all characters in the namelist variable name after + # the @ and including the @ should be removed + if "@" in name: + name = re.sub("@.+$", "", name) + + # To prettify things for long lists of values, build strings + # line-by-line. + if values[0] == "True" or values[0] == "False": + values[0] = ( + values[0] + .replace("True", ".true.") + .replace("False", ".false.") + ) + lines = [" {}{} {}".format(name, equals, values[0])] + for value in values[1:]: + if value == "True" or value == "False": + value = value.replace("True", ".true.").replace( + "False", ".false." + ) + if len(lines[-1]) + len(value) <= 77: + lines[-1] += ", " + value + else: + lines[-1] += ",\n" + lines.append(" " + value) + lines[-1] += "\n" + for line in lines: + out_file.write(line) + if format_ == "nml": + out_file.write("/\n") + if format_ == "nmlcontents": + out_file.write("\n") + +
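The private `_write` helper above wraps long value lists at roughly 77 characters per line. A standalone sketch of just that wrapping logic (the indentation widths here are illustrative):

```python
# Sketch of _write's line building: append values to the current line
# until it would pass 77 characters, then start a continuation line.
def wrap_values(name, values, equals=" ="):
    lines = ["  {}{} {}".format(name, equals, values[0])]
    for value in values[1:]:
        if len(lines[-1]) + len(value) <= 77:
            lines[-1] += ", " + value
        else:
            lines[-1] += ",\n"
            lines.append("      " + value)
    lines[-1] += "\n"
    return "".join(lines)

short = wrap_values("bar", ["1", "2", "3"])
wrapped = wrap_values("vals", ["'abcdefghij'"] * 20)
```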
+[docs]
+ def write_nuopc(self, out_file, groups=None, sorted_groups=True):
+ """Write a nuopc config file to out_file
+
+ As with `parse`, the `out_file` argument can be either a file name, or a
+ file object with a `write` method that accepts unicode. If specified,
+ the `groups` argument specifies a subset of all groups to write out.
+ """
+ if isinstance(out_file, str):
+ logger.debug("Writing nuopc config file to: {}".format(out_file))
+ flag = "w"
+ with open(out_file, flag) as file_obj:
+ self._write_nuopc(file_obj, groups, sorted_groups=sorted_groups)
+ else:
+ logger.debug("Writing nuopc config data to file object")
+ self._write_nuopc(out_file, groups, sorted_groups=sorted_groups)
+ + + def _write_nuopc(self, out_file, groups, sorted_groups): + """Unwrapped version of `write` assuming that a file object is input.""" + if groups is None: + groups = self._groups.keys() + + if sorted_groups: + group_names = sorted(group for group in groups) + else: + group_names = groups + + for group_name in group_names: + if ( + "_modelio" not in group_name + and "_attributes" not in group_name + and "nuopc_" not in group_name + and "_no_group" not in group_name + ): + continue + if "_attributes" in group_name or "_modelio" in group_name: + out_file.write("{}::\n".format(group_name)) + indent = True + + group = self._groups[group_name] + for name in sorted(group.keys()): + values = group[name] + + # @ is used in a namelist to put the same namelist variable in multiple groups + # in the write phase, all characters in the namelist variable name after + # the @ and including the @ should be removed + if "@" in name: + name = re.sub("@.+$", "", name) + + equals = " =" + if "_var" in group_name: + equals = ":" + + # To prettify things for long lists of values, build strings + # line-by-line. + if values[0] == "True" or values[0] == "False": + values[0] = ( + values[0].replace("True", ".true.").replace("False", ".false.") + ) + + if indent: + lines = [" {}{} {}".format(name, equals, values[0])] + else: + lines = ["{}{} {}".format(name, equals, values[0])] + + for value in values[1:]: + if value == "True" or value == "False": + value = value.replace("True", ".true.").replace( + "False", ".false." + ) + if len(lines[-1]) + len(value) <= 77: + lines[-1] += ", " + value + else: + lines[-1] += ",\n" + lines.append(" " + value) + lines[-1] += "\n" + for line in lines: + line = line.replace('"', "") + out_file.write(line) + + if indent: + out_file.write("::\n\n") + indent = False
+ + + +class _NamelistEOF(Exception): + + """Exception thrown for an unexpected end-of-file in a namelist. + + This is an internal helper class, and should never be raised in a context + where it would be visible to a user. (Typically it should be caught and + converted to some other error, or ignored.) + """ + + def __init__(self, message=None): + """Create a `_NamelistEOF`, optionally using an error message.""" + super(_NamelistEOF, self).__init__() + self._message = message + + def __str__(self): + """Get an error message suitable for display.""" + string = "Unexpected end of file encountered in namelist." + if self._message is not None: + string += " ({})".format(self._message) + return string + + +class _NamelistParseError(Exception): + + """Exception thrown when namelist input has a syntax error. + + This is an internal helper class, and should never be raised in a context + where it would be visible to a user. (Typically it should be caught and + converted to some other error, or ignored.) + """ + + def __init__(self, message=None): + """Create a `_NamelistParseError`, optionally using an error message.""" + super(_NamelistParseError, self).__init__() + self._message = message + + def __str__(self): + """Get an error message suitable for display.""" + string = "Error in parsing namelist" + if self._message is not None: + string += ": {}".format(self._message) + return string + + +class _NamelistParser(object): # pylint:disable=too-few-public-methods + + """Class to validate and read from Fortran namelist input. + + This is intended to be an internal helper class and should not be used + directly. Use the `parse` function in this module instead. + """ + + def __init__(self, text, groupless=False): + """Create a `_NamelistParser` given text to parse in a string.""" + # Current location within the file. + self._pos = 0 + self._line = 1 + self._col = 0 + # Text and its size. 
+ self._text = str(text) + self._len = len(self._text) + # Dictionary with group names as keys, and dictionaries of variable + # name-value pairs as values. (Or a single flat dictionary if + # `groupless=True`.) + self._settings = collections.OrderedDict() + # Fortran allows setting a particular index of an array + # such as foo(2)='k' + # this dict is set to that value if used. + self._groupless = groupless + + def _line_col_string(self): + r"""Return a string specifying the current line and column number. + + >>> x = _NamelistParser('abc\nd\nef') + >>> x._advance(5) + >>> x._line_col_string() + 'line 2, column 1' + """ + return "line {}, column {}".format(self._line, self._col) + + def _curr(self): + """Return the character at the current position.""" + return self._text[self._pos] + + def _next(self): + """Return the character at the next position. + + >>> shouldRaise(_NamelistEOF, _NamelistParser(' ')._next) + + """ + # If at the end of the file, we should raise _NamelistEOF. The easiest + # way to do this is to just advance. + if self._pos == self._len - 1: + self._advance() + return self._text[self._pos + 1] + + def _advance(self, nchars=1, check_eof=False): + r"""Advance the parser's current position by `nchars` characters. + + The `nchars` argument must be non-negative. If the end of file is + reached, an exception is thrown, unless `check_eof=True` is passed. If + `check_eof=True` is passed, the position is advanced past the end of the + file (`self._pos == `self._len`), and a boolean is returned to signal + whether or not the end of the file was reached. + + >>> _NamelistParser('abcd')._advance(-1) + Traceback (most recent call last): + ... 
+ AssertionError: _NamelistParser attempted to 'advance' backwards + >>> x = _NamelistParser('abc\nd\nef') + >>> (x._pos, x._line, x._col) + (0, 1, 0) + >>> x._advance(0) + >>> (x._pos, x._line, x._col) + (0, 1, 0) + >>> x._advance(2) + >>> (x._pos, x._line, x._col) + (2, 1, 2) + >>> x._advance(1) + >>> (x._pos, x._line, x._col) + (3, 1, 3) + >>> x._advance(1) + >>> (x._pos, x._line, x._col) + (4, 2, 0) + >>> x._advance(3) + >>> (x._pos, x._line, x._col) + (7, 3, 1) + >>> shouldRaise(_NamelistEOF, x._advance, 1) + + >>> shouldRaise(_NamelistEOF, _NamelistParser('abc\n')._advance, 4) + + >>> x = _NamelistParser('ab') + >>> x._advance(check_eof=True) + False + >>> x._curr() + 'b' + >>> x._advance(check_eof=True) + True + """ + assert nchars >= 0, "_NamelistParser attempted to 'advance' backwards" + new_pos = min(self._pos + nchars, self._len) + consumed_text = self._text[self._pos : new_pos] + self._pos = new_pos + lines = consumed_text.count("\n") + self._line += lines + # If we started a new line, set self._col to be relative to the start of + # the current line. + if lines > 0: + self._col = -(consumed_text.rfind("\n") + 1) + self._col += len(consumed_text) + end_of_file = new_pos == self._len + if check_eof: + return end_of_file + elif end_of_file: + raise _NamelistEOF(message=None) + + def _eat_whitespace(self, allow_initial_comment=False): + r"""Advance until the next non-whitespace character. + + Returns a boolean representing whether anything was eaten. Note that + this function also skips past new lines containing comments. Comments in + the current line will be skipped if `allow_initial_comment=True` is + passed in. + + >>> x = _NamelistParser(' \n a ') + >>> x._eat_whitespace() + True + >>> x._curr() + 'a' + >>> x._eat_whitespace() + False + >>> x._advance() + >>> shouldRaise(_NamelistEOF, x._eat_whitespace) + + >>> x = _NamelistParser(' \n! blah\n ! blah\n a') + >>> x._eat_whitespace() + True + >>> x._curr() + 'a' + >>> x = _NamelistParser('! 
blah\n a') + >>> x._eat_whitespace() + False + >>> x._curr() + '!' + >>> x = _NamelistParser(' ! blah\n a') + >>> x._eat_whitespace() + True + >>> x._curr() + '!' + >>> x = _NamelistParser(' ! blah\n a') + >>> x._eat_whitespace(allow_initial_comment=True) + True + >>> x._curr() + 'a' + """ + eaten = False + comment_allowed = allow_initial_comment + while True: + while self._curr() in (" ", "\n"): + comment_allowed |= self._curr() == "\n" + eaten = True + self._advance() + # Note the reliance on short-circuit `and` here. + if not (comment_allowed and self._eat_comment()): + break + return eaten + + def _eat_comment(self): + r"""If currently positioned at a '!', advance past the comment's end. + + Only works properly if not currently inside a comment or string. Returns + a boolean representing whether anything was eaten. + + >>> x = _NamelistParser('! foo\n ! bar\na ! bazz') + >>> x._eat_comment() + True + >>> x._curr() + ' ' + >>> x._eat_comment() + False + >>> x._eat_whitespace() + True + >>> x._eat_comment() + True + >>> x._curr() + 'a' + >>> x._advance(2) + >>> shouldRaise(_NamelistEOF, x._eat_comment) + + >>> x = _NamelistParser('! foo\n') + >>> shouldRaise(_NamelistEOF, x._eat_comment) + + """ + if self._curr() != "!": + return False + newline_pos = self._text[self._pos :].find("\n") + if newline_pos == -1: + # This is the last line. + self._advance(self._len - self._pos) + else: + # Advance to the next line. + self._advance(newline_pos) + # Advance to the first character of the next line. + self._advance() + return True + + def _expect_char(self, chars): + """Raise an error if the wrong character is present. + + Does not return anything, but raises a `_NamelistParseError` if `chars` + does not contain the character at the current position. 
+ + >>> x = _NamelistParser('ab') + >>> x._expect_char('a') + >>> x._advance() + >>> shouldRaise(_NamelistParseError, x._expect_char, 'a') + + >>> x._expect_char('ab') + """ + if self._curr() not in chars: + if len(chars) == 1: + char_description = repr(str(chars)) + else: + char_description = "one of the characters in {!r}".format(str(chars)) + raise _NamelistParseError( + "expected {} but found {!r}".format(char_description, str(self._curr())) + ) + + def _parse_namelist_group_name(self): + r"""Parses and returns a namelist group name at the current position. + + >>> shouldRaise(_NamelistParseError, _NamelistParser('abc')._parse_namelist_group_name) + + >>> shouldRaise(_NamelistEOF, _NamelistParser('&abc')._parse_namelist_group_name) + + >>> _NamelistParser('&abc ')._parse_namelist_group_name() + 'abc' + >>> _NamelistParser('&abc\n')._parse_namelist_group_name() + 'abc' + >>> shouldRaise(_NamelistParseError, _NamelistParser('&abc/ ')._parse_namelist_group_name) + + >>> shouldRaise(_NamelistParseError, _NamelistParser('&abc= ')._parse_namelist_group_name) + + >>> shouldRaise(_NamelistParseError, _NamelistParser('& ')._parse_namelist_group_name) + + """ + self._expect_char("&") + self._advance() + return self._parse_variable_name(allow_equals=False) + + def _parse_variable_name(self, allow_equals=True): + r"""Parses and returns a variable name at the current position. + + The `allow_equals` flag controls whether '=' can denote the end of the + variable name; if it is `False`, only white space can be used for this + purpose. 
+ + >>> shouldRaise(_NamelistEOF, _NamelistParser('abc')._parse_variable_name) + + >>> _NamelistParser('foo(2)= ')._parse_variable_name() + 'foo(2)' + >>> _NamelistParser('abc ')._parse_variable_name() + 'abc' + >>> _NamelistParser('ABC ')._parse_variable_name() + 'ABC' + >>> _NamelistParser('abc\n')._parse_variable_name() + 'abc' + >>> _NamelistParser('abc%fred\n')._parse_variable_name() + 'abc%fred' + >>> _NamelistParser('abc(2)@fred\n')._parse_variable_name() + 'abc(2)@fred' + >>> _NamelistParser('abc(1:2:3)\n')._parse_variable_name() + 'abc(1:2:3)' + >>> _NamelistParser('aBc=')._parse_variable_name() + 'aBc' + >>> try: + ... _NamelistParser('abc(1,2) ')._parse_variable_name() + ... raise AssertionError("_NamelistParseError not raised") + ... except _NamelistParseError: + ... pass + >>> try: + ... _NamelistParser('abc, ')._parse_variable_name() + ... raise AssertionError("_NamelistParseError not raised") + ... except _NamelistParseError: + ... pass + >>> try: + ... _NamelistParser(' ')._parse_variable_name() + ... raise AssertionError("_NamelistParseError not raised") + ... except _NamelistParseError: + ... 
pass + >>> _NamelistParser('foo+= ')._parse_variable_name() + 'foo' + """ + old_pos = self._pos + separators = (" ", "\n", "=", "+") if allow_equals else (" ", "\n") + while self._curr() not in separators: + self._advance() + text = self._text[old_pos : self._pos] + if "(" in text: + expect(")" in text, "Parsing error ") + elif ")" in text: + expect(False, "Parsing error ") + + # @ is used in a namelist to put the same namelist variable in multiple groups + # in the write phase, all characters in the namelist variable name after + # the @ and including the @ should be removed + if "%" in text: + text_check = re.sub("%.+$", "", text) + elif "@" in text: + text_check = re.sub("@.+$", "", text) + else: + text_check = text + + if not is_valid_fortran_name(text_check): + if re.search(r".*\(.*\,.*\)", text_check): + err_str = "Multiple dimensions not supported in CIME namelist variables {!r}".format( + str(text) + ) + else: + err_str = "{!r} is not a valid variable name".format(str(text)) + raise _NamelistParseError(err_str) + return text + + def _parse_character_literal(self): + """Parse and return a character literal (a string). + + Position on return is the last character of the string; we avoid + advancing past that in order to avoid potential EOF errors. + + >>> shouldRaise(_NamelistEOF, _NamelistParser('"abc')._parse_character_literal) + + >>> _NamelistParser('"abc" ')._parse_character_literal() + '"abc"' + >>> _NamelistParser("'abc' ")._parse_character_literal() + "'abc'" + >>> shouldRaise(_NamelistParseError, _NamelistParser("*abc* ")._parse_character_literal) + + >>> _NamelistParser("'abc''def' ")._parse_character_literal() + "'abc''def'" + >>> _NamelistParser("'abc''' ")._parse_character_literal() + "'abc'''" + >>> _NamelistParser("'''abc' ")._parse_character_literal() + "'''abc'" + """ + delimiter = self._curr() + old_pos = self._pos + self._advance() + while True: + while self._curr() != delimiter: + self._advance() + # Avoid end-of-file condition. 
+ if self._pos == self._len - 1: + break + # Doubled delimiters are escaped. + if self._next() == delimiter: + self._advance(2) + else: + break + text = self._text[old_pos : self._pos + 1] + if not is_valid_fortran_namelist_literal("character", text): + raise _NamelistParseError( + "{} is not a valid character literal".format(text) + ) + return text + + def _parse_complex_literal(self): + """Parse and return a complex literal. + + Position on return is the last character of the string; we avoid + advancing past that in order to avoid potential EOF errors. + + >>> shouldRaise(_NamelistEOF, _NamelistParser('(1.,2.')._parse_complex_literal) + + >>> _NamelistParser('(1.,2.) ')._parse_complex_literal() + '(1.,2.)' + >>> shouldRaise(_NamelistParseError, _NamelistParser("(A,B) ")._parse_complex_literal) + + """ + old_pos = self._pos + while self._curr() != ")": + self._advance() + text = self._text[old_pos : self._pos + 1] + if not is_valid_fortran_namelist_literal("complex", text): + raise _NamelistParseError( + "{!r} is not a valid complex literal".format(str(text)) + ) + return text + + def _look_ahead_for_equals(self, pos): + r"""Look ahead to see if the next whitespace character is '='. + + The `pos` argument is the position in the text to start from while + looking. This function returns a boolean. + + >>> _NamelistParser('=')._look_ahead_for_equals(0) + True + >>> _NamelistParser('a \n=')._look_ahead_for_equals(1) + True + >>> _NamelistParser('')._look_ahead_for_equals(0) + False + >>> _NamelistParser('a=')._look_ahead_for_equals(0) + False + """ + for test_pos in range(pos, self._len): + if self._text[test_pos] not in (" ", "\n"): + if self._text[test_pos] == "=": + return True + else: + break + return False + + def _look_ahead_for_plusequals(self, pos): + r"""Look ahead to see if the next two non-whitespace character are '+='. + + The `pos` argument is the position in the text to start from while + looking. This function returns a boolean. 
+ + >>> _NamelistParser('+=')._look_ahead_for_plusequals(0) + True + >>> _NamelistParser('a \n+=')._look_ahead_for_plusequals(1) + True + >>> _NamelistParser('')._look_ahead_for_plusequals(0) + False + >>> _NamelistParser('a+=')._look_ahead_for_plusequals(0) + False + """ + for test_pos in range(pos, self._len): + if self._text[test_pos] not in (" ", "\n"): + if self._text[test_pos] == "+": + return self._look_ahead_for_equals(test_pos + 1) + else: + break + return False + + def _parse_literal(self, allow_name=False, allow_eof_end=False): + r"""Parse and return a variable value at the current position. + + The basic strategy is this: + - If a value starts with an apostrophe/quotation mark, parse it as a + character value (string). + - If a value starts with a left parenthesis, parse it as a complex + number. + - Otherwise, read until the next value separator (comma, space, newline, + or slash). + + If the argument `allow_name=True` is passed in, we allow the possibility + that the current position is at the start of the variable name in a new + name-value pair. In this case, `None` is returned, and the current + position remains unchanged. + + If the argument `allow_eof_end=True` is passed in, we allow end-of-file + to mark the end of a literal. + + >>> _NamelistParser('"abc" ')._parse_literal() + '"abc"' + >>> _NamelistParser("'abc' ")._parse_literal() + "'abc'" + >>> shouldRaise(_NamelistEOF, _NamelistParser('"abc"')._parse_literal) + + >>> _NamelistParser('"abc"')._parse_literal(allow_eof_end=True) + '"abc"' + >>> _NamelistParser('(1.,2.) 
')._parse_literal() + '(1.,2.)' + >>> shouldRaise(_NamelistEOF, _NamelistParser('(1.,2.)')._parse_literal) + + >>> _NamelistParser('(1.,2.)')._parse_literal(allow_eof_end=True) + '(1.,2.)' + >>> _NamelistParser('5 ')._parse_literal() + '5' + >>> _NamelistParser('6.9 ')._parse_literal() + '6.9' + >>> _NamelistParser('inf ')._parse_literal() + 'inf' + >>> _NamelistParser('nan(booga) ')._parse_literal() + 'nan(booga)' + >>> _NamelistParser('.FLORIDA$ ')._parse_literal() + '.FLORIDA$' + >>> shouldRaise(_NamelistParseError, _NamelistParser('hamburger ')._parse_literal) + + >>> _NamelistParser('5,')._parse_literal() + '5' + >>> _NamelistParser('5\n')._parse_literal() + '5' + >>> _NamelistParser('5/')._parse_literal() + '5' + >>> _NamelistParser(',')._parse_literal() + '' + >>> _NamelistParser('6*5 ')._parse_literal() + '6*5' + >>> _NamelistParser('6*(1., 2.) ')._parse_literal() + '6*(1., 2.)' + >>> _NamelistParser('6*"a" ')._parse_literal() + '6*"a"' + >>> shouldRaise(_NamelistEOF, _NamelistParser('6*')._parse_literal) + + >>> _NamelistParser('6*')._parse_literal(allow_eof_end=True) + '6*' + >>> shouldRaise(_NamelistParseError, _NamelistParser('foo= ')._parse_literal) + + >>> shouldRaise(_NamelistParseError, _NamelistParser('foo+= ')._parse_literal) + + >>> _NamelistParser('5,')._parse_literal(allow_name=True) + '5' + >>> x = _NamelistParser('foo= ') + >>> x._parse_literal(allow_name=True) + >>> x._curr() + 'f' + >>> x = _NamelistParser('foo+= ') + >>> x._parse_literal(allow_name=True) + >>> x._curr() + 'f' + >>> shouldRaise(_NamelistParseError, _NamelistParser('6*foo= ')._parse_literal, allow_name=True) + + >>> shouldRaise(_NamelistParseError, _NamelistParser('6*foo+= ')._parse_literal, allow_name=True) + + >>> x = _NamelistParser('foo = ') + >>> x._parse_literal(allow_name=True) + >>> x._curr() + 'f' + >>> x = _NamelistParser('foo\n= ') + >>> x._parse_literal(allow_name=True) + >>> x._curr() + 'f' + >>> _NamelistParser('')._parse_literal(allow_eof_end=True) + '' + """ 
+ # Deal with empty input string. + if allow_eof_end and self._pos == self._len: + return "" + # Deal with a repeated value prefix. + old_pos = self._pos + if FORTRAN_REPEAT_PREFIX_REGEX.search(self._text[self._pos :]): + allow_name = False + while self._curr() != "*": + self._advance() + if self._advance(check_eof=allow_eof_end): + # In case the file ends with the 'r*' form of null value. + return self._text[old_pos:] + prefix = self._text[old_pos : self._pos] + # Deal with delimited literals. + if self._curr() in ('"', "'"): + literal = self._parse_character_literal() + self._advance(check_eof=allow_eof_end) + return prefix + literal + if self._curr() == "(": + literal = self._parse_complex_literal() + self._advance(check_eof=allow_eof_end) + return prefix + literal + # Deal with non-delimited literals. + new_pos = self._pos + separators = [" ", "\n", ",", "/"] + if allow_name: + separators.append("=") + separators.append("+") + while new_pos != self._len and self._text[new_pos] not in separators: + # allow commas if they are inside () + if self._text[new_pos] == "(": + separators.remove(",") + elif self._text[new_pos] == ")": + separators.append(",") + new_pos += 1 + + if not allow_eof_end and new_pos == self._len: + # At the end of the file, give up by throwing an EOF. + self._advance(self._len) + # If `allow_name` is set, we need to check and see if the next non-blank + # character is '=' or the next two are '+=', and return `None` if so. 
+ if allow_name and self._look_ahead_for_equals(new_pos): + return + elif allow_name and self._look_ahead_for_plusequals(new_pos): + return + + self._advance(new_pos - self._pos, check_eof=allow_eof_end) + text = self._text[old_pos : self._pos] + if not any( + is_valid_fortran_namelist_literal(type_, text) + for type_ in ("integer", "logical", "real") + ): + raise _NamelistParseError( + "expected literal value, but got {!r}".format(str(text)) + ) + return text + + def _expect_separator(self, allow_eof=False): + r"""Advance past the current value separator. + + This function raises an error if we are not positioned at a valid value + separator. It returns `False` if the end-of-namelist ('/') was + encountered, in which case this function will leave the current position + at the '/'. This function returns `True` otherwise, and skips to the + location of the next non-whitespace character. + + If `allow_eof=True` is passed to this function, the meanings of '/' and + the end-of-file are reversed. That is, an exception will be raised if a + '/' is encountered, but the end-of-file will cause `False` to be + returned rather than `True`. (An end-of-file after a ',' will be taken + to be part of the next separator, and will not cause `False` to be + returned.) + + >>> x = _NamelistParser("\na") + >>> x._expect_separator() + True + >>> x._curr() + 'a' + >>> x = _NamelistParser(" a") + >>> x._expect_separator() + True + >>> x._curr() + 'a' + >>> x = _NamelistParser(",a") + >>> x._expect_separator() + True + >>> x._curr() + 'a' + >>> x = _NamelistParser("/a") + >>> x._expect_separator() + False + >>> x._curr() + '/' + >>> x = _NamelistParser("a") + >>> shouldRaise(_NamelistParseError, x._expect_separator) + + >>> x = _NamelistParser(" , a") + >>> x._expect_separator() + True + >>> x._curr() + 'a' + >>> x = _NamelistParser(" / a") + >>> x._expect_separator() + False + >>> x._curr() + '/' + >>> x = _NamelistParser(" , ! 
Some stuff\n a") + >>> x._expect_separator() + True + >>> x._curr() + 'a' + >>> x = _NamelistParser(" , ! Some stuff\n ! Other stuff\n a") + >>> x._expect_separator() + True + >>> x._curr() + 'a' + >>> _NamelistParser("")._expect_separator(allow_eof=True) + False + >>> x = _NamelistParser(" ") + >>> x._expect_separator(allow_eof=True) + False + >>> x = _NamelistParser(" ,") + >>> x._expect_separator(allow_eof=True) + True + >>> x = _NamelistParser(" / ") + >>> shouldRaise(_NamelistParseError, x._expect_separator, allow_eof=True) + + """ + errstring = "found group-terminating '/' in file without group names" + # Deal with the possibility that we are already at EOF. + if allow_eof and self._pos == self._len: + return False + # Must actually be at a value separator. + self._expect_char(" \n,/") + try: + self._eat_whitespace() + if self._curr() == "/": + if allow_eof: + raise _NamelistParseError(errstring) + else: + return False + except _NamelistEOF: + if allow_eof: + return False + else: + raise + try: + if self._curr() == ",": + self._advance() + self._eat_whitespace(allow_initial_comment=True) + except _NamelistEOF: + if not allow_eof: + raise + return True + + def _parse_name_and_values(self, allow_eof_end=False): + r"""Parse and return a variable name and values assigned to that name. + + The return value of this function is a tuple containing (a) the name of + the variable in a string, (b) a list of the variable's values, and + (c) whether or not to add the found value to existing variable. Null + values are represented by the empty string. + + If `allow_eof_end=True`, the end of the sequence of values might come + from an empty string rather than a slash. (This is used for the + alternate file format in "groupless" mode.) 
+ + >>> _NamelistParser("foo='bar' /")._parse_name_and_values() + ('foo', ["'bar'"], False) + >>> _NamelistParser("foo(3)='bar' /")._parse_name_and_values() + ('foo(3)', ["'bar'"], False) + >>> _NamelistParser("foo ='bar' /")._parse_name_and_values() + ('foo', ["'bar'"], False) + >>> _NamelistParser("foo=\n'bar' /")._parse_name_and_values() + ('foo', ["'bar'"], False) + >>> shouldRaise(_NamelistParseError, _NamelistParser("foo 'bar' /")._parse_name_and_values) + + >>> _NamelistParser("foo='bar','bazz' /")._parse_name_and_values() + ('foo', ["'bar'", "'bazz'"], False) + >>> _NamelistParser("foo=,,'bazz',6*/")._parse_name_and_values() + ('foo', ['', '', "'bazz'", '6*'], False) + >>> _NamelistParser("foo='bar' 'bazz' foo2='ban'")._parse_name_and_values() + ('foo', ["'bar'", "'bazz'"], False) + >>> _NamelistParser("foo='bar' 'bazz' foo2(2)='ban'")._parse_name_and_values() + ('foo', ["'bar'", "'bazz'"], False) + >>> shouldRaise(_NamelistParseError, _NamelistParser("foo= foo2='ban' ")._parse_name_and_values) + + >>> _NamelistParser("foo=,,'bazz',6* ")._parse_name_and_values(allow_eof_end=True) + ('foo', ['', '', "'bazz'", '6*'], False) + >>> _NamelistParser("foo(3)='bazz'")._parse_name_and_values(allow_eof_end=True) + ('foo(3)', ["'bazz'"], False) + >>> shouldRaise(_NamelistEOF, _NamelistParser("foo=")._parse_name_and_values) + + >>> _NamelistParser("foo=")._parse_name_and_values(allow_eof_end=True) + ('foo', [''], False) + >>> _NamelistParser("foo= ")._parse_name_and_values(allow_eof_end=True) + ('foo', [''], False) + >>> _NamelistParser("foo=2")._parse_name_and_values(allow_eof_end=True) + ('foo', ['2'], False) + >>> _NamelistParser("foo=1,2")._parse_name_and_values(allow_eof_end=True) + ('foo', ['1', '2'], False) + >>> _NamelistParser("foo(1:2)=1,2,3 ")._parse_name_and_values(allow_eof_end=True) # doctest: +IGNORE_EXCEPTION_DETAIL + Traceback (most recent call last): + ... 
+ CIMEError: ERROR: Too many values for array foo(1:2) + >>> _NamelistParser("foo=1,")._parse_name_and_values(allow_eof_end=True) + ('foo', ['1', ''], False) + >>> _NamelistParser("foo+=1")._parse_name_and_values(allow_eof_end=True) + ('foo', ['1'], True) + """ + name = self._parse_variable_name() + addto = False # This keeps track of whether += existed + + self._eat_whitespace() + # check to see if we have a "+=" + if self._curr() == "+": + self._advance() + addto = True # tell parser that we want to add to dictionary values + self._expect_char("=") + try: + self._advance() + self._eat_whitespace() + except _NamelistEOF: + # If we hit the end of file, return a name assigned to a null value. + if allow_eof_end: + return name, [""], addto + else: + raise + # Expect at least one literal, even if it's a null value. + values = [self._parse_literal(allow_eof_end=allow_eof_end)] + # While we haven't reached the end of the namelist group... + while self._expect_separator(allow_eof=allow_eof_end): + # see if we can parse a literal (we might get a variable name)... + literal = self._parse_literal(allow_name=True, allow_eof_end=allow_eof_end) + if literal is None: + break + # and if it really is a literal, add it. + values.append(literal) + (minindex, maxindex, step) = get_fortran_variable_indices( + name, allow_any_len=True + ) + if (minindex > 1 or maxindex > minindex or step > 1) and maxindex > 0: + arraylen = max(0, 1 + ((maxindex - minindex) / step)) + expect(len(values) <= arraylen, "Too many values for array {}".format(name)) + + return name, values, addto + + def _parse_namelist_group(self): + r"""Parse an entire namelist group, adding info to `self._settings`. + + This function assumes that we start at the beginning of the group name + (e.g. '&'), and will return at the end of the namelist group ('/'). 
+ + >>> x = _NamelistParser("&group /") + >>> x._parse_namelist_group() + >>> x._settings + OrderedDict([('group', {})]) + >>> x._curr() + '/' + >>> x = _NamelistParser("&group\n foo='bar','bazz'\n,, foo2=2*5\n /") + >>> x._parse_namelist_group() + >>> x._settings + OrderedDict([('group', {'foo': ["'bar'", "'bazz'", ''], 'foo2': ['5', '5']})]) + >>> x = _NamelistParser("&group\n foo='bar','bazz'\n,, foo2=2*5\n /", groupless=True) + >>> x._parse_namelist_group() + >>> x._settings + OrderedDict([('foo', ["'bar'", "'bazz'", '']), ('foo2', ['5', '5'])]) + >>> x._curr() + '/' + >>> x = _NamelistParser("&group /&group /") + >>> x._parse_namelist_group() + >>> x._advance() + >>> shouldRaise(_NamelistParseError, x._parse_namelist_group) + + >>> x = _NamelistParser("&group foo='bar', foo='bazz' /") + >>> x._parse_namelist_group() + >>> x._settings + OrderedDict([('group', {'foo': ["'bazz'"]})]) + >>> x = _NamelistParser("&group foo='bar', foo= /") + >>> x._parse_namelist_group() + >>> x._settings + OrderedDict([('group', {'foo': ["'bar'"]})]) + >>> x = _NamelistParser("&group foo='bar', foo= /", groupless=True) + >>> x._parse_namelist_group() + >>> x._settings + OrderedDict([('foo', ["'bar'"])]) + >>> x = _NamelistParser("&group foo='bar', foo+='baz' /", groupless=True) + >>> x._parse_namelist_group() + >>> x._settings + OrderedDict([('foo', ["'bar'", "'baz'"])]) + >>> x = _NamelistParser("&group foo+='bar' /", groupless=True) + >>> x._parse_namelist_group() + >>> x._settings + OrderedDict([('foo', ["'bar'"])]) + >>> x = _NamelistParser("&group foo='bar', foo+='baz' /") + >>> x._parse_namelist_group() + >>> x._settings + OrderedDict([('group', {'foo': ["'bar'", "'baz'"]})]) + >>> x = _NamelistParser("&group foo+='bar' /") + >>> x._parse_namelist_group() + >>> x._settings + OrderedDict([('group', {'foo': ["'bar'"]})]) + """ + group_name = self._parse_namelist_group_name() + if not self._groupless: + # Make sure that this is the first time we've seen this group. 
+ if group_name in self._settings: + raise _NamelistParseError( + "Namelist group {!r} encountered twice.".format(str(group_name)) + ) + self._settings[group_name] = {} + self._eat_whitespace() + while self._curr() != "/": + name, values, addto = self._parse_name_and_values() + dsettings = [] + if self._groupless: + if name in self._settings: + dsettings = self._settings[name] + if addto: + values = self._settings[name] + values + if not addto: + values = merge_literal_lists(dsettings, values) + self._settings[name] = values + else: + group = self._settings[group_name] + if name in group: + dsettings = group[name] + if addto: + values = group[name] + values + if not addto: + values = merge_literal_lists(dsettings, values) + group[name] = values + + def parse_namelist(self): + r"""Parse the contents of an entire namelist file. + + Returned information is a dictionary of dictionaries, mapping variables + first by namelist group name, then by variable name. + + >>> _NamelistParser("").parse_namelist() + OrderedDict() + >>> _NamelistParser(" \n!Comment").parse_namelist() + OrderedDict() + >>> _NamelistParser(" &group /").parse_namelist() + OrderedDict([('group', {})]) + >>> _NamelistParser("! Comment \n &group /! Comment\n ").parse_namelist() + OrderedDict([('group', {})]) + >>> _NamelistParser("! Comment \n &group /! 
Comment ").parse_namelist() + OrderedDict([('group', {})]) + >>> _NamelistParser("&group1\n foo='bar','bazz'\n,, foo2=2*5\n / &group2 /").parse_namelist() + OrderedDict([('group1', {'foo': ["'bar'", "'bazz'", ''], 'foo2': ['5', '5']}), ('group2', {})]) + >>> _NamelistParser("!blah \n foo='bar','bazz'\n,, foo2=2*5\n ", groupless=True).parse_namelist() + OrderedDict([('foo', ["'bar'", "'bazz'", '']), ('foo2', ['2*5'])]) + >>> _NamelistParser("!blah \n foo='bar','bazz'\n,, foo2=2*5,6\n ", groupless=True).parse_namelist() + OrderedDict([('foo', ["'bar'", "'bazz'", '']), ('foo2', ['2*5', '6'])]) + >>> _NamelistParser("!blah \n foo='bar'", groupless=True).parse_namelist() + OrderedDict([('foo', ["'bar'"])]) + >>> _NamelistParser("foo='bar', foo(3)='bazz'", groupless=True).parse_namelist() + OrderedDict([('foo', ["'bar'"]), ('foo(3)', ["'bazz'"])]) + >>> _NamelistParser("foo(2)='bar'", groupless=True).parse_namelist() + OrderedDict([('foo(2)', ["'bar'"])]) + >>> _NamelistParser("foo(2)='bar', foo(3)='bazz'", groupless=True).parse_namelist() + OrderedDict([('foo(2)', ["'bar'"]), ('foo(3)', ["'bazz'"])]) + >>> _NamelistParser("foo='bar', foo='bazz'", groupless=True).parse_namelist() + OrderedDict([('foo', ["'bazz'"])]) + >>> _NamelistParser("foo='bar'\n foo+='bazz'", groupless=True).parse_namelist() + OrderedDict([('foo', ["'bar'", "'bazz'"])]) + >>> _NamelistParser("foo='bar', foo='bazz'", groupless=True).parse_namelist() + OrderedDict([('foo', ["'bazz'"])]) + >>> _NamelistParser("foo='bar', foo=", groupless=True).parse_namelist() + OrderedDict([('foo', ["'bar'"])]) + >>> _NamelistParser("foo='bar', 'bazz'\n foo+='ban'", groupless=True).parse_namelist() + OrderedDict([('foo', ["'bar'", "'bazz'", "'ban'"])]) + >>> _NamelistParser("foo+='bar'", groupless=True).parse_namelist() + OrderedDict([('foo', ["'bar'"])]) + """ + # Return empty dictionary for empty files. 
+ if self._len == 0: + return self._settings + # Remove initial whitespace and comments, and return empty dictionary if + # that's all we have. + try: + self._eat_whitespace(allow_initial_comment=True) + except _NamelistEOF: + return self._settings + # Handle case with no namelist groups. + if self._groupless and self._curr() != "&": + while self._pos < self._len: + name, values, addto = self._parse_name_and_values(allow_eof_end=True) + if name in self._settings: + if addto: + values = self._settings[name] + values + else: + values = merge_literal_lists(self._settings[name], values) + self._settings[name] = values + return self._settings + # Loop over namelist groups in the file. + while True: + self._parse_namelist_group() + # After each group, try to move forward to the next one. If we run + # out of text, return what we've found. + try: + self._advance() + self._eat_whitespace(allow_initial_comment=True) + except _NamelistEOF: + return self._settings +
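The parser above accepts Fortran's repeated-value prefix (the `r*c` form, e.g. `2*5`, and the bare `3*` null form). The expansion that form implies can be illustrated with a standalone sketch; `expand_repeats` below is a hypothetical helper written for this page, not CIME's `expand_literal_list`:

```python
import re

# Sketch (not CIME's implementation): expand Fortran repeat prefixes such as
# "2*5" -> ['5', '5'] and the bare null form "3*" -> ['', '', ''].
_REPEAT_RE = re.compile(r"^(?P<count>\d+)\*(?P<value>.*)$")

def expand_repeats(literals):
    """Expand 'r*c' repeat-prefixed literals into explicit value lists."""
    out = []
    for lit in literals:
        m = _REPEAT_RE.match(lit)
        if m:
            # Repeat the (possibly empty) value 'count' times.
            out.extend([m.group("value")] * int(m.group("count")))
        else:
            out.append(lit)
    return out

print(expand_repeats(["'bar'", "2*5", "3*"]))  # ["'bar'", '5', '5', '', '', '']
```

This is the inverse of the compression the parser performs when it stores `6*` as a single literal token.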
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/nmlgen.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/nmlgen.html new file mode 100644 index 00000000000..4d4e74afb99 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/nmlgen.html @@ -0,0 +1,1129 @@ + + + + + + CIME.nmlgen — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.nmlgen

+"""Class for generating component namelists."""
+
+# Typically ignore this.
+# pylint: disable=invalid-name
+
+# Disable these because this is our standard setup
+# pylint: disable=wildcard-import,unused-wildcard-import
+
+import datetime
+import re
+import hashlib
+
+from CIME.XML.standard_module_setup import *
+from CIME.namelist import (
+    Namelist,
+    parse,
+    character_literal_to_string,
+    string_to_character_literal,
+    expand_literal_list,
+    compress_literal_list,
+    merge_literal_lists,
+)
+from CIME.XML.namelist_definition import NamelistDefinition
+from CIME.utils import expect, safe_copy
+from CIME.XML.stream import Stream
+from CIME.XML.grids import GRID_SEP
+
+logger = logging.getLogger(__name__)
+
+_var_ref_re = re.compile(r"\$(\{)?(?P<name>\w+)(?(1)\})")
+
+_ymd_re = re.compile(r"%(?P<digits>[1-9][0-9]*)?y(?P<month>m(?P<day>d)?)?")
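The two module-level patterns above drive `$VAR`/`${VAR}` reference expansion and `%y`/`%ym`/`%ymd` date substitution later in this module. A quick standalone exercise of the same regexes (the match strings are hypothetical):

```python
import re

# The same two patterns as the module-level _var_ref_re and _ymd_re above.
var_ref = re.compile(r"\$(\{)?(?P<name>\w+)(?(1)\})")
ymd = re.compile(r"%(?P<digits>[1-9][0-9]*)?y(?P<month>m(?P<day>d)?)?")

# $VAR and ${VAR} both match; the conditional (?(1)\}) requires the closing
# brace only when the opening brace was present.
m = var_ref.search("path/$DIN_LOC_ROOT/file")
print(m.group("name"))  # DIN_LOC_ROOT

# %2ymd: optional digit-count prefix, with nested month and day groups.
m = ymd.search("data.%2ymd.nc")
print(m.group("digits"), m.group("month"), m.group("day"))  # 2 md d
```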
+
+_stream_mct_file_template = """<?xml version="1.0"?>
+<file id="stream" version="1.0">
+<dataSource>
+   GENERIC
+</dataSource>
+<domainInfo>
+  <variableNames>
+     {domain_varnames}
+  </variableNames>
+  <filePath>
+     {domain_filepath}
+  </filePath>
+  <fileNames>
+     {domain_filenames}
+  </fileNames>
+</domainInfo>
+<fieldInfo>
+   <variableNames>
+     {data_varnames}
+   </variableNames>
+   <filePath>
+     {data_filepath}
+   </filePath>
+   <fileNames>
+    {data_filenames}
+   </fileNames>
+   <offset>
+      {offset}
+   </offset>
+</fieldInfo>
+</file>
+"""
+
+
+
+[docs]
+class NamelistGenerator(object):
+
+    """Utility class for generating namelists for a given component."""
+
+    _streams_variables = []
+
+    # pylint:disable=too-many-arguments
+    def __init__(self, case, definition_files, files=None):
+        """Construct a namelist generator.
+
+        Arguments:
+        `case` - `Case` object corresponding to the current case.
+        `definition_files` - List of XML files containing namelist definitions.
+        `files` - Associated `Files` object (optional); passed through to the
+            `NamelistDefinition` constructor.
+        """
+        # Save off important information from inputs.
+        self._case = case
+        self._din_loc_root = case.get_value("DIN_LOC_ROOT")
+
+        # Create definition object - this will validate the xml schema in the definition file
+        self._definition = NamelistDefinition(definition_files[0], files=files)
+
+        # Determine array of _stream_variables from definition object
+        # This is only applicable to data models
+        self._streams_namelists = {"streams": []}
+        self._streams_variables = self._definition.get_per_stream_entries()
+        for variable in self._streams_variables:
+            self._streams_namelists[variable] = []
+
+        # Create namelist object.
+        self._namelist = Namelist()
+
+        # entries for which we should potentially call add_default (variables that do not
+        # set skip_default_entry)
+        self._default_nodes = []
+
+    # Define __enter__ and __exit__ so that we can use this as a context manager
+    def __enter__(self):
+        return self
+
+    def __exit__(self, *_):
+        return False
+
+[docs] + def init_defaults( + self, + infiles, + config, + skip_groups=None, + skip_entry_loop=False, + skip_default_for_groups=None, + set_group_name=None, + ): + """Return array of names of all definition nodes + + infiles should be a list of file paths, each one giving namelist settings that + take precedence over the default values. Often there will be only one file in this + list. If there are multiple files, earlier files take precedence over later files. + + If skip_default_for_groups is provided, it should be a list of namelist group + names; the add_default call will not be done for any variables in these + groups. This is often paired with later conditional calls to + add_defaults_for_group. + + """ + if skip_default_for_groups is None: + skip_default_for_groups = [] + + # first clean out any settings left over from previous calls + self.new_instance() + + # Determine the array of entry nodes that will be acted upon + self._default_nodes = self._definition.set_nodes(skip_groups=skip_groups) + + # Add attributes to definition object + self._definition.add_attributes(config) + + # Parse the infile and create namelist settings for the contents of infile + # this will override all other settings in add_defaults + for file_ in infiles: + # Parse settings in "groupless" mode. + nml_dict = parse(in_file=file_, groupless=True) + + # Add groups using the namelist definition. + new_namelist = self._definition.dict_to_namelist(nml_dict, filename=file_) + + # Make sure that the input is actually valid. + self._definition.validate(new_namelist, filename=file_) + + # Merge into existing settings (earlier settings have precedence + # over later settings). 
+ self._namelist.merge_nl(new_namelist) + + if not skip_entry_loop: + for entry in self._default_nodes: + if set_group_name: + group_name = set_group_name + else: + group_name = self._definition.get_group_name(entry) + if not group_name in skip_default_for_groups: + self.add_default(self._definition.get(entry, "id")) + + return [self._definition.get(entry, "id") for entry in self._default_nodes]
+ + +
+[docs] + def rename_group(self, group, newgroup): + """Pass through to namelist definition""" + return self._definition.rename_group(group, newgroup)
+ + +
+[docs] + def add_defaults_for_group(self, group): + """Call add_default for namelist variables in the given group + + This still skips variables that have attributes of skip_default_entry or + per_stream_entry. + + This must be called after init_defaults. It is often paired with use of + skip_default_for_groups in the init_defaults call. + """ + for entry in self._default_nodes: + group_name = self._definition.get_group_name(entry) + if group_name == group: + self.add_default(self._definition.get(entry, "id"))
+ + +
+[docs]
+    def confirm_group_is_empty(self, group_name, errmsg):
+        """Confirms that no values have been added to the given group
+
+        If any values HAVE been added to this group, aborts with the given error message.
+
+        This is often paired with use of skip_default_for_groups in the init_defaults call
+        and add_defaults_for_group, as in:
+
+        if nmlgen.get_value("enable_frac_overrides") == ".true.":
+            nmlgen.add_defaults_for_group("glc_override_nml")
+        else:
+            nmlgen.confirm_group_is_empty("glc_override_nml", "some message")
+
+        Args:
+        group_name: string - name of namelist group
+        errmsg: string - error message to print if group is not empty
+        """
+        variables_in_group = self._namelist.get_variable_names(group_name)
+        fullmsg = "{}\nOffending variables: {}".format(errmsg, variables_in_group)
+        expect(len(variables_in_group) == 0, fullmsg)
+ + +
+[docs] + @staticmethod + def quote_string(string): + """Convert a string to a quoted Fortran literal. + + Does nothing if the string appears to be quoted already. + """ + if string == "" or (string[0] not in ('"', "'") or string[0] != string[-1]): + string = string_to_character_literal(string) + return string
+ + + def _to_python_value(self, name, literals): + """Transform a literal list as needed for `get_value`.""" + ( + var_type, + _, + var_size, + ) = self._definition.split_type_string(name) + if len(literals) > 0 and literals[0] is not None: + values = expand_literal_list(literals) + else: + return "" + + for i, scalar in enumerate(values): + if scalar == "": + values[i] = None + elif var_type == "character": + values[i] = character_literal_to_string(scalar) + + if var_size == 1: + return values[0] + else: + return values + + def _to_namelist_literals(self, name, values): + """Transform a literal list as needed for `set_value`. + + This is the inverse of `_to_python_value`, except that many of the + changes have potentially already been performed. + """ + ( + var_type, + _, + var_size, + ) = self._definition.split_type_string(name) + if var_size == 1 and not isinstance(values, list): + values = [values] + + for i, scalar in enumerate(values): + if scalar is None: + values[i] = "" + elif var_type == "character": + expect(not isinstance(scalar, list), name) + values[i] = self.quote_string(scalar) + + return compress_literal_list(values) + +
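`_to_python_value` and `_to_namelist_literals` above round-trip Fortran character literals through CIME's `character_literal_to_string` and `string_to_character_literal`. A standalone sketch of that round trip (the Fortran quote-doubling convention), using hypothetical helper names rather than the CIME functions themselves:

```python
# Sketch of the Fortran character-literal round trip (hypothetical helpers,
# not CIME's character_literal_to_string/string_to_character_literal).
def to_fortran_literal(s, quote="'"):
    """Quote a Python string as a Fortran character literal, doubling any
    embedded quote characters."""
    return quote + s.replace(quote, quote * 2) + quote

def from_fortran_literal(lit):
    """Strip the delimiting quotes and undouble embedded quotes."""
    quote = lit[0]
    return lit[1:-1].replace(quote * 2, quote)

lit = to_fortran_literal("it's here")
print(lit)                        # 'it''s here'
print(from_fortran_literal(lit))  # it's here
```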
+[docs] + def get_value(self, name): + """Get the current value of a given namelist variable. + + Note that the return value of this function is always a string or a list + of strings. E.g. the scalar logical value .false. will be returned as + `".false."`, while an array of two .false. values will be returned as + `[".false.", ".false."]`. Whether or not a value is scalar is determined + by checking the array size in the namelist definition file. + + Null values are converted to `None`, and repeated values are expanded, + e.g. `['2*3']` is converted to `['3', '3', '3']`. + + For character variables, the value is converted to a Python string (e.g. + quotation marks are removed). + + All other literals are returned as the raw string values that will be + written to the namelist. + """ + return self._to_python_value(name, self._namelist.get_value(name))
+ + +
+[docs] + def set_value(self, name, value): + """Set the current value of a given namelist variable. + + Usually, you should use `add_default` instead of this function. + + The `name` argument is the name of the variable to set, and the `value` + is a list of strings to use as settings. If the variable is scalar, the + list is optional; i.e. a scalar logical can be set using either + `value='.false.'` or `value=['.false.']`. If the variable is of type + character, and the input is missing quotes, quotes will be added + automatically. If `None` is provided in place of a string, this will be + translated to a null value. + + Note that this function will overwrite the current value, which may hold + a user-specified setting. Even if `value` is (or contains) a null value, + the old setting for the variable will be thrown out completely. + """ + var_group = self._definition.get_group(name) + literals = self._to_namelist_literals(name, value) + ( + _, + _, + var_size, + ) = self._definition.split_type_string(name) + if len(literals) > 0 and literals[0] is not None: + self._namelist.set_variable_value(var_group, name, literals, var_size)
+ + +
+[docs] + def get_default(self, name, config=None, allow_none=False): + """Get the value of a variable from the namelist definition file. + + The `config` argument is passed through to the underlying + `NamelistDefaults.get_value` call as the `attribute` argument. + + The return value of this function is a list of values that were found in + the defaults file. If there is no matching default, this function + returns `None` if `allow_none=True` is passed, otherwise an error is + raised. + + Note that we perform some translation of the values, since there are a + few differences between Fortran namelist literals and values in the + defaults file: + 1) In the defaults file, whitespace is ignored except within strings, so + the output of this function strips out most whitespace. (This implies + that commas are the only way to separate array elements in the + defaults file.) + 2) In the defaults file, quotes around character literals (strings) are + optional, as long as the literal does not contain whitespace, commas, + or (single or double) quotes. If a setting for a character variable + does not seem to have quotes (and is not a null value), this function + will add them. + 3) Default values may refer to variables in a case's `env_*.xml` files. + This function replaces references of the form `$VAR` or `${VAR}` with + the value of the variable `VAR` in an env file, if that variable + exists. This behavior is suppressed within single-quoted strings + (similar to parameter expansion in shell scripts). + """ + default = self._definition.get_value_match( + name, attributes=config, exact_match=False + ) + if default is None: + expect(allow_none, "No default value found for {}.".format(name)) + return None + default = expand_literal_list(default) + + var_type, _, _ = self._definition.split_type_string(name) + + for i, scalar in enumerate(default): + # Skip single-quoted strings. 
+ if ( + var_type == "character" + and scalar != "" + and scalar[0] == scalar[-1] == "'" + ): + continue + match = _var_ref_re.search(scalar) + while match: + env_val = self._case.get_value(match.group("name")) + if env_val is not None: + scalar = scalar.replace(match.group(0), str(env_val), 1) + match = _var_ref_re.search(scalar) + else: + scalar = None + logger.warning( + "Namelist default for variable {} refers to unknown XML variable {}.".format( + name, match.group("name") + ) + ) + match = None + default[i] = scalar + + # Deal with missing quotes. + + if var_type == "character": + for i, scalar in enumerate(default): + # Preserve null values. + if scalar != "": + default[i] = self.quote_string(scalar) + + default = self._to_python_value(name, default) + + return default
+ + +
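`get_default` above expands `$VAR` and `${VAR}` references against the case's XML variables, replacing one occurrence at a time until none remain. The same loop, sketched standalone with a plain dict standing in for the case (`expand_refs` is a hypothetical helper):

```python
import re

# Standalone sketch of the $VAR / ${VAR} expansion loop in get_default,
# with a plain dict in place of the case's env_*.xml variables.
_var_ref = re.compile(r"\$(\{)?(?P<name>\w+)(?(1)\})")

def expand_refs(value, env):
    match = _var_ref.search(value)
    while match:
        name = match.group("name")
        if name not in env:
            # Unknown reference: give up on this value, roughly as
            # get_default does (it nulls the value and logs a warning).
            return None
        value = value.replace(match.group(0), str(env[name]), 1)
        match = _var_ref.search(value)
    return value

print(expand_refs("$DIN_LOC_ROOT/atm/${CASE}.nc",
                  {"DIN_LOC_ROOT": "/data", "CASE": "test1"}))  # /data/atm/test1.nc
```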
+[docs] + def get_streams(self): + """Get a list of all streams used for the current data model mode.""" + return self.get_default("streamslist")
+ + +
+[docs] + def clean_streams(self): + for variable in self._streams_variables: + self._streams_namelists[variable] = [] + self._streams_namelists["streams"] = []
+ + +
+[docs] + def new_instance(self): + """Clean the object just enough to introduce a new instance""" + self.clean_streams() + self._namelist.clean_groups()
+ + + def _sub_fields(self, varnames): + """Substitute indicators with given values in a list of fields. + + Replace any instance of the following substring indicators with the + appropriate values: + %glc = two-digit GLC elevation class from 00 through glc_nec + + The difference between this function and `_sub_paths` is that this + function is intended to be used for variable names (especially from the + `strm_datvar` defaults), whereas `_sub_paths` is intended for use on + input data file paths. + + Returns a string. + + Example: If `_sub_fields` is called with an array containing two + elements, each of which contains two strings, and glc_nec=3: + foo bar + s2x_Ss_tsrf%glc tsrf%glc + then the returned array will be: + foo bar + s2x_Ss_tsrf00 tsrf00 + s2x_Ss_tsrf01 tsrf01 + s2x_Ss_tsrf02 tsrf02 + s2x_Ss_tsrf03 tsrf03 + """ + lines = varnames.split("\n") + new_lines = [] + for line in lines: + if not line: + continue + if "%glc" in line: + if self._case.get_value("GLC_NEC") == 0: + glc_nec_indices = [] + else: + glc_nec_indices = range(self._case.get_value("GLC_NEC") + 1) + for i in glc_nec_indices: + new_lines.append(line.replace("%glc", "{:02d}".format(i))) + else: + new_lines.append(line) + return "\n".join(new_lines) + + @staticmethod + def _days_in_month(month, year=1): + """Number of days in the given month (specified as an int, 1-12). + + The `year` argument gives the year for which to request the number of + days, in a Gregorian calendar. Defaults to `1` (not a leap year). + """ + month_start = datetime.date(year, month, 1) + if month == 12: + next_year = year + 1 + next_month = 1 + else: + next_year = year + next_month = month + 1 + next_month_start = datetime.date(next_year, next_month, 1) + return (next_month_start - month_start).days + + def _sub_paths(self, filenames, year_start, year_end): + """Substitute indicators with given values in a list of filenames. 
+ + Replace any instance of the following substring indicators with the + appropriate values: + %y = year from the range year_start to year_end + %ym = year-month from the range year_start to year_end with all 12 + months + %ymd = year-month-day from the range year_start to year_end with + all 12 months + + For the date indicators, the year may be prefixed with a number of + digits to use (the default is 4). E.g. `%2ymd` can be used to change the + number of year digits from 4 to 2. + + Note that we assume that there is no mixing and matching of date + indicators, i.e. you cannot use `%4ymd` and `%2y` in the same line. Note + also that we use a no-leap calendar, i.e. every month has the same + number of days every year. + + The difference between this function and `_sub_fields` is that this + function is intended to be used for file names (especially from the + `strm_datfil` defaults), whereas `_sub_fields` is intended for use on + variable names. + + Returns a string (filenames separated by newlines). 
+ """ + lines = [line for line in filenames.split("\n") if line] + new_lines = [] + for line in lines: + match = _ymd_re.search(filenames) + if match is None: + new_lines.append(line) + continue + if match.group("digits"): + year_format = "{:0" + match.group("digits") + "d}" + else: + year_format = "{:04d}" + for year in range(year_start, year_end + 1): + if match.group("day"): + for month in range(1, 13): + days = self._days_in_month(month) + for day in range(1, days + 1): + date_string = (year_format + "-{:02d}-{:02d}").format( + year, month, day + ) + new_line = line.replace(match.group(0), date_string) + new_lines.append(new_line) + elif match.group("month"): + for month in range(1, 13): + date_string = (year_format + "-{:02d}").format(year, month) + new_line = line.replace(match.group(0), date_string) + new_lines.append(new_line) + else: + date_string = year_format.format(year) + new_line = line.replace(match.group(0), date_string) + new_lines.append(new_line) + return "\n".join(new_lines) + + @staticmethod + def _add_xml_delimiter(list_to_deliminate, delimiter): + expect(delimiter and not " " in delimiter, "Missing or badly formed delimiter") + pred = "<{}>".format(delimiter) + postd = "</{}>".format(delimiter) + for n, _ in enumerate(list_to_deliminate): + list_to_deliminate[n] = pred + list_to_deliminate[n].strip() + postd + return "\n ".join(list_to_deliminate) + +
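`_sub_paths` above expands a date indicator into one filename per date in the range, assuming a no-leap calendar. A minimal standalone sketch of the year-only case (`expand_years` is a hypothetical helper covering `%y` with a configurable digit count; the month/day forms add inner loops in the same way):

```python
# Sketch of the year-only branch of _sub_paths: replace a date token with a
# zero-padded year, once per year in [year_start, year_end].
def expand_years(line, token, year_start, year_end, digits=4):
    fmt = "{:0" + str(digits) + "d}"
    return [line.replace(token, fmt.format(year))
            for year in range(year_start, year_end + 1)]

print(expand_years("file.%y.nc", "%y", 1999, 2001))
# ['file.1999.nc', 'file.2000.nc', 'file.2001.nc']
```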
+[docs] + def create_stream_file_and_update_shr_strdata_nml( + self, + config, + caseroot, # pylint:disable=too-many-locals + stream, + stream_path, + data_list_path, + ): + """Write the pseudo-XML file corresponding to a given stream. + + Arguments: + `config` - Used to look up namelist defaults. This is used *in addition* + to the `config` used to construct the namelist generator. The + main reason to supply additional configuration options here + is to specify stream-specific settings. + `stream` - Name of the stream. + `stream_path` - Path to write the stream file to. + `data_list_path` - Path of file to append input data information to. + """ + + if os.path.exists(stream_path): + os.unlink(stream_path) + user_stream_path = os.path.join( + caseroot, "user_" + os.path.basename(stream_path) + ) + + # Use the user's stream file, or create one if necessary. + config = config.copy() + config["stream"] = stream + + # Stream-specific configuration. + if os.path.exists(user_stream_path): + safe_copy(user_stream_path, stream_path) + strmobj = Stream(infile=stream_path) + domain_filepath = strmobj.get_value("domainInfo/filePath") + data_filepath = strmobj.get_value("fieldInfo/filePath") + domain_filenames = strmobj.get_value("domainInfo/fileNames") + data_filenames = strmobj.get_value("fieldInfo/fileNames") + else: + # Figure out the details of this stream. + if stream in ("prescribed", "copyall"): + # Assume only one file for prescribed mode! 
+ grid_file = self.get_default("strm_grid_file", config) + domain_filepath, domain_filenames = os.path.split(grid_file) + data_file = self.get_default("strm_data_file", config) + data_filepath, data_filenames = os.path.split(data_file) + else: + domain_filepath = self.get_default("strm_domdir", config) + domain_filenames = self.get_default("strm_domfil", config) + data_filepath = self.get_default("strm_datdir", config) + data_filenames = self.get_default("strm_datfil", config) + + domain_varnames = self._sub_fields(self.get_default("strm_domvar", config)) + data_varnames = self._sub_fields(self.get_default("strm_datvar", config)) + offset = self.get_default("strm_offset", config) + year_start = int(self.get_default("strm_year_start", config)) + year_end = int(self.get_default("strm_year_end", config)) + data_filenames = self._sub_paths(data_filenames, year_start, year_end) + domain_filenames = self._sub_paths(domain_filenames, year_start, year_end) + + # Overwrite domain_file if should be set from stream data + if domain_filenames == "null": + domain_filepath = data_filepath + domain_filenames = data_filenames.splitlines()[0] + + stream_file_text = _stream_mct_file_template.format( + domain_varnames=domain_varnames, + domain_filepath=domain_filepath, + domain_filenames=domain_filenames, + data_varnames=data_varnames, + data_filepath=data_filepath, + data_filenames=data_filenames, + offset=offset, + ) + + with open(stream_path, "w") as stream_file: + stream_file.write(stream_file_text) + + lines_hash = self._get_input_file_hash(data_list_path) + with open(data_list_path, "a") as input_data_list: + for i, filename in enumerate(domain_filenames.split("\n")): + if filename.strip() == "": + continue + filepath, filename = os.path.split(filename) + if not filepath: + filepath = os.path.join(domain_filepath, filename.strip()) + string = "domain{:d} = {}\n".format(i + 1, filepath) + hashValue = hashlib.md5(string.rstrip().encode("utf-8")).hexdigest() + if hashValue not in 
lines_hash: + input_data_list.write(string) + for i, filename in enumerate(data_filenames.split("\n")): + if filename.strip() == "": + continue + filepath = os.path.join(data_filepath, filename.strip()) + string = "file{:d} = {}\n".format(i + 1, filepath) + hashValue = hashlib.md5(string.rstrip().encode("utf-8")).hexdigest() + if hashValue not in lines_hash: + input_data_list.write(string) + self.update_shr_strdata_nml(config, stream, stream_path)
+ + +
+[docs] + def update_shr_strdata_nml(self, config, stream, stream_path): + """Updates values for the `shr_strdata_nml` namelist group. + + This should be done once per stream, and it shouldn't usually be called + directly, since `create_stream_file` calls this method itself. + """ + assert ( + config["stream"] == stream + ), "config stream is {}, but input stream is {}".format( + config["stream"], stream + ) + # Double-check the years for sanity. + year_start = int(self.get_default("strm_year_start", config)) + year_end = int(self.get_default("strm_year_end", config)) + year_align = int(self.get_default("strm_year_align", config)) + expect( + year_end >= year_start, + "Stream {} starts at year {:d}, but ends at earlier year {:d}.".format( + stream, year_start, year_end + ), + ) + # Add to streams file. + stream_string = "{} {:d} {:d} {:d}".format( + os.path.basename(stream_path), year_align, year_start, year_end + ) + self._streams_namelists["streams"].append(stream_string) + for variable in self._streams_variables: + default = self.get_default(variable, config) + expect( + len(default) == 1, + "Stream {} had multiple settings for variable {}.".format( + stream, variable + ), + ) + self._streams_namelists[variable].append(default[0])
+ + +
+[docs] + def set_abs_file_path(self, file_path): + """If `file_path` is relative, make it absolute using `DIN_LOC_ROOT`. + + If an absolute path is input, it is returned unchanged. + """ + if os.path.isabs(file_path): + return file_path + else: + fullpath = os.path.join(self._din_loc_root, file_path) + return fullpath
+ + +
+[docs] + def add_default(self, name, value=None, ignore_abs_path=None): + """Add a value for the specified variable to the namelist. + + If the specified variable is already defined in the object, the existing + value is preserved. Otherwise, the `value` argument, if provided, will + be used to set the value. If no such value is found, the defaults file + will be consulted. If null values are present in any of the above, the + result will be a merged array of values. + + If no value for the variable is found via any of the above, this method + will raise an exception. + """ + # pylint: disable=protected-access + group = self._definition.get_group(name) + + # Use this to see if we need to raise an error when nothing is found. + have_value = False + # Check for existing value. + current_literals = self._namelist.get_variable_value(group, name) + if current_literals != [""]: + have_value = True + + # Check for input argument. + if value is not None: + have_value = True + # if compression were to occur, this is where it does + literals = self._to_namelist_literals(name, value) + current_literals = merge_literal_lists(literals, current_literals) + + # Check for default value. + default = self.get_default(name, allow_none=True) + if default is not None: + have_value = True + default_literals = self._to_namelist_literals(name, default) + current_literals = merge_literal_lists(default_literals, current_literals) + expect( + have_value, + "No default value found for {} with attributes {}.".format( + name, self._definition.get_attributes() + ), + ) + + # Go through file names and prepend input data root directory for + # absolute pathnames. 
+ var_type, _, var_size = self._definition.split_type_string(name) + if var_type == "character" and ignore_abs_path is None: + var_input_pathname = self._definition.get_input_pathname(name) + if var_input_pathname == "abs": + current_literals = expand_literal_list(current_literals) + for i, literal in enumerate(current_literals): + if literal == "": + continue + file_path = character_literal_to_string(literal) + abs_file_path = self._convert_to_abs_file_path(file_path, name) + current_literals[i] = string_to_character_literal(abs_file_path) + current_literals = compress_literal_list(current_literals) + + # Set the new value. + self._namelist.set_variable_value(group, name, current_literals, var_size)
+ + + def _convert_to_abs_file_path(self, file_path, name): + """Convert the given file_path to an abs file path and return the result + + It's possible that file_path actually contains multiple files delimited by + GRID_SEP. (This is the case when a component has multiple grids, and so has a file + for each grid.) In this case, we split it on GRID_SEP and handle each separated + portion as a separate file, then return a new GRID_SEP-delimited string. + + """ + abs_file_paths = [] + # In most cases, the list created by the following split will only contain a + # single element, but this split is needed to handle grid-related files for + # components with multiple grids (e.g., GLC). + for one_file_path in file_path.split(GRID_SEP): + # NOTE - these are hard-coded here and a better way is to make these extensible + if ( + one_file_path == "UNSET" + or one_file_path == "idmap" + or one_file_path == "idmap_ignore" + or one_file_path == "unset" + ): + abs_file_paths.append(one_file_path) + elif one_file_path in ("null", "create_mesh"): + abs_file_paths.append(one_file_path) + else: + one_abs_file_path = self.set_abs_file_path(one_file_path) + if not os.path.exists(one_abs_file_path): + logger.warning( + "File not found: {} = {}, will attempt to download in check_input_data phase".format( + name, one_abs_file_path + ) + ) + abs_file_paths.append(one_abs_file_path) + + return GRID_SEP.join(abs_file_paths) + +
+[docs] + def create_shr_strdata_nml(self): + """Set defaults for `shr_strdata_nml` variables other than the variable domainfile""" + self.add_default("datamode") + if self.get_value("datamode") != "NULL": + self.add_default("streams", value=self._streams_namelists["streams"]) + for variable in self._streams_variables: + self.add_default(variable, value=self._streams_namelists[variable])
+ + +
+[docs] + def get_group_variables(self, group_name): + return self._namelist.get_group_variables(group_name)
+ + + def _get_input_file_hash(self, data_list_path): + lines_hash = set() + if os.path.isfile(data_list_path): + with open(data_list_path, "r") as input_data_list: + for line in input_data_list: + hashValue = hashlib.md5(line.rstrip().encode("utf-8")).hexdigest() + logger.debug("Found line {} with hash {}".format(line, hashValue)) + lines_hash.add(hashValue) + return lines_hash + + def _write_input_files(self, data_list_path): + """Write input data files to list.""" + # append to input_data_list file + lines_hash = self._get_input_file_hash(data_list_path) + with open(data_list_path, "a") as input_data_list: + for group_name in self._namelist.get_group_names(): + for variable_name in self._namelist.get_variable_names(group_name): + input_pathname = self._definition.get_node_element_info( + variable_name, "input_pathname" + ) + if input_pathname is not None: + # This is where we end up for all variables that are paths + # to input data files. + literals = self._namelist.get_variable_value( + group_name, variable_name + ) + for literal in literals: + file_path = character_literal_to_string(literal) + self._add_file_to_input_data_list( + input_data_list=input_data_list, + variable_name=variable_name, + file_path=file_path, + input_pathname=input_pathname, + lines_hash=lines_hash, + ) + + def _add_file_to_input_data_list( + self, input_data_list, variable_name, file_path, input_pathname, lines_hash + ): + """Add one file to the input data list, if needed + + It's possible that file_path actually contains multiple files delimited by + GRID_SEP. (This is the case when a component has multiple grids, and so has a file + for each grid.) In this case, we split it on GRID_SEP and handle each separated + portion as a separate file. 
+ + Args: + - input_data_list: file handle + - variable_name (string): name of variable to add + - file_path (string): path to file + - input_pathname (string): whether this is an absolute or relative path + - lines_hash (set): set of hashes of lines already in the given input data list + + """ + for one_file_path in file_path.split(GRID_SEP): + # NOTE - these are hard-coded here and a better way is to make these extensible + if ( + one_file_path == "UNSET" + or one_file_path == "idmap" + or one_file_path == "idmap_ignore" + ): + continue + if input_pathname == "abs": + # No further mangling needed for absolute paths. + # At this point, there are overwrites that should be ignored + if not os.path.isabs(one_file_path): + continue + else: + pass + elif input_pathname.startswith("rel:"): + # The part past "rel" is the name of a variable that + # this variable specifies its path relative to. + root_var = input_pathname[4:] + root_dir = self.get_value(root_var) + one_file_path = os.path.join(root_dir, one_file_path) + else: + expect(False, "Bad input_pathname value: {}.".format(input_pathname)) + + # Write to the input data list. + # + # Note that the same variable name is repeated for each file. This currently + # seems okay for check_input_data, but if it becomes a problem, we could + # change this, e.g., appending an index to the end of variable_name. + string = "{} = {}".format(variable_name, one_file_path) + hashValue = hashlib.md5(string.rstrip().encode("utf-8")).hexdigest() + if hashValue not in lines_hash: + logger.debug("Adding line {} with hash {}".format(string, hashValue)) + input_data_list.write(string + "\n") + else: + logger.debug("Line already in file {}".format(string)) + +
+[docs] + def write_output_file( + self, namelist_file, data_list_path=None, groups=None, sorted_groups=True + ): + """Write out the namelists and input data files. + + The `namelist_file` argument is the location to which the component + namelist will be written. The + `data_list_path` argument is the location of the `*.input_data_list` + file, which will have the input data files added to it. + """ + self._definition.validate(self._namelist) + if groups is None: + groups = self._namelist.get_group_names() + + # remove groups that are never in namelist file + if "modelio" in groups: + groups.remove("modelio") + if "seq_maps" in groups: + groups.remove("seq_maps") + + # write namelist file + self._namelist.write(namelist_file, groups=groups, sorted_groups=sorted_groups) + + if data_list_path is not None: + self._write_input_files(data_list_path)
+ + + # For MCT +
+[docs] + def add_nmlcontents( + self, filename, group, append=True, format_="nmlcontents", sorted_groups=True + ): + """Write only contents of nml group""" + self._namelist.write( + filename, + groups=[group], + append=append, + format_=format_, + sorted_groups=sorted_groups, + )
+ + +
+[docs] + def write_seq_maps(self, filename): + """Write mct out seq_maps.rc""" + self._namelist.write(filename, groups=["seq_maps"], format_="rc")
+ + +
+[docs] + def write_modelio_file(self, filename): + """Write mct component modelio files""" + self._namelist.write(filename, groups=["modelio", "pio_inparm"], format_="nml")
+ + + # For NUOPC +
+[docs] + def write_nuopc_modelio_file(self, filename): + """Write nuopc component modelio files""" + self._namelist.write(filename, groups=["pio_inparm"], format_="nml")
+ + +
+[docs] + def write_nuopc_config_file( + self, filename, data_list_path=None, sorted_groups=False + ): + """Write the nuopc config file""" + self._definition.validate(self._namelist) + groups = self._namelist.get_group_names() + # write the config file + self._namelist.write_nuopc(filename, groups=groups, sorted_groups=sorted_groups) + # append to input_data_list file + if data_list_path is not None: + self._write_input_files(data_list_path)
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/provenance.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/provenance.html new file mode 100644 index 00000000000..a175d4781dd --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/provenance.html @@ -0,0 +1,321 @@ + + + + + + CIME.provenance — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.provenance

+#!/usr/bin/env python3
+
+"""
+Library for saving build/run provenance.
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.utils import (
+    SharedArea,
+    convert_to_babylonian_time,
+    get_current_commit,
+    run_cmd,
+)
+
+import sys
+
+logger = logging.getLogger(__name__)
+
+
+_WALLTIME_BASELINE_NAME = "walltimes"
+_WALLTIME_FILE_NAME = "walltimes"
+_GLOBAL_MINUMUM_TIME = 900
+_GLOBAL_WIGGLE = 1000
+_WALLTIME_TOLERANCE = ((600, 2.0), (1800, 1.5), (9999999999, 1.25))
+
+
+
+
+
+
+
+[docs] +def save_test_time(baseline_root, test, time_seconds, commit): + if baseline_root is not None: + try: + with SharedArea(): + the_dir = os.path.join(baseline_root, _WALLTIME_BASELINE_NAME, test) + if not os.path.exists(the_dir): + os.makedirs(the_dir) + + the_path = os.path.join(the_dir, _WALLTIME_FILE_NAME) + with open(the_path, "a") as fd: + fd.write("{} {}\n".format(int(time_seconds), commit)) + + except Exception: + # We NEVER want a failure here to kill the run + logger.warning("Failed to store test time: {}".format(sys.exc_info()[1]))
+ + + +_SUCCESS_BASELINE_NAME = "success-history" +_SUCCESS_FILE_NAME = "last-transitions" + + +def _read_success_data(baseline_root, test): + success_path = os.path.join( + baseline_root, _SUCCESS_BASELINE_NAME, test, _SUCCESS_FILE_NAME + ) + if os.path.exists(success_path): + with open(success_path, "r") as fd: + prev_results_raw = fd.read().strip() + prev_results = prev_results_raw.split() + expect( + len(prev_results) == 2, + "Bad success data: '{}'".format(prev_results_raw), + ) + else: + prev_results = ["None", "None"] + + # Convert "None" to None + for idx, item in enumerate(prev_results): + if item == "None": + prev_results[idx] = None + + return success_path, prev_results + + +def _is_test_working(prev_results, src_root, testing=False): + # If there is no history of success, prev run could not have succeeded and vice versa for failures + if prev_results[0] is None: + return False + elif prev_results[1] is None: + return True + else: + if not testing: + stat, out, err = run_cmd( + "git merge-base --is-ancestor {}".format(" ".join(prev_results)), + from_dir=src_root, + ) + expect( + stat in [0, 1], + "Unexpected status from ancestor check:\n{}\n{}".format(out, err), + ) + else: + # Hack for testing + stat = 0 if prev_results[0] < prev_results[1] else 1 + + # stat == 0 tells us that pass is older than fail, so we must have failed, otherwise we passed + return stat != 0 + + +
+[docs] +def get_test_success(baseline_root, src_root, test, testing=False): + """ + Returns (was prev run success, commit when test last passed, commit when test last transitioned from pass to fail) + + Unknown history is expressed as None + """ + if baseline_root is not None: + try: + prev_results = _read_success_data(baseline_root, test)[1] + prev_success = _is_test_working(prev_results, src_root, testing=testing) + return prev_success, prev_results[0], prev_results[1] + + except Exception: + # We NEVER want a failure here to kill the run + logger.warning("Failed to read test success: {}".format(sys.exc_info()[1])) + + return False, None, None
+ + + +
+[docs] +def save_test_success(baseline_root, src_root, test, succeeded, force_commit_test=None): + """ + Update success data accordingly based on succeeded flag + """ + if baseline_root is not None: + try: + with SharedArea(): + success_path, prev_results = _read_success_data(baseline_root, test) + + the_dir = os.path.dirname(success_path) + if not os.path.exists(the_dir): + os.makedirs(the_dir) + + prev_succeeded = _is_test_working( + prev_results, src_root, testing=(force_commit_test is not None) + ) + + # if no transition occurred then no update is needed + if ( + succeeded + or succeeded != prev_succeeded + or (prev_results[0] is None and succeeded) + or (prev_results[1] is None and not succeeded) + ): + + new_results = list(prev_results) + my_commit = ( + force_commit_test + if force_commit_test + else get_current_commit(repo=src_root) + ) + if succeeded: + new_results[0] = my_commit # we passed + else: + new_results[1] = my_commit # we transitioned to a failing state + + str_results = [ + "None" if item is None else item for item in new_results + ] + with open(success_path, "w") as fd: + fd.write("{}\n".format(" ".join(str_results))) + + except Exception: + # We NEVER want a failure here to kill the run + logger.warning("Failed to store test success: {}".format(sys.exc_info()[1]))
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/scripts/create_clone.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/scripts/create_clone.html new file mode 100644 index 00000000000..883d0a8df63 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/scripts/create_clone.html @@ -0,0 +1,294 @@ + + + + + + CIME.scripts.create_clone — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.scripts.create_clone

+#!/usr/bin/env python3
+
+from CIME.Tools.standard_script_setup import *
+from CIME.utils import expect
+from CIME.case import Case
+from argparse import RawTextHelpFormatter
+import re
+
+logger = logging.getLogger(__name__)
+
+###############################################################################
+
+[docs] +def parse_command_line(args): + ############################################################################### + parser = argparse.ArgumentParser(formatter_class=RawTextHelpFormatter) + + CIME.utils.setup_standard_logging_options(parser) + + parser.add_argument( + "--case", + "-case", + required=True, + help="(required) Specify a new case name. If not a full pathname, " + "\nthe new case will be created under the current working directory.", + ) + + parser.add_argument( + "--clone", + "-clone", + required=True, + help="(required) Specify a case to be cloned. If not a full pathname, " + "\nthe case to be cloned is assumed to be under the current working directory.", + ) + + parser.add_argument( + "--ensemble", + default=1, + help="clone an ensemble of cases, the case name argument must end in an integer.\n" + "for example: ./create_clone --clone case.template --case case.001 --ensemble 4 \n" + "will create case.001, case.002, case.003, case.004 from existing case.template", + ) + + # This option supports multiple values, hence the plural ("user-mods-dirs"). However, + # we support the singular ("user-mods-dir") for backwards compatibility (and because + # the singular may be more intuitive for someone who only wants to use a single + # directory). + parser.add_argument( + "--user-mods-dirs", + "--user-mods-dir", + nargs="*", + help="Full pathname to a directory containing any combination of user_nl_* files " + "\nand a shell_commands script (typically containing xmlchange commands). " + "\nThe directory can also contain a SourceMods/ directory with the same structure " + "\nas would be found in a case directory." + "\nIt can also contain a file named 'include_user_mods' which gives the path to" + "\none or more other directories that should be included." + "\nMultiple directories can be given to the --user-mods-dirs argument," + "\nin which case changes from all of them are applied." 
+ "\n(If there are conflicts, later directories take precedence.)" + "\n(Care is needed if multiple directories include the same directory via 'include_user_mods':" + "\nin this case, the included directory will be applied multiple times.)" + "\nIf this argument is used in conjunction " + "\nwith the --keepexe flag, then no changes will be permitted to the env_build.xml " + "\nin the newly created case directory. ", + ) + + parser.add_argument( + "--keepexe", + "-keepexe", + action="store_true", + help="Sets EXEROOT to point to original build. It is HIGHLY recommended " + "\nthat the original case be built BEFORE cloning it if the --keepexe flag is specified. " + "\nThis flag will make the SourceMods/ directory in the newly created case directory a " + "\nsymbolic link to the SourceMods/ directory in the original case directory. ", + ) + + parser.add_argument( + "--mach-dir", + "-mach_dir", + help="Specify the location of the Machines directory, other than the default. " + "\nThe default is CIMEROOT/machines.", + ) + + parser.add_argument( + "--project", + "-project", + help="Specify a project id for the case (optional)." + "\nUsed for accounting and directory permissions when on a batch system." + "\nThe default is user or machine specified by PROJECT." + "\nAccounting (only) may be overridden by user or machine specified CHARGE_ACCOUNT.", + ) + + parser.add_argument( + "--cime-output-root", + help="Specify the root output directory. The default is the setting in the original" + "\ncase directory. 
NOTE: create_clone will fail if this directory is not writable.", + ) + + args = CIME.utils.parse_args_and_handle_standard_logging_options(args, parser) + + if args.case is None: + expect(False, "Must specify -case as an input argument") + + if args.clone is None: + expect(False, "Must specify -clone as an input argument") + + startval = "1" + if int(args.ensemble) > 1: + m = re.search(r"(\d+)$", args.case) + expect(m, " case name must end in an integer to use this feature") + startval = m.group(1) + + return ( + args.case, + args.clone, + args.keepexe, + args.mach_dir, + args.project, + args.cime_output_root, + args.user_mods_dirs, + int(args.ensemble), + startval, + )
+ + + +############################################################################## +def _main_func(): + ############################################################################### + + ( + case, + clone, + keepexe, + mach_dir, + project, + cime_output_root, + user_mods_dirs, + ensemble, + startval, + ) = parse_command_line(sys.argv) + + cloneroot = os.path.abspath(clone) + expect(os.path.isdir(cloneroot), "Missing cloneroot directory %s " % cloneroot) + + if user_mods_dirs is not None: + user_mods_dirs = [ + os.path.abspath(one_user_mods_dir) + if os.path.isdir(one_user_mods_dir) + else one_user_mods_dir + for one_user_mods_dir in user_mods_dirs + ] + nint = len(startval) + + for i in range(int(startval), int(startval) + ensemble): + if ensemble > 1: + case = case[:-nint] + "{{0:0{0:d}d}}".format(nint).format(i) + with Case(cloneroot, read_only=False) as clone: + clone.create_clone( + case, + keepexe=keepexe, + mach_dir=mach_dir, + project=project, + cime_output_root=cime_output_root, + user_mods_dirs=user_mods_dirs, + ) + + +############################################################################### + +if __name__ == "__main__": + _main_func() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/scripts/create_newcase.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/scripts/create_newcase.html new file mode 100644 index 00000000000..a249b7d5380 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/scripts/create_newcase.html @@ -0,0 +1,603 @@ + + + + + + CIME.scripts.create_newcase — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.scripts.create_newcase

+#!/usr/bin/env python3
+
+# pylint: disable=W0621, W0613
+
+"""
+Script to create a new CIME Case Control System (CCS) experimental case.
+"""
+
+from CIME.Tools.standard_script_setup import *
+from CIME.utils import (
+    expect,
+    get_cime_config,
+    get_cime_default_driver,
+    get_src_root,
+)
+from CIME.config import Config
+from CIME.case import Case
+from argparse import RawTextHelpFormatter
+
+logger = logging.getLogger(__name__)
+
+###############################################################################
+
+[docs] +def parse_command_line(args, cimeroot, description): + ############################################################################### + parser = argparse.ArgumentParser( + description=description, formatter_class=RawTextHelpFormatter + ) + + CIME.utils.setup_standard_logging_options(parser) + + customize_path = os.path.join(CIME.utils.get_src_root(), "cime_config", "customize") + + config = Config.load(customize_path) + + try: + cime_config = get_cime_config() + except Exception: + cime_config = None + + parser.add_argument( + "--case", + "-case", + required=True, + metavar="CASENAME", + help="(required) Specify the case name. " + "\nIf this is simply a name (not a path), the case directory is created in the current working directory." + "\nThis can also be a relative or absolute path specifying where the case should be created;" + "\nwith this usage, the name of the case will be the last component of the path.", + ) + + parser.add_argument( + "--compset", + "-compset", + required=True, + help="(required) Specify a compset. " + "\nTo see list of current compsets, use the utility ./query_config --compsets in this directory.\n", + ) + + parser.add_argument( + "--res", + "-res", + required=True, + metavar="GRID", + help="(required) Specify a model grid resolution. " + "\nTo see list of current model resolutions, use the utility " + "\n./query_config --grids in this directory.", + ) + + parser.add_argument( + "--machine", + "-mach", + help="Specify a machine. " + "The default value is the match to NODENAME_REGEX in config_machines.xml. To see " + "\nthe list of current machines, invoke ./query_config --machines.", + ) + + parser.add_argument( + "--compiler", + "-compiler", + help="Specify a compiler. " + "\nTo see list of supported compilers for each machine, use the utility " + "\n./query_config --machines in this directory. 
" + "\nThe default value will be the first one listed.", + ) + + parser.add_argument( + "--multi-driver", + action="store_true", + help="Specify that --ninst should modify the number of driver/coupler instances. " + "\nThe default is to have one driver/coupler supporting multiple component instances.", + ) + + parser.add_argument( + "--ninst", + default=1, + type=int, + help="Specify number of model ensemble instances. " + "\nThe default is multiple components and one driver/coupler. " + "\nUse --multi-driver to run multiple driver/couplers in the ensemble.", + ) + + parser.add_argument( + "--mpilib", + "-mpilib", + help="Specify the MPI library. " + "To see list of supported mpilibs for each machine, invoke ./query_config --machines." + "\nThe default is the first listing in MPILIBS in config_machines.xml.\n", + ) + + parser.add_argument( + "--project", + "-project", + help="Specify a project id for the case (optional)." + "\nUsed for accounting and directory permissions when on a batch system." + "\nThe default is user or machine specified by PROJECT." + "\nAccounting (only) may be overridden by user or machine specified CHARGE_ACCOUNT.", + ) + + parser.add_argument( + "--pecount", + "-pecount", + default="M", + help="Specify a target size description for the number of cores. " + "\nThis is used to query the appropriate config_pes.xml file and find the " + "\noptimal PE-layout for your case - if it exists there. " + "\nAllowed options are ('S','M','L','X1','X2','[0-9]x[0-9]','[0-9]').\n", + ) + + # This option supports multiple values, hence the plural ("user-mods-dirs"). However, + # we support the singular ("user-mods-dir") for backwards compatibility (and because + # the singular may be more intuitive for someone who only wants to use a single + # directory). 
+ parser.add_argument( + "--user-mods-dirs", + "--user-mods-dir", + nargs="*", + help="Full pathname to a directory containing any combination of user_nl_* files " + "\nand a shell_commands script (typically containing xmlchange commands). " + "\nThe directory can also contain a SourceMods/ directory with the same structure " + "\nas would be found in a case directory." + "\nIt can also contain a file named 'include_user_mods' which gives the path to" + "\none or more other directories that should be included." + "\nMultiple directories can be given to the --user-mods-dirs argument," + "\nin which case changes from all of them are applied." + "\n(If there are conflicts, later directories take precedence.)" + "\n(Care is needed if multiple directories include the same directory via 'include_user_mods':" + "\nin this case, the included directory will be applied multiple times.)", + ) + + parser.add_argument( + "--pesfile", + help="Full pathname of an optional pes specification file. " + "\nThe file can follow either the config_pes.xml or the env_mach_pes.xml format.", + ) + + parser.add_argument( + "--gridfile", + help="Full pathname of config grid file to use. " + "\nThis should be a copy of config/config_grids.xml with the new user grid changes added to it. \n", + ) + + if cime_config and cime_config.has_option("main", "workflow"): + workflow_default = cime_config.get("main", "workflow") + else: + workflow_default = "default" + + parser.add_argument( + "--workflow", + default=workflow_default, + help="A workflow from config_workflow.xml to apply to this case. ", + ) + + srcroot_default = get_src_root() + + parser.add_argument( + "--srcroot", + default=srcroot_default, + help="Alternative pathname for source root directory. 
" + f"The default is {srcroot_default}", + ) + + parser.add_argument( + "--output-root", + help="Alternative pathname for the directory where case output is written.", + ) + + # The following is a deprecated option + parser.add_argument( + "--script-root", dest="script_root", default=None, help=argparse.SUPPRESS + ) + + if config.allow_unsupported: + parser.add_argument( + "--run-unsupported", + action="store_true", + help="Force the creation of a case that is not tested or supported by CESM developers.", + ) + # hidden argument indicating called from create_test + # Indicates that create_newcase was called from create_test - do not use otherwise. + parser.add_argument("--test", "-test", action="store_true", help=argparse.SUPPRESS) + + parser.add_argument( + "--walltime", + default=os.getenv("CIME_GLOBAL_WALLTIME"), + help="Set the wallclock limit for this case (the usual format is HH:MM:SS). " + "\nYou may use env var CIME_GLOBAL_WALLTIME to set this. " + "\nIf CIME_GLOBAL_WALLTIME is not defined in the environment, then the walltime" + "\nwill be the maximum allowed time defined for the queue in config_batch.xml.", + ) + + parser.add_argument( + "-q", + "--queue", + default=None, + help="Force batch system to use the specified queue. ", + ) + + parser.add_argument( + "--handle-preexisting-dirs", + dest="answer", + choices=("a", "r", "u"), + default=None, + help="Do not query how to handle pre-existing bld/exe dirs. " + "\nValid options are (a)bort (r)eplace or (u)se existing. " + "\nThis can be useful if you need to run create_newcase non-interactively.", + ) + + parser.add_argument( + "-i", + "--input-dir", + help="Use a non-default location for input files. 
This will change the xml value of DIN_LOC_ROOT.", + ) + + drv_choices = config.driver_choices + drv_help = ( + "Override the top level driver type and use this one " + + "(changes xml variable COMP_INTERFACE) [this is an advanced option]" + ) + + parser.add_argument( + "--driver", + # use get_cime_default_driver rather than config.driver_default as it considers + # environment, user config then config.driver_default + default=get_cime_default_driver(), + choices=drv_choices, + help=drv_help, + ) + + parser.add_argument( + "-n", + "--non-local", + action="store_true", + help="Use when you've requested a machine that you aren't on. " + "Will reduce errors for missing directories etc.", + ) + + parser.add_argument( + "--extra-machines-dir", + help="Optional path to a directory containing one or more of:" + "\nconfig_machines.xml, config_batch.xml." + "\nIf provided, the contents of these files will be appended to" + "\nthe standard machine files (and any files in ~/.cime).", + ) + + parser.add_argument("--case-group", help="Add this case to a case group") + + parser.add_argument( + "--ngpus-per-node", + default=0, + type=int, + help="Specify number of GPUs used for simulation. 
", + ) + + parser.add_argument( + "--gpu-type", + default=None, + help="Specify type of GPU hardware - currently supported are v100, a100, mi250", + ) + + parser.add_argument( + "--gpu-offload", + default=None, + help="Specify gpu offload method - currently supported are openacc, openmp, combined", + ) + + args = CIME.utils.parse_args_and_handle_standard_logging_options(args, parser) + + if args.srcroot is not None: + expect( + os.path.isdir(args.srcroot), + "Input non-default directory srcroot {} does not exist ".format( + args.srcroot + ), + ) + args.srcroot = os.path.abspath(args.srcroot) + + if args.gridfile is not None: + expect( + os.path.isfile(args.gridfile), + "Grid specification file {} does not exist ".format(args.gridfile), + ) + + if args.pesfile is not None: + expect( + os.path.isfile(args.pesfile), + "Pes specification file {} cannot be found ".format(args.pesfile), + ) + + run_unsupported = False + if config.allow_unsupported: + run_unsupported = args.run_unsupported + + expect( + CIME.utils.check_name(args.case, fullpath=True), + "Illegal case name argument provided", + ) + + if args.input_dir is not None: + args.input_dir = os.path.abspath(args.input_dir) + elif cime_config and cime_config.has_option("main", "input_dir"): + args.input_dir = os.path.abspath(cime_config.get("main", "input_dir")) + + if config.create_test_flag_mode == "cesm" and args.driver == "mct": + logger.warning( + """======================================================================== +WARNING: The MCT-based driver and data models will be removed from CESM +WARNING: on September 30, 2022. +WARNING: Please contact members of the CESM Software Engineering Group +WARNING: if you need support migrating to the ESMF/NUOPC infrastructure. 
+========================================================================""" + ) + + return ( + args.case, + args.compset, + args.res, + args.machine, + args.compiler, + args.mpilib, + args.project, + args.pecount, + args.user_mods_dirs, + args.pesfile, + args.gridfile, + args.srcroot, + args.test, + args.multi_driver, + args.ninst, + args.walltime, + args.queue, + args.output_root, + args.script_root, + run_unsupported, + args.answer, + args.input_dir, + args.driver, + args.workflow, + args.non_local, + args.extra_machines_dir, + args.case_group, + args.ngpus_per_node, + args.gpu_type, + args.gpu_offload, + )
+ + + +############################################################################### +def _main_func(description=None): + ############################################################################### + cimeroot = os.path.abspath(CIME.utils.get_cime_root()) + + ( + casename, + compset, + grid, + machine, + compiler, + mpilib, + project, + pecount, + user_mods_dirs, + pesfile, + gridfile, + srcroot, + test, + multi_driver, + ninst, + walltime, + queue, + output_root, + script_root, + run_unsupported, + answer, + input_dir, + driver, + workflow, + non_local, + extra_machines_dir, + case_group, + ngpus_per_node, + gpu_type, + gpu_offload, + ) = parse_command_line(sys.argv, cimeroot, description) + + if script_root is None: + caseroot = os.path.abspath(casename) + else: + caseroot = os.path.abspath(script_root) + + if user_mods_dirs is not None: + user_mods_dirs = [ + os.path.abspath(one_user_mods_dir) + if os.path.isdir(one_user_mods_dir) + else one_user_mods_dir + for one_user_mods_dir in user_mods_dirs + ] + + # create_test creates the caseroot before calling create_newcase + # otherwise throw an error if this directory exists + expect( + not (os.path.exists(caseroot) and not test), + "Case directory {} already exists".format(caseroot), + ) + + # create_newcase ... --test ... throws a CIMEError along with + # a very stern warning message to the user + # if it detects that it was invoked outside of create_test + if test: + expect( + ( + "FROM_CREATE_TEST" in os.environ + and os.environ["FROM_CREATE_TEST"] == "True" + ), + "The --test argument is intended to only be called from inside create_test. 
Invoking this option from the command line is not appropriate usage.", + ) + del os.environ["FROM_CREATE_TEST"] + + with Case(caseroot, read_only=False, non_local=non_local) as case: + # Configure the Case + case.create( + casename, + srcroot, + compset, + grid, + user_mods_dirs=user_mods_dirs, + machine_name=machine, + project=project, + pecount=pecount, + compiler=compiler, + mpilib=mpilib, + pesfile=pesfile, + gridfile=gridfile, + multi_driver=multi_driver, + ninst=ninst, + test=test, + walltime=walltime, + queue=queue, + output_root=output_root, + run_unsupported=run_unsupported, + answer=answer, + input_dir=input_dir, + driver=driver, + workflowid=workflow, + non_local=non_local, + extra_machines_dir=extra_machines_dir, + case_group=case_group, + ngpus_per_node=ngpus_per_node, + gpu_type=gpu_type, + gpu_offload=gpu_offload, + ) + + # Called after create since casedir does not exist yet + case.record_cmd(init=True) + + +############################################################################### + +if __name__ == "__main__": + _main_func(__doc__) +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/scripts/create_test.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/scripts/create_test.html new file mode 100644 index 00000000000..bc8ace6264b --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/scripts/create_test.html @@ -0,0 +1,1272 @@ + + + + + + CIME.scripts.create_test — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.scripts.create_test

+#!/usr/bin/env python3
+
+"""
+Script to create, build and run CIME tests. This script can:
+
+1) Run a single test, or more than one test
+   ./create_test TESTNAME
+   ./create_test TESTNAME1 TESTNAME2 ...
+2) Run a test suite from a text file with one test per line
+   ./create_test -f TESTFILE
+3) Run an E3SM test suite:
+  Below, SUITE is the name of a suite defined in $CIMEROOT/scripts/lib/get_tests.py
+  - Run a single suite
+   ./create_test SUITE
+  - Run two suites
+   ./create_test SUITE1 SUITE2
+  - Run all tests in a suite except for one
+   ./create_test SUITE ^TESTNAME
+  - Run all tests in a suite except for tests that are in another suite
+   ./create_test SUITE1 ^SUITE2
+  - Run all tests in a suite with baseline comparisons against master baselines
+   ./create_test SUITE1 -c -b master
+4) Run one or more CESM test suites:
+   ./create_test --xml-category XML_CATEGORY [--xml-machine XML_MACHINE] [--xml-compiler XML_COMPILER] [ --xml-testlist XML_TESTLIST]
+
+If this tool is missing any feature that you need, please open an issue at
+https://github.com/ESMCI/cime
+"""
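+# Example invocation (hypothetical test name, shown for illustration only):
+# run a single test, generate baselines under the name "master", and wait
+# for batch jobs to finish:
+#   ./create_test SMS.f19_g16.X -g -b master --wait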
+from CIME.Tools.standard_script_setup import *
+from CIME import get_tests
+from CIME.test_scheduler import TestScheduler, RUN_PHASE
+from CIME import utils
+from CIME.utils import (
+    expect,
+    convert_to_seconds,
+    compute_total_time,
+    convert_to_babylonian_time,
+    run_cmd_no_fail,
+    get_cime_config,
+)
+from CIME.config import Config
+from CIME.XML.machines import Machines
+from CIME.case import Case
+from CIME.test_utils import get_tests_from_xml
+from argparse import RawTextHelpFormatter
+
+import argparse, math, glob
+
+logger = logging.getLogger(__name__)
+
+###############################################################################
+
+[docs] +def parse_command_line(args, description): + ############################################################################### + + parser = argparse.ArgumentParser( + description=description, formatter_class=RawTextHelpFormatter + ) + + model_config = Config.instance() + + CIME.utils.setup_standard_logging_options(parser) + + config = get_cime_config() + + parser.add_argument( + "--no-run", action="store_true", help="Do not run generated tests" + ) + + parser.add_argument( + "--no-build", + action="store_true", + help="Do not build generated tests, implies --no-run", + ) + + parser.add_argument( + "--no-setup", + action="store_true", + help="Do not setup generated tests, implies --no-build and --no-run", + ) + + parser.add_argument( + "-u", + "--use-existing", + action="store_true", + help="Use pre-existing case directories they will pick up at the " + "\nlatest PEND state or re-run the first failed state. Requires test-id", + ) + + default = get_default_setting(config, "SAVE_TIMING", False, check_main=False) + + parser.add_argument( + "--save-timing", + action="store_true", + default=default, + help="Enable archiving of performance data.", + ) + + parser.add_argument( + "--no-batch", + action="store_true", + help="Do not submit jobs to batch system, run locally." + "\nIf false, this will default to machine setting.", + ) + + parser.add_argument( + "--single-exe", + action="store_true", + default=False, + help="Use a single build for all cases. This can " + "\ndrastically improve test throughput but is currently use-at-your-own risk." + "\nIt's up to the user to ensure that all cases are build-compatible." + "\nE3SM tests belonging to a suite with share enabled will always share exes.", + ) + + default = get_default_setting(config, "SINGLE_SUBMIT", False, check_main=False) + + parser.add_argument( + "--single-submit", + action="store_true", + default=default, + help="Use a single interactive allocation to run all the tests. 
This can " + "\ndrastically reduce queue waiting but only makes sense on batch machines.", + ) + + default = get_default_setting(config, "TEST_ROOT", None, check_main=False) + + parser.add_argument( + "-r", + "--test-root", + default=default, + help="Where test cases will be created. The default is output root" + "\nas defined in the config_machines file", + ) + + default = get_default_setting(config, "OUTPUT_ROOT", None, check_main=False) + + parser.add_argument( + "--output-root", default=default, help="Where the case output is written." + ) + + default = get_default_setting(config, "BASELINE_ROOT", None, check_main=False) + + parser.add_argument( + "--baseline-root", + default=default, + help="Specifies a root directory for baseline datasets that will " + "\nbe used for Bit-for-bit generate and/or compare testing.", + ) + + default = get_default_setting(config, "CLEAN", False, check_main=False) + + parser.add_argument( + "--clean", + action="store_true", + default=default, + help="Specifies if tests should be cleaned after run. If set, all object" + "\nexecutables and data files will be removed after the tests are run.", + ) + + default = get_default_setting(config, "MACHINE", None, check_main=True) + + parser.add_argument( + "-m", + "--machine", + default=default, + help="The machine for creating and building tests. This machine must be defined" + "\nin the config_machines.xml file for the given model. The default is to " + "\nto match the name of the machine in the test name or the name of the " + "\nmachine this script is run on to the NODENAME_REGEX field in " + "\nconfig_machines.xml. WARNING: This option is highly unsafe and should " + "\nonly be used if you are an expert.", + ) + + default = get_default_setting(config, "MPILIB", None, check_main=True) + + parser.add_argument( + "--mpilib", + default=default, + help="Specify the mpilib. To see list of supported MPI libraries for each machine, " + "\ninvoke ./query_config. 
The default is the first listing .", + ) + + if model_config.create_test_flag_mode == "cesm": + parser.add_argument( + "-c", + "--compare", + help="While testing, compare baselines against the given compare directory. ", + ) + + parser.add_argument( + "-g", + "--generate", + help="While testing, generate baselines in the given generate directory. " + "\nNOTE: this can also be done after the fact with bless_test_results", + ) + + parser.add_argument( + "--xml-machine", + help="Use this machine key in the lookup in testlist.xml. " + "\nThe default is all if any --xml- argument is used.", + ) + + parser.add_argument( + "--xml-compiler", + help="Use this compiler key in the lookup in testlist.xml. " + "\nThe default is all if any --xml- argument is used.", + ) + + parser.add_argument( + "--xml-category", + help="Use this category key in the lookup in testlist.xml. " + "\nThe default is all if any --xml- argument is used.", + ) + + parser.add_argument( + "--xml-testlist", + help="Use this testlist to lookup tests.The default is specified in config_files.xml", + ) + + parser.add_argument( + "--driver", + choices=model_config.driver_choices, + help="Override driver specified in tests and use this one.", + ) + + parser.add_argument( + "testargs", + nargs="*", + help="Tests to run. Testname form is TEST.GRID.COMPSET[.MACHINE_COMPILER]", + ) + + else: + + parser.add_argument( + "testargs", + nargs="+", + help="Tests or test suites to run." + " Testname form is TEST.GRID.COMPSET[.MACHINE_COMPILER]", + ) + + parser.add_argument( + "-b", + "--baseline-name", + help="If comparing or generating baselines, use this directory under baseline root. " + "\nDefault will be current branch name.", + ) + + parser.add_argument( + "-c", + "--compare", + action="store_true", + help="While testing, compare baselines", + ) + + parser.add_argument( + "-g", + "--generate", + action="store_true", + help="While testing, generate baselines. 
" + "\nNOTE: this can also be done after the fact with bless_test_results", + ) + + default = get_default_setting(config, "COMPILER", None, check_main=True) + + parser.add_argument( + "--compiler", + default=default, + help="Compiler for building cime. Default will be the name in the " + "\nTestname or the default defined for the machine.", + ) + + parser.add_argument( + "-n", + "--namelists-only", + action="store_true", + help="Only perform namelist actions for tests", + ) + + parser.add_argument( + "-p", + "--project", + help="Specify a project id for the case (optional)." + "\nUsed for accounting and directory permissions when on a batch system." + "\nThe default is user or machine specified by PROJECT." + "\nAccounting (only) may be overridden by user or machine specified CHARGE_ACCOUNT.", + ) + + parser.add_argument( + "-t", + "--test-id", + help="Specify an 'id' for the test. This is simply a string that is appended " + "\nto the end of a test name. If no test-id is specified, a time stamp plus a " + "\nrandom string will be used (ensuring a high probability of uniqueness). " + "\nIf a test-id is specified, it is the user's responsibility to ensure that " + "\neach run of create_test uses a unique test-id. WARNING: problems will occur " + "\nif you use the same test-id twice on the same file system, even if the test " + "\nlists are completely different.", + ) + + default = get_default_setting(config, "PARALLEL_JOBS", None, check_main=False) + + parser.add_argument( + "-j", + "--parallel-jobs", + type=int, + default=default, + help="Number of tasks create_test should perform simultaneously. The default " + "\n is min(num_cores, num_tests).", + ) + + default = get_default_setting(config, "PROC_POOL", None, check_main=False) + + parser.add_argument( + "--proc-pool", + type=int, + default=default, + help="The size of the processor pool that create_test can use. 
The default is " + "\nMAX_MPITASKS_PER_NODE + 25 percent.", + ) + + default = os.getenv("CIME_GLOBAL_WALLTIME") + if default is None: + default = get_default_setting(config, "WALLTIME", None, check_main=True) + + parser.add_argument( + "--walltime", + default=default, + help="Set the wallclock limit for all tests in the suite. " + "\nUse the variable CIME_GLOBAL_WALLTIME to set this for all tests.", + ) + + default = get_default_setting(config, "JOB_QUEUE", None, check_main=True) + + parser.add_argument( + "-q", + "--queue", + default=default, + help="Force batch system to use a certain queue", + ) + + parser.add_argument( + "-f", "--testfile", help="A file containing an ascii list of tests to run" + ) + + default = get_default_setting( + config, "ALLOW_BASELINE_OVERWRITE", False, check_main=False + ) + + parser.add_argument( + "-o", + "--allow-baseline-overwrite", + action="store_true", + default=default, + help="If the --generate option is given, then an attempt to overwrite " + "\nan existing baseline directory will raise an error. WARNING: Specifying this " + "\noption will allow existing baseline directories to be silently overwritten.", + ) + + default = get_default_setting(config, "WAIT", False, check_main=False) + + parser.add_argument( + "--wait", + action="store_true", + default=default, + help="On batch systems, wait for submitted jobs to complete", + ) + + default = get_default_setting(config, "ALLOW_PNL", False, check_main=False) + + parser.add_argument( + "--allow-pnl", + action="store_true", + default=default, + help="Do not pass skip-pnl to case.submit", + ) + + parser.add_argument( + "--check-throughput", + action="store_true", + help="Fail if throughput check fails. Requires --wait on batch systems", + ) + + parser.add_argument( + "--check-memory", + action="store_true", + help="Fail if memory check fails. 
Requires --wait on batch systems", + ) + + parser.add_argument( + "--ignore-namelists", + action="store_true", + help="Do not fail if there namelist diffs", + ) + + parser.add_argument( + "--ignore-memleak", action="store_true", help="Do not fail if there's a memleak" + ) + + default = get_default_setting(config, "FORCE_PROCS", None, check_main=False) + + parser.add_argument( + "--force-procs", + type=int, + default=default, + help="For all tests to run with this number of processors", + ) + + default = get_default_setting(config, "FORCE_THREADS", None, check_main=False) + + parser.add_argument( + "--force-threads", + type=int, + default=default, + help="For all tests to run with this number of threads", + ) + + default = get_default_setting(config, "INPUT_DIR", None, check_main=True) + + parser.add_argument( + "-i", + "--input-dir", + default=default, + help="Use a non-default location for input files", + ) + + default = get_default_setting(config, "PESFILE", None, check_main=True) + + parser.add_argument( + "--pesfile", + default=default, + help="Full pathname of an optional pes specification file. The file" + "\ncan follow either the config_pes.xml or the env_mach_pes.xml format.", + ) + + default = get_default_setting(config, "RETRY", 0, check_main=False) + + parser.add_argument( + "--retry", + type=int, + default=default, + help="Automatically retry failed tests. >0 implies --wait", + ) + + parser.add_argument( + "-N", + "--non-local", + action="store_true", + help="Use when you've requested a machine that you aren't on. " + "Will reduce errors for missing directories etc.", + ) + + if config and config.has_option("main", "workflow"): + workflow_default = config.get("main", "workflow") + else: + workflow_default = "default" + + parser.add_argument( + "--workflow", + default=workflow_default, + help="A workflow from config_workflow.xml to apply to this case. ", + ) + + parser.add_argument( + "--chksum", action="store_true", help="Verifies input data checksums." 
+ ) + + srcroot_default = utils.get_src_root() + + parser.add_argument( + "--srcroot", + default=srcroot_default, + help="Alternative pathname for source root directory. " + f"The default is {srcroot_default}", + ) + + parser.add_argument( + "--force-rebuild", + action="store_true", + help="When used with 'use-existing' and 'test-id', the" + "tests will have their 'BUILD_SHAREDLIB' phase reset to 'PEND'.", + ) + + CIME.utils.add_mail_type_args(parser) + + args = CIME.utils.parse_args_and_handle_standard_logging_options(args, parser) + + CIME.utils.resolve_mail_type_args(args) + + if args.force_rebuild: + expect( + args.use_existing and args.test_id, + "Cannot force a rebuild without 'use-existing' and 'test-id'", + ) + + # generate and compare flags may not point to the same directory + if model_config.create_test_flag_mode == "cesm": + if args.generate is not None: + expect( + not (args.generate == args.compare), + "Cannot generate and compare baselines at the same time", + ) + + if args.xml_testlist is not None: + expect( + not ( + args.xml_machine is None + and args.xml_compiler is None + and args.xml_category is None + ), + "If an xml-testlist is present at least one of --xml-machine, " + "--xml-compiler, --xml-category must also be present", + ) + + else: + expect( + not ( + args.baseline_name is not None + and (not args.compare and not args.generate) + ), + "Provided baseline name but did not specify compare or generate", + ) + expect( + not (args.compare and args.generate), + "Tried to compare and generate at same time", + ) + + expect( + not (args.namelists_only and not (args.generate or args.compare)), + "Must provide either --compare or --generate with --namelists-only", + ) + + if args.retry > 0: + args.wait = True + + if args.parallel_jobs is not None: + expect( + args.parallel_jobs > 0, + "Invalid value for parallel_jobs: %d" % args.parallel_jobs, + ) + + if args.use_existing: + expect(args.test_id is not None, "Must provide test-id of pre-existing 
cases") + + if args.no_setup: + args.no_build = True + + if args.no_build: + args.no_run = True + + # Namelist-only forces some other options: + if args.namelists_only: + expect(not args.no_setup, "Cannot compare namelists without setup") + args.no_build = True + args.no_run = True + args.no_batch = True + + expect( + not (args.non_local and not args.no_build), "Cannot build on non-local machine" + ) + + if args.single_submit: + expect( + not args.no_run, + "Doesn't make sense to request single-submit if no-run is on", + ) + args.no_build = True + args.no_run = True + args.no_batch = True + + if args.test_id is None: + args.test_id = "%s_%s" % (CIME.utils.get_timestamp(), CIME.utils.id_generator()) + else: + expect( + CIME.utils.check_name(args.test_id, additional_chars="."), + "invalid test-id argument provided", + ) + + if args.testfile is not None: + with open(args.testfile, "r") as fd: + args.testargs.extend( + [ + line.strip() + for line in fd.read().splitlines() + if line.strip() and not line.startswith("#") + ] + ) + + # Propagate `srcroot` to `GenericXML` to resolve $SRCROOT + # See call to `Machines` below + utils.GLOBAL["SRCROOT"] = args.srcroot + + # Compute list of fully-resolved test_names + test_extra_data = {} + if model_config.check_machine_name_from_test_name: + machine_name = args.xml_machine if args.machine is None else args.machine + + # If it's still unclear what machine to use, look at test names + if machine_name is None: + for test in args.testargs: + testsplit = CIME.utils.parse_test_name(test) + if testsplit[4] is not None: + if machine_name is None: + machine_name = testsplit[4] + else: + expect( + machine_name == testsplit[4], + "ambiguity in machine, please use the --machine option", + ) + + mach_obj = Machines(machine=machine_name) + if args.testargs: + args.compiler = ( + mach_obj.get_default_compiler() + if args.compiler is None + else args.compiler + ) + test_names = get_tests.get_full_test_names( + args.testargs, 
mach_obj.get_machine_name(), args.compiler + ) + else: + expect( + not ( + args.xml_machine is None + and args.xml_compiler is None + and args.xml_category is None + and args.xml_testlist is None + ), + "At least one of --xml-machine, --xml-testlist, " + "--xml-compiler, --xml-category or a valid test name must be provided.", + ) + + test_data = get_tests_from_xml( + xml_machine=args.xml_machine, + xml_category=args.xml_category, + xml_compiler=args.xml_compiler, + xml_testlist=args.xml_testlist, + machine=machine_name, + compiler=args.compiler, + driver=args.driver, + ) + test_names = [item["name"] for item in test_data] + for test_datum in test_data: + test_extra_data[test_datum["name"]] = test_datum + + logger.info("Testnames: %s" % test_names) + else: + inf_machine, inf_compilers = get_tests.infer_arch_from_tests(args.testargs) + if args.machine is None: + args.machine = inf_machine + + mach_obj = Machines(machine=args.machine) + if args.compiler is None: + if len(inf_compilers) == 0: + args.compiler = mach_obj.get_default_compiler() + elif len(inf_compilers) == 1: + args.compiler = inf_compilers[0] + else: + # User has multiple compiler specifications in their testargs + args.compiler = inf_compilers[0] + expect( + not args.compare and not args.generate, + "It is not safe to do baseline operations with heterogenous compiler set: {}".format( + inf_compilers + ), + ) + + test_names = get_tests.get_full_test_names( + args.testargs, mach_obj.get_machine_name(), args.compiler + ) + + expect( + mach_obj.is_valid_compiler(args.compiler), + "Compiler %s not valid for machine %s" + % (args.compiler, mach_obj.get_machine_name()), + ) + + if not args.wait and mach_obj.has_batch_system() and not args.no_batch: + expect( + not args.check_throughput, + "Makes no sense to use --check-throughput without --wait", + ) + expect( + not args.check_memory, "Makes no sense to use --check-memory without --wait" + ) + + # Normalize compare/generate between the models + 
baseline_cmp_name = None + baseline_gen_name = None + if args.compare or args.generate: + if model_config.create_test_flag_mode == "cesm": + if args.compare is not None: + baseline_cmp_name = args.compare + if args.generate is not None: + baseline_gen_name = args.generate + else: + baseline_name = ( + args.baseline_name + if args.baseline_name + else CIME.utils.get_current_branch(repo=CIME.utils.get_cime_root()) + ) + expect( + baseline_name is not None, + "Could not determine baseline name from branch, please use -b option", + ) + if args.compare: + baseline_cmp_name = baseline_name + elif args.generate: + baseline_gen_name = baseline_name + + if args.input_dir is not None: + args.input_dir = os.path.abspath(args.input_dir) + + # sanity check + for name in test_names: + dot_count = name.count(".") + expect(dot_count > 1 and dot_count <= 4, "Invalid test Name, '{}'".format(name)) + + # for e3sm, sort by walltime + if model_config.sort_tests: + if args.walltime is None: + # Longest tests should run first + test_names.sort(key=get_tests.key_test_time, reverse=True) + else: + test_names.sort() + + return ( + test_names, + test_extra_data, + args.compiler, + mach_obj.get_machine_name(), + args.no_run, + args.no_build, + args.no_setup, + args.no_batch, + args.test_root, + args.baseline_root, + args.clean, + baseline_cmp_name, + baseline_gen_name, + args.namelists_only, + args.project, + args.test_id, + args.parallel_jobs, + args.walltime, + args.single_submit, + args.proc_pool, + args.use_existing, + args.save_timing, + args.queue, + args.allow_baseline_overwrite, + args.output_root, + args.wait, + args.force_procs, + args.force_threads, + args.mpilib, + args.input_dir, + args.pesfile, + args.retry, + args.mail_user, + args.mail_type, + args.check_throughput, + args.check_memory, + args.ignore_namelists, + args.ignore_memleak, + args.allow_pnl, + args.non_local, + args.single_exe, + args.workflow, + args.chksum, + args.force_rebuild, + )
+ + + +############################################################################### +
+[docs] +def get_default_setting(config, varname, default_if_not_found, check_main=False): + ############################################################################### + if config.has_option("create_test", varname): + default = config.get("create_test", varname) + elif check_main and config.has_option("main", varname): + default = config.get("main", varname) + else: + default = default_if_not_found + return default
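+# Resolution order (sketch): for e.g. get_default_setting(config, "MACHINE",
+# None, check_main=True), the [create_test] section of the user's CIME config
+# is consulted first, then (because check_main=True) the [main] section, and
+# finally default_if_not_found (here None) is returned.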
+ + + +############################################################################### +
+[docs] +def single_submit_impl( + machine_name, test_id, proc_pool, _, args, job_cost_map, wall_time, test_root +): + ############################################################################### + mach = Machines(machine=machine_name) + expect( + mach.has_batch_system(), + "Single submit does not make sense on non-batch machine '%s'" + % mach.get_machine_name(), + ) + + machine_name = mach.get_machine_name() + + # + # Compute arg list for second call to create_test + # + new_args = list(args) + new_args.remove("--single-submit") + new_args.append("--no-batch") + new_args.append("--use-existing") + no_arg_is_a_test_id_arg = True + no_arg_is_a_proc_pool_arg = True + no_arg_is_a_machine_arg = True + for arg in new_args: + if arg == "-t" or arg.startswith("--test-id"): + no_arg_is_a_test_id_arg = False + elif arg.startswith("--proc-pool"): + no_arg_is_a_proc_pool_arg = False + elif arg == "-m" or arg.startswith("--machine"): + no_arg_is_a_machine_arg = False + + if no_arg_is_a_test_id_arg: + new_args.append("-t %s" % test_id) + if no_arg_is_a_proc_pool_arg: + new_args.append("--proc-pool %d" % proc_pool) + if no_arg_is_a_machine_arg: + new_args.append("-m %s" % machine_name) + + # + # Resolve batch directives manually. There is currently no other way + # to do this without making a Case object. Make a throwaway case object + # to help us here. 
+ # + testcase_dirs = glob.glob("%s/*%s*/TestStatus" % (test_root, test_id)) + expect(testcase_dirs, "No test case dirs found!?") + first_case = os.path.abspath(os.path.dirname(testcase_dirs[0])) + with Case(first_case, read_only=False) as case: + env_batch = case.get_env("batch") + + submit_cmd = env_batch.get_value("batch_submit", subgroup=None) + submit_args = env_batch.get_submit_args(case, "case.test") + + tasks_per_node = mach.get_value("MAX_MPITASKS_PER_NODE") + num_nodes = int(math.ceil(float(proc_pool) / tasks_per_node)) + if wall_time is None: + wall_time = compute_total_time(job_cost_map, proc_pool) + wall_time_bab = convert_to_babylonian_time(int(wall_time)) + else: + wall_time_bab = wall_time + + queue = env_batch.select_best_queue(num_nodes, proc_pool, walltime=wall_time_bab) + wall_time_max_bab = env_batch.get_queue_specs(queue)[3] + if wall_time_max_bab is not None: + wall_time_max = convert_to_seconds(wall_time_max_bab) + if wall_time_max < wall_time: + wall_time = wall_time_max + wall_time_bab = convert_to_babylonian_time(wall_time) + + overrides = { + "job_id": "create_test_single_submit_%s" % test_id, + "num_nodes": num_nodes, + "tasks_per_node": tasks_per_node, + "totaltasks": tasks_per_node * num_nodes, + "job_wallclock_time": wall_time_bab, + "job_queue": env_batch.text(queue), + } + + directives = env_batch.get_batch_directives(case, "case.test", overrides=overrides) + + # + # Make simple submit script and submit + # + + script = "#! /bin/bash\n" + script += "\n%s" % directives + script += "\n" + script += "cd %s\n" % os.getcwd() + script += "%s %s\n" % (__file__, " ".join(new_args)) + + submit_cmd = "%s %s" % (submit_cmd, submit_args) + logger.info("Script:\n%s" % script) + + run_cmd_no_fail( + submit_cmd, input_str=script, arg_stdout=None, arg_stderr=None, verbose=True + )
+ + + +############################################################################### +# pragma pylint: disable=protected-access +
+[docs] +def create_test( + test_names, + test_data, + compiler, + machine_name, + no_run, + no_build, + no_setup, + no_batch, + test_root, + baseline_root, + clean, + baseline_cmp_name, + baseline_gen_name, + namelists_only, + project, + test_id, + parallel_jobs, + walltime, + single_submit, + proc_pool, + use_existing, + save_timing, + queue, + allow_baseline_overwrite, + output_root, + wait, + force_procs, + force_threads, + mpilib, + input_dir, + pesfile, + run_count, + mail_user, + mail_type, + check_throughput, + check_memory, + ignore_namelists, + ignore_memleak, + allow_pnl, + non_local, + single_exe, + workflow, + chksum, + force_rebuild, +): + ############################################################################### + impl = TestScheduler( + test_names, + test_data=test_data, + no_run=no_run, + no_build=no_build, + no_setup=no_setup, + no_batch=no_batch, + test_root=test_root, + test_id=test_id, + baseline_root=baseline_root, + baseline_cmp_name=baseline_cmp_name, + baseline_gen_name=baseline_gen_name, + clean=clean, + machine_name=machine_name, + compiler=compiler, + namelists_only=namelists_only, + project=project, + parallel_jobs=parallel_jobs, + walltime=walltime, + proc_pool=proc_pool, + use_existing=use_existing, + save_timing=save_timing, + queue=queue, + allow_baseline_overwrite=allow_baseline_overwrite, + output_root=output_root, + force_procs=force_procs, + force_threads=force_threads, + mpilib=mpilib, + input_dir=input_dir, + pesfile=pesfile, + run_count=run_count, + mail_user=mail_user, + mail_type=mail_type, + allow_pnl=allow_pnl, + non_local=non_local, + single_exe=single_exe, + workflow=workflow, + chksum=chksum, + force_rebuild=force_rebuild, + ) + + success = impl.run_tests( + wait=wait, + check_throughput=check_throughput, + check_memory=check_memory, + ignore_namelists=ignore_namelists, + ignore_memleak=ignore_memleak, + ) + + if success and single_submit: + # Get real test root + test_root = impl._test_root + + job_cost_map = {} 
+ largest_case = 0 + for test in impl._tests: + test_dir = impl._get_test_dir(test) + procs_needed = impl._get_procs_needed(test, RUN_PHASE) + time_needed = convert_to_seconds( + run_cmd_no_fail( + "./xmlquery JOB_WALLCLOCK_TIME -value -subgroup case.test", + from_dir=test_dir, + ) + ) + job_cost_map[test] = (procs_needed, time_needed) + if procs_needed > largest_case: + largest_case = procs_needed + + if proc_pool is None: + # Based on size of created jobs, choose a reasonable proc_pool. May need to put + # more thought into this. + proc_pool = 2 * largest_case + + # Create submit script + single_submit_impl( + machine_name, + test_id, + proc_pool, + project, + sys.argv[1:], + job_cost_map, + walltime, + test_root, + ) + + return success
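The single-submit sizing above collects a `(procs_needed, time_needed)` pair per test and defaults `proc_pool` to twice the largest job. A standalone sketch of that arithmetic; the test names and proc counts are made up, and `to_seconds` is a simplified stand-in for CIME's `convert_to_seconds` helper:

```python
# Sketch of the single-submit sizing logic, with made-up job data.
# to_seconds is a stand-in for CIME's convert_to_seconds.
def to_seconds(wallclock):
    # "HH:MM:SS" -> seconds
    hours, minutes, seconds = (int(part) for part in wallclock.split(":"))
    return 3600 * hours + 60 * minutes + seconds

# (procs_needed, time_needed) per test, mirroring job_cost_map
job_cost_map = {
    "SMS.f19_g16.A": (128, to_seconds("00:10:00")),
    "ERS.f09_g17.B": (512, to_seconds("00:30:00")),
}

largest_case = max(procs for procs, _ in job_cost_map.values())
proc_pool = 2 * largest_case  # default pool: twice the largest job
print(largest_case, proc_pool)  # 512 1024
```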
+ + + +############################################################################### +def _main_func(description=None): + ############################################################################### + customize_path = os.path.join(utils.get_src_root(), "cime_config", "customize") + + if os.path.exists(customize_path): + Config.instance().load(customize_path) + + ( + test_names, + test_data, + compiler, + machine_name, + no_run, + no_build, + no_setup, + no_batch, + test_root, + baseline_root, + clean, + baseline_cmp_name, + baseline_gen_name, + namelists_only, + project, + test_id, + parallel_jobs, + walltime, + single_submit, + proc_pool, + use_existing, + save_timing, + queue, + allow_baseline_overwrite, + output_root, + wait, + force_procs, + force_threads, + mpilib, + input_dir, + pesfile, + retry, + mail_user, + mail_type, + check_throughput, + check_memory, + ignore_namelists, + ignore_memleak, + allow_pnl, + non_local, + single_exe, + workflow, + chksum, + force_rebuild, + ) = parse_command_line(sys.argv, description) + + success = False + run_count = 0 + while not success and run_count <= retry: + use_existing = use_existing if run_count == 0 else True + allow_baseline_overwrite = allow_baseline_overwrite if run_count == 0 else True + success = create_test( + test_names, + test_data, + compiler, + machine_name, + no_run, + no_build, + no_setup, + no_batch, + test_root, + baseline_root, + clean, + baseline_cmp_name, + baseline_gen_name, + namelists_only, + project, + test_id, + parallel_jobs, + walltime, + single_submit, + proc_pool, + use_existing, + save_timing, + queue, + allow_baseline_overwrite, + output_root, + wait, + force_procs, + force_threads, + mpilib, + input_dir, + pesfile, + run_count, + mail_user, + mail_type, + check_throughput, + check_memory, + ignore_namelists, + ignore_memleak, + allow_pnl, + non_local, + single_exe, + workflow, + chksum, + force_rebuild, + ) + run_count += 1 + + # For testing only + os.environ["TESTBUILDFAIL_PASS"] 
= "True" + os.environ["TESTRUNFAIL_PASS"] = "True" + + sys.exit(0 if success else CIME.utils.TESTS_FAILED_ERR_CODE) + + +############################################################################### + +if __name__ == "__main__": + _main_func(__doc__) +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/scripts/query_config.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/scripts/query_config.html new file mode 100644 index 00000000000..3aae0b3ed8c --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/scripts/query_config.html @@ -0,0 +1,657 @@ + + + + + + CIME.scripts.query_config — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.scripts.query_config

+#!/usr/bin/env python3
+"""
+Displays information about available compsets, component settings, grids and/or
+machines. Typically run with one of the arguments --compsets, --components,
+--grids or --machines; if you specify more than one of these arguments,
+information will be listed for each.
+"""
+
+from CIME.Tools.standard_script_setup import *
+import re
+from CIME.utils import expect, get_cime_default_driver, deprecate_action
+from CIME.XML.files import Files
+from CIME.XML.component import Component
+from CIME.XML.compsets import Compsets
+from CIME.XML.grids import Grids
+from CIME.config import Config
+
+# from CIME.XML.machines  import Machines
+import CIME.XML.machines
+from argparse import RawTextHelpFormatter
+
+logger = logging.getLogger(__name__)
+
+customize_path = os.path.join(CIME.utils.get_src_root(), "cime_config", "customize")
+
+config = Config.load(customize_path)
+
+supported_comp_interfaces = list(config.driver_choices)
+
+
+
+[docs] +def query_grids(files, long_output, xml=False): + """ + query all grids. + """ + config_file = files.get_value("GRIDS_SPEC_FILE") + expect( + os.path.isfile(config_file), + "Cannot find config_file {} on disk".format(config_file), + ) + + grids = Grids(config_file) + if xml: + print("{}".format(grids.get_raw_record().decode("UTF-8"))) + elif long_output: + grids.print_values(long_output=long_output) + else: + grids.print_values()
+ + + +
+[docs]
+def query_machines(files, machine_name="all", xml=False):
+    """
+    query machines. Default: all
+    """
+    config_file = files.get_value("MACHINES_SPEC_FILE")
+    expect(
+        os.path.isfile(config_file),
+        "Cannot find config_file {} on disk".format(config_file),
+    )
+    # Provide a special machine name indicating no need for a machine name
+    machines = Machines(config_file, machine="Query")
+    if xml:
+        if machine_name == "all":
+            print("{}".format(machines.get_raw_record().decode("UTF-8")))
+        else:
+            machines.set_machine(machine_name)
+            print(
+                "{}".format(
+                    machines.get_raw_record(root=machines.machine_node).decode("UTF-8")
+                )
+            )
+    else:
+        machines.print_values(machine_name=machine_name)
+ + + +
+[docs]
+def query_compsets(files, name, xml=False):
+    """
+    query compset definition given a compset name
+    """
+    # Determine valid component values by checking the value attributes for COMPSETS_SPEC_FILE
+    components = get_compsets(files)
+    match_found = None
+    all_components = False
+    if re.search("^all$", name):  # print all compsets
+        match_found = name
+        all_components = True
+    else:
+        for component in components:
+            if component == name:
+                match_found = name
+                break
+
+    # If name is not a valid argument - exit with error
+    expect(
+        match_found is not None,
+        "Invalid input argument {}, valid input arguments are {}".format(
+            name, components
+        ),
+    )
+
+    if all_components:  # print all compsets
+        for component in components:
+            # the all_components flag will only print available components
+            print_compset(component, files, all_components=all_components, xml=xml)
+    else:
+        print_compset(name, files, xml=xml)
+ + + + + + + +
+[docs] +def query_all_components(files, xml=False): + """ + query all components + """ + components = get_components(files) + # Loop through the elements for each component class (in config_files.xml) + for comp in components: + string = "CONFIG_{}_FILE".format(comp) + + # determine all components in string + components = files.get_components(string) + for item in components: + query_component(item, files, all_components=True, xml=xml)
+ + + +
+[docs]
+def query_component(name, files, all_components=False, xml=False):
+    """
+    query a component by name
+    """
+    # Determine the valid component classes (e.g. atm) for the driver/cpl
+    # These are then stored in comps_array
+    components = get_components(files)
+
+    # Loop through the elements for each component class (in config_files.xml)
+    # and see if there is a match for the target component in the component attribute
+    match_found = False
+    valid_components = []
+    config_exists = False
+    for comp in components:
+        string = "CONFIG_{}_FILE".format(comp)
+        config_file = None
+        # determine all components in string
+        root_dir_node_name = "COMP_ROOT_DIR_{}".format(comp)
+        components = files.get_components(root_dir_node_name)
+        if components is None:
+            components = files.get_components(string)
+        for item in components:
+            valid_components.append(item)
+        logger.debug("{}: valid_components {}".format(comp, valid_components))
+        # determine if config_file is on disk
+        if name is None:
+            config_file = files.get_value(string)
+        elif name in valid_components:
+            config_file = files.get_value(string, attribute={"component": name})
+        logger.debug("query {}".format(config_file))
+        if config_file is not None:
+            match_found = True
+            config_exists = os.path.isfile(config_file)
+            break
+
+    if not all_components and not config_exists:
+        expect(config_exists, "Cannot find config_file {} on disk".format(config_file))
+    elif all_components and not config_exists:
+        print("WARNING: Couldn't find config_file {} on disk".format(config_file))
+        return
+    # If name is not a valid argument - exit with error
+    expect(
+        match_found,
+        "Invalid input argument {}, valid input arguments are {}".format(
+            name, valid_components
+        ),
+    )
+
+    # Check that file exists on disk, if not exit with error
+    expect(
+        (config_file), "Cannot find any config_component.xml file for {}".format(name)
+    )
+
+    # determine component xml content
+    component = Component(config_file, "CPL")
+    if xml:
+ 
print("{}".format(component.get_raw_record().decode("UTF-8"))) + else: + component.print_values()
+ + + +
+[docs] +def parse_command_line(args, description): + """ + parse command line arguments + """ + cime_model = CIME.utils.get_model() + + parser = ArgumentParser( + description=description, formatter_class=RawTextHelpFormatter + ) + + CIME.utils.setup_standard_logging_options(parser) + + valid_components = ["all"] + + parser.add_argument("--xml", action="store_true", help="Output in xml format.") + + files = {} + for comp_interface in supported_comp_interfaces: + files[comp_interface] = Files(comp_interface=comp_interface) + components = files[comp_interface].get_components("COMPSETS_SPEC_FILE") + for item in components: + valid_components.append(item) + + parser.add_argument( + "--compsets", + nargs="?", + const="all", + choices=valid_components, + help="Query compsets corresponding to the target component for the {} model." + " If no component is given, lists compsets defined by all components".format( + cime_model + ), + ) + + # Loop through the elements for each component class (in config_files.xml) + valid_components = ["all"] + tmp_comp_interfaces = supported_comp_interfaces + for comp_interface in tmp_comp_interfaces: + try: + components = get_components(files[comp_interface]) + except Exception: + supported_comp_interfaces.remove(comp_interface) + + for comp in components: + string = config.xml_component_key.format(comp) + + # determine all components in string + components = files[comp_interface].get_components(string) + if components: + for item in components: + valid_components.append(item) + + parser.add_argument( + "--components", + nargs="?", + const="all", + choices=valid_components, + help="Query component settings corresponding to the target component for {} model." 
+ "\nIf the option is empty, then the lists settings defined by all components is output".format( + cime_model + ), + ) + + parser.add_argument( + "--grids", + action="store_true", + help="Query supported model grids for {} model.".format(cime_model), + ) + # same for all comp_interfaces + config_file = files["mct"].get_value("MACHINES_SPEC_FILE") + expect( + os.path.isfile(config_file), + "Cannot find config_file {} on disk".format(config_file), + ) + machines = Machines(config_file, machine="Query") + machine_names = ["all", "current"] + machine_names.extend(machines.list_available_machines()) + + parser.add_argument( + "--machines", + nargs="?", + const="all", + choices=machine_names, + help="Query supported machines for {} model." + "\nIf option is left empty then all machines are listed," + "\nIf the option is 'current' then only the current machine details are listed.".format( + cime_model + ), + ) + + parser.add_argument( + "--long", action="store_true", help="Provide long output for queries" + ) + + parser.add_argument( + "--comp_interface", + choices=supported_comp_interfaces, # same as config.driver_choices + default="mct", + action=deprecate_action(", use --driver argument"), + help="DEPRECATED: Use --driver argument", + ) + + parser.add_argument( + "--driver", + choices=config.driver_choices, + default=get_cime_default_driver(), + help="Coupler/Driver interface", + ) + + args = CIME.utils.parse_args_and_handle_standard_logging_options(args, parser) + + # make sure at least one argument has been passed + if not (args.grids or args.compsets or args.components or args.machines): + parser.print_help(sys.stderr) + + return ( + args.grids, + args.compsets, + args.components, + args.machines, + args.long, + args.xml, + files[args.driver], + )
+ + + +
+[docs] +def get_compsets(files): + """ + Determine valid component values by checking the value attributes for COMPSETS_SPEC_FILE + """ + return files.get_components("COMPSETS_SPEC_FILE")
+ + + +
+[docs] +def get_components(files): + """ + Determine the valid component classes (e.g. atm) for the driver/cpl + These are then stored in comps_array + """ + infile = files.get_value("CONFIG_CPL_FILE") + config_drv = Component(infile, "CPL") + return config_drv.get_valid_model_components()
+ + + +
+[docs] +class ArgumentParser(argparse.ArgumentParser): + """ + we override the error message from ArgumentParser to have a more helpful + message in the case of missing arguments + """ + +
+[docs] + def error(self, message): + self.print_usage(sys.stderr) + # missing argument + # TODO: assumes comp_interface='mct' + if "expected one argument" in message: + if "compset" in message: + components = get_compsets(Files(comp_interface="mct")) + self.exit( + 2, + "{}: error: {}\nValid input arguments are {}\n".format( + self.prog, message, components + ), + ) + elif "component" in message: + files = Files(comp_interface="mct") + components = get_components(files) + # Loop through the elements for each component class (in config_files.xml) + valid_components = [] + for comp in components: + string = "CONFIG_{}_FILE".format(comp) + + # determine all components in string + components = files.get_components(string) + for item in components: + valid_components.append(item) + self.exit( + 2, + "{}: error: {}\nValid input arguments are {}\n".format( + self.prog, message, valid_components + ), + ) + # for all other errors + self.exit(2, "{}: error: {}\n".format(self.prog, message))
+
+ + + +
+[docs]
+class Machines(CIME.XML.machines.Machines):
+    """
+    we override print_values from Machines to mark the current machine in the description
+    """
+
+[docs]
+    def print_values(self, machine_name="all"):  # pylint: disable=arguments-differ
+        # set flag to look for single machine
+        if "all" not in machine_name:
+            single_machine = True
+            if machine_name == "current":
+                machine_name = self.probe_machine_name(warn=False)
+        else:
+            single_machine = False
+
+        # if we can't find the specified machine
+        if single_machine and machine_name is None:
+            files = Files()
+            config_file = files.get_value("MACHINES_SPEC_FILE")
+            print("Machine is not listed in config file: {}".format(config_file))
+        else:  # write out machines
+            if single_machine:
+                machine_names = [machine_name]
+            else:
+                machine_names = self.list_available_machines()
+            print("Machine(s)\n")
+            for name in machine_names:
+                self.set_machine(name)
+                desc = self.text(self.get_child("DESC"))
+                os_ = self.text(self.get_child("OS"))
+                compilers = self.text(self.get_child("COMPILERS"))
+                mpilibnodes = self.get_children("MPILIBS", root=self.machine_node)
+                mpilibs = []
+                for node in mpilibnodes:
+                    mpilibs.extend(self.text(node).split(","))
+                # This does not include the possible dependency of mpilib on compiler;
+                # it simply provides a list of mpilibs available on the machine
+                mpilibs = list(set(mpilibs))
+                max_tasks_per_node = self.text(self.get_child("MAX_TASKS_PER_NODE"))
+                mpitasks_node = self.get_optional_child(
+                    "MAX_MPITASKS_PER_NODE", root=self.machine_node
+                )
+                max_mpitasks_per_node = (
+                    self.text(mpitasks_node) if mpitasks_node else max_tasks_per_node
+                )
+                max_gpus_node = self.get_optional_child(
+                    "MAX_GPUS_PER_NODE", root=self.machine_node
+                )
+                max_gpus_per_node = self.text(max_gpus_node) if max_gpus_node else 0
+
+                current_machine = self.probe_machine_name(warn=False)
+                name += (
+                    " (current)" if current_machine and current_machine in name else ""
+                )
+                print(" {} : {} ".format(name, desc))
+                print(" os ", os_)
+                print(" compilers ", compilers)
+                print(" mpilibs ", mpilibs)
+                if max_mpitasks_per_node is not None:
+                    print(" pes/node ", 
max_mpitasks_per_node) + if max_tasks_per_node is not None: + print(" max_tasks/node ", max_tasks_per_node) + if max_gpus_per_node is not None: + print(" max_gpus/node ", max_gpus_per_node) + print("")
+
+ + + +def _main_func(description=None): + """ + main function + """ + grids, compsets, components, machines, long_output, xml, files = parse_command_line( + sys.argv, description + ) + + if grids: + query_grids(files, long_output, xml=xml) + + if compsets is not None: + query_compsets(files, name=compsets, xml=xml) + + if components is not None: + if re.search("^all$", components): # print all compsets + query_all_components(files, xml=xml) + else: + query_component(components, files, xml=xml) + + if machines is not None: + query_machines(files, machine_name=machines, xml=xml) + + +# main entry point +if __name__ == "__main__": + _main_func(__doc__) +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/scripts/query_testlists.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/scripts/query_testlists.html new file mode 100644 index 00000000000..4d06107c22f --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/scripts/query_testlists.html @@ -0,0 +1,402 @@ + + + + + + CIME.scripts.query_testlists — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.scripts.query_testlists

+#!/usr/bin/env python3
+
+"""
+Script to query xml test lists, displaying all tests in human-readable form.
+
+Usage:
+   ./query_testlists [--show-options] [--define-testtypes]
+      Display a list of tests
+   ./query_testlists --count
+      Count tests by category/machine/compiler
+   ./query_testlists --list {category,categories,machine,machines,compiler,compilers}
+      List the available options for --xml-category, --xml-machine, or --xml-compiler
+
+   All of the above support the various --xml-* arguments for subsetting which tests are included.
+"""
+
+from CIME.Tools.standard_script_setup import *
+from CIME.test_utils import get_tests_from_xml, test_to_string
+from CIME.XML.tests import Tests
+from CIME.utils import expect
+
+logger = logging.getLogger(__name__)
+
+###############################################################################
+
+[docs] +def parse_command_line(args, description): + ############################################################################### + parser = argparse.ArgumentParser( + description=description, formatter_class=argparse.RawTextHelpFormatter + ) + + CIME.utils.setup_standard_logging_options(parser) + + parser.add_argument( + "--count", + action="store_true", + help="Rather than listing tests, just give counts by category/machine/compiler.", + ) + + parser.add_argument( + "--list", + dest="list_type", + choices=[ + "category", + "categories", + "machine", + "machines", + "compiler", + "compilers", + ], + help="Rather than listing tests, list the available options for\n" + "--xml-category, --xml-machine, or --xml-compiler.\n" + "(The singular and plural forms are equivalent - so '--list category'\n" + "is equivalent to '--list categories', etc.)", + ) + + parser.add_argument( + "--show-options", + action="store_true", + help="For each test, also show options for that test\n" + "(wallclock time, memory leak tolerance, etc.).\n" + "(Has no effect with --list or --count options.)", + ) + + parser.add_argument( + "--define-testtypes", + action="store_true", + help="At the top of the list of tests, define all of the possible test types.\n" + "(Has no effect with --list or --count options.)", + ) + + parser.add_argument( + "--xml-category", + help="Only include tests in this category; default is all categories.", + ) + + parser.add_argument( + "--xml-machine", + help="Only include tests for this machine; default is all machines.", + ) + + parser.add_argument( + "--xml-compiler", + help="Only include tests for this compiler; default is all compilers.", + ) + + parser.add_argument( + "--xml-testlist", + help="Path to testlist file from which tests are gathered;\n" + "default is all files specified in config_files.xml.", + ) + + args = CIME.utils.parse_args_and_handle_standard_logging_options(args, parser) + + _check_argument_compatibility(args) + + if args.list_type: + 
_process_list_type(args) + + return args
+ + + +############################################################################### +def _check_argument_compatibility(args): + ############################################################################### + """Ensures there are no incompatible arguments + + If incompatible arguments are found, aborts with a helpful error + message. + """ + + expect( + not (args.count and args.list_type), + "Cannot specify both --count and --list arguments.", + ) + + if args.count: + expect(not args.show_options, "--show-options is incompatible with --count") + expect( + not args.define_testtypes, "--define-testtypes is incompatible with --count" + ) + + if args.list_type: + expect(not args.show_options, "--show-options is incompatible with --list") + expect( + not args.define_testtypes, "--define-testtypes is incompatible with --list" + ) + + +############################################################################### +def _process_list_type(args): + ############################################################################### + """Convert args.list_type into a name that matches one of the keys of the + test data dictionaries + + Args: + args: object containing list_type string attribute + """ + + if args.list_type == "categories": + args.list_type = "category" + elif args.list_type == "machines": + args.list_type = "machine" + elif args.list_type == "compilers": + args.list_type = "compiler" + + +############################################################################### + + + + +############################################################################### +
+[docs] +def count_test_data(test_data): + ############################################################################### + """ + Args: + test_data (dict): dictionary of test data, containing at least these keys: + - name: full test name + - category: test category + - machine + - compiler + """ + + tab_stop = " " * 4 + + categories = sorted(set([item["category"] for item in test_data])) + for category in categories: + tests_this_category = [ + one_test for one_test in test_data if one_test["category"] == category + ] + print("%s: %d" % (category, len(tests_this_category))) + + machines = sorted(set([item["machine"] for item in tests_this_category])) + for machine in machines: + tests_this_machine = [ + one_test + for one_test in tests_this_category + if one_test["machine"] == machine + ] + print("%s%s: %d" % (tab_stop, machine, len(tests_this_machine))) + + compilers = sorted(set([item["compiler"] for item in tests_this_machine])) + for compiler in compilers: + tests_this_compiler = [ + one_test + for one_test in tests_this_machine + if one_test["compiler"] == compiler + ] + print("%s%s: %d" % (tab_stop * 2, compiler, len(tests_this_compiler)))
+ + + +############################################################################### +
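The nested category/machine/compiler grouping in `count_test_data` above can be exercised with plain dicts. The test records below are hypothetical, not real CIME tests:

```python
# Hypothetical test records exercising the category/machine/compiler
# grouping used by count_test_data(); counts are collected into a dict
# instead of printed, to make the grouping explicit.
test_data = [
    {"name": "T1", "category": "prealpha", "machine": "docker", "compiler": "gnu"},
    {"name": "T2", "category": "prealpha", "machine": "docker", "compiler": "gnu"},
    {"name": "T3", "category": "prealpha", "machine": "hpc01", "compiler": "intel"},
]

counts = {}
for category in sorted({t["category"] for t in test_data}):
    in_cat = [t for t in test_data if t["category"] == category]
    counts[category] = len(in_cat)
    for machine in sorted({t["machine"] for t in in_cat}):
        in_mach = [t for t in in_cat if t["machine"] == machine]
        counts[(category, machine)] = len(in_mach)

print(counts)
```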
+[docs] +def list_test_data(test_data, list_type): + ############################################################################### + """List categories, machines or compilers + + Args: + test_data (dict): dictionary of test data, containing at least these keys: + - category + - machine + - compiler + list_type (str): one of 'category', 'machine' or 'compiler' + """ + + items = sorted(set([one_test[list_type] for one_test in test_data])) + for item in items: + print(item)
+ + + +############################################################################### +def _main_func(description=None): + ############################################################################### + args = parse_command_line(sys.argv, description) + + test_data = get_tests_from_xml( + xml_machine=args.xml_machine, + xml_category=args.xml_category, + xml_compiler=args.xml_compiler, + xml_testlist=args.xml_testlist, + ) + + expect( + test_data, + "No tests found with the following options (where 'None' means no subsetting on that attribute):\n" + "\tMachine = %s\n\tCategory = %s\n\tCompiler = %s\n\tTestlist = %s" + % (args.xml_machine, args.xml_category, args.xml_compiler, args.xml_testlist), + ) + + if args.count: + count_test_data(test_data) + elif args.list_type: + list_test_data(test_data, args.list_type) + else: + print_test_data(test_data, args.show_options, args.define_testtypes) + + +if __name__ == "__main__": + _main_func(__doc__) +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/simple_compare.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/simple_compare.html new file mode 100644 index 00000000000..93f57a64b38 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/simple_compare.html @@ -0,0 +1,378 @@ + + + + + + CIME.simple_compare — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.simple_compare

+import os, re
+
+from CIME.utils import expect
+
+###############################################################################
+def _normalize_string_value(value, case):
+    ###############################################################################
+    """
+    Some of the strings are inherently prone to diffs, like file
+    paths, etc. This function attempts to normalize that data so that
+    it will not cause diffs.
+    """
+    # Any occurrence of case must be normalized because test-ids might not match
+    if case is not None:
+        case_re = re.compile(r"{}[.]([GC])[.]([^./\s]+)".format(case))
+        value = case_re.sub("{}.ACTION.TESTID".format(case), value)
+
+    if "/" in value:
+        # File path, just return the basename
+        return os.path.basename(value)
+    elif "username" in value:
+        return ""
+    elif ".log." in value:
+        # Remove the part that's prone to diff
+        components = value.split(".")
+        return os.path.basename(".".join(components[0:-1]))
+    else:
+        return value
+
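The case normalization above collapses the `<case>.<G|C>.<testid>` pattern to a fixed token so differing test-ids don't register as diffs. A standalone sketch; `mycase` and the sample string are made up:

```python
import re

# Sketch of the case normalization: the "<case>.<G|C>.<testid>" suffix
# collapses to "<case>.ACTION.TESTID" so test-id differences are ignored.
case = "mycase"  # made-up case name
case_re = re.compile(r"{}[.]([GC])[.]([^./\s]+)".format(case))

value = "baseline mycase.G.20240101_120000 created"
normalized = case_re.sub("{}.ACTION.TESTID".format(case), value)
print(normalized)  # baseline mycase.ACTION.TESTID created
```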
+
+###############################################################################
+def _skip_comments_and_whitespace(lines, idx):
+    ###############################################################################
+    """
+    Starting at idx, return the next index into lines that contains real data
+    """
+    if idx == len(lines):
+        return idx
+
+    comment_re = re.compile(r"^[#!]")
+
+    lines_slice = lines[idx:]
+    for line in lines_slice:
+        line = line.strip()
+        if comment_re.match(line) is not None or line == "":
+            idx += 1
+        else:
+            return idx
+
+    return idx
+
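The skipping logic above can be written as a self-contained helper; this is a sketch equivalent to `_skip_comments_and_whitespace`, not the module's own function:

```python
import re

# Standalone sketch of _skip_comments_and_whitespace(): advance idx past
# blank lines and '#'/'!' comment lines to the next line with real data.
def skip_comments_and_whitespace(lines, idx):
    comment_re = re.compile(r"^[#!]")
    while idx < len(lines):
        stripped = lines[idx].strip()
        if stripped == "" or comment_re.match(stripped):
            idx += 1
        else:
            break
    return idx

lines = ["", "# header", "! fortran-style comment", "data1", "data2"]
print(skip_comments_and_whitespace(lines, 0))  # 3, the index of "data1"
```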
+
+###############################################################################
+def _compare_data(gold_lines, comp_lines, case, offset_method=False):
+    ###############################################################################
+    """
+    >>> teststr = '''
+    ... data1
+    ... data2 data3
+    ... data4 data5 data6
+    ...
+    ... # Comment
+    ... data7 data8 data9 data10
+    ... '''
+    >>> _compare_data(teststr.splitlines(), teststr.splitlines(), None)
+    ('', 0)
+
+    >>> teststr2 = '''
+    ... data1
+    ... data2 data30
+    ... data4 data5 data6
+    ... data7 data8 data9 data10
+    ... data00
+    ... '''
+    >>> results,_ = _compare_data(teststr.splitlines(), teststr2.splitlines(), None)
+    >>> print(results)
+    Inequivalent lines data2 data3 != data2 data30
+      NORMALIZED: data2 data3 != data2 data30
+    Found extra lines
+    data00
+    <BLANKLINE>
+    >>> teststr3 = '''
+    ... data1
+    ... data4 data5 data6
+    ... data7 data8 data9 data10
+    ... data00
+    ... '''
+    >>> results,_ = _compare_data(teststr3.splitlines(), teststr2.splitlines(), None, offset_method=True)
+    >>> print(results)
+    Inequivalent lines data4 data5 data6 != data2 data30
+      NORMALIZED: data4 data5 data6 != data2 data30
+    <BLANKLINE>
+    """
+    comments = ""
+    cnt = 0
+    gidx, cidx = 0, 0
+    gnum, cnum = len(gold_lines), len(comp_lines)
+    while gidx < gnum or cidx < cnum:
+        gidx = _skip_comments_and_whitespace(gold_lines, gidx)
+        cidx = _skip_comments_and_whitespace(comp_lines, cidx)
+
+        if gidx == gnum:
+            if cidx == cnum:
+                return comments, cnt
+            else:
+                comments += "Found extra lines\n"
+                comments += "\n".join(comp_lines[cidx:]) + "\n"
+                return comments, cnt
+        elif cidx == cnum:
+            comments += "Missing lines\n"
+            comments += "\n".join(gold_lines[gidx:]) + "\n"
+            return comments, cnt
+
+        gold_value = gold_lines[gidx].strip()
+        gold_value = gold_value.replace('"', "'")
+        comp_value = comp_lines[cidx].strip()
+        comp_value = comp_value.replace('"', "'")
+
+        norm_gold_value = _normalize_string_value(gold_value, case)
+        norm_comp_value = _normalize_string_value(comp_value, case)
+
+        if norm_gold_value != norm_comp_value:
+            comments += "Inequivalent lines {} != {}\n".format(gold_value, comp_value)
+            comments += "  NORMALIZED: {} != {}\n".format(
+                norm_gold_value, norm_comp_value
+            )
+            cnt += 1
+        if offset_method and (norm_gold_value != norm_comp_value):
+            if gnum > cnum:
+                gidx += 1
+            else:
+                cidx += 1
+        else:
+            gidx += 1
+            cidx += 1
+
+    return comments, cnt
+
+
+###############################################################################
+
+[docs] +def compare_files(gold_file, compare_file, case=None): + ############################################################################### + """ + Returns true if files are the same, comments are returned too: + (success, comments) + """ + expect(os.path.exists(gold_file), "File not found: {}".format(gold_file)) + expect(os.path.exists(compare_file), "File not found: {}".format(compare_file)) + + comments, cnt = _compare_data( + open(gold_file, "r").readlines(), open(compare_file, "r").readlines(), case + ) + + if cnt > 0: + comments2, cnt2 = _compare_data( + open(gold_file, "r").readlines(), + open(compare_file, "r").readlines(), + case, + offset_method=True, + ) + if cnt2 < cnt: + comments = comments2 + + return comments == "", comments
+ + + +############################################################################### +
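`compare_files` runs `_compare_data` twice when diffs are found: once with strictly paired line-by-line comparison and once with `offset_method=True`, keeping whichever report has fewer diffs. A toy illustration (with made-up data) of why the offset pass helps when a single line is missing:

```python
# With strictly paired comparison, one dropped line makes every later
# line mismatch; the offset pass advances only the longer side on a
# mismatch, resynchronizing after the single missing line.
gold = ["a", "b", "c", "d"]
comp = ["a", "c", "d"]  # "b" was dropped

paired = sum(1 for g, c in zip(gold, comp) if g != c)
print(paired)  # 2 mismatches ("b" vs "c", then "c" vs "d")

gi = ci = offset = 0
while gi < len(gold) and ci < len(comp):
    if gold[gi] != comp[ci]:
        offset += 1
        gi += 1  # gold is longer: skip its extra line only
    else:
        gi += 1
        ci += 1
print(offset)  # 1 mismatch: just the missing line
```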
+[docs]
+def compare_runconfigfiles(gold_file, compare_file, case=None):
+    ###############################################################################
+    """
+    Returns true if files are the same, comments are returned too:
+    (success, comments)
+    """
+    expect(os.path.exists(gold_file), "File not found: {}".format(gold_file))
+    expect(os.path.exists(compare_file), "File not found: {}".format(compare_file))
+
+    # create dictionaries of the runconfig files and compare them
+    gold_dict = _parse_runconfig(gold_file)
+    compare_dict = _parse_runconfig(compare_file)
+
+    comments = findDiff(gold_dict, compare_dict, case=case)
+    comments = comments.replace(" d1", " " + gold_file)
+    comments = comments.replace(" d2", " " + compare_file)
+    # this picks up the case that an entry in compare is not in gold
+    if comments == "":
+        comments = findDiff(compare_dict, gold_dict, case=case)
+        comments = comments.replace(" d2", " " + gold_file)
+        comments = comments.replace(" d1", " " + compare_file)
+
+    return comments == "", comments
+ + + +def _parse_runconfig(filename): + runconfig = {} + inrunseq = False + insubsection = None + subsection_re = re.compile(r"\s*(\S+)::") + group_re = re.compile(r"\s*(\S+)\s*:\s*(\S+)") + var_re = re.compile(r"\s*(\S+)\s*=\s*(\S+)") + with open(filename, "r") as fd: + for line in fd: + # remove comments + line = line.split("#")[0] + subsection_match = subsection_re.match(line) + group_match = group_re.match(line) + var_match = var_re.match(line) + if re.match(r"\s*runSeq\s*::", line): + runconfig["runSeq"] = [] + inrunseq = True + elif re.match(r"\s*::\s*", line): + inrunseq = False + elif inrunseq: + runconfig["runSeq"].append(line) + elif subsection_match: + insubsection = subsection_match.group(1) + runconfig[insubsection] = {} + elif group_match: + runconfig[group_match.group(1)] = group_match.group(2) + elif insubsection and var_match: + runconfig[insubsection][var_match.group(1)] = var_match.group(2) + return runconfig + + +
+[docs] +def findDiff(d1, d2, path="", case=None): + comment = "" + for k in d1.keys(): + if not k in d2: + comment += path + ":\n" + comment += k + " as key not in d2\n" + else: + if type(d1[k]) is dict: + if path == "": + path = k + else: + path = path + "->" + k + comment += findDiff(d1[k], d2[k], path=path, case=case) + else: + if case in d1[k]: + pass + elif "username" in k: + pass + elif "logfile" in k: + pass + elif d1[k] != d2[k]: + comment += path + ":\n" + comment += " - {} : {}\n".format(k, d1[k]) + comment += " + {} : {}\n".format(k, d2[k]) + return comment
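A trimmed-down re-statement of the recursive diff shows the shape of its output. This sketch drops the `case`/`username`/`logfile` exclusions and rebuilds `path` per key rather than carrying it across sibling iterations.

```python
def find_diff(d1, d2, path=""):
    # Report keys of d1 missing from d2, recurse into nested dicts, and
    # emit -/+ pairs for leaf values that differ (simplified sketch of
    # findDiff above).
    comment = ""
    for k in d1:
        subpath = "{}->{}".format(path, k) if path else k
        if k not in d2:
            comment += "{}:\n{} as key not in d2\n".format(path, k)
        elif isinstance(d1[k], dict):
            comment += find_diff(d1[k], d2[k], path=subpath)
        elif d1[k] != d2[k]:
            comment += "{}:\n - {} : {}\n + {} : {}\n".format(
                path, k, d1[k], k, d2[k]
            )
    return comment


gold = {"ATM_model": "datm", "DRIVER_attributes": {"Verbosity": "off"}}
other = {"ATM_model": "satm", "DRIVER_attributes": {"Verbosity": "high"}}
print(find_diff(gold, other))
```

As in `compare_runconfigfiles`, a one-directional diff misses keys present only in the second dict, which is why the caller above runs `findDiff` a second time with the arguments swapped.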
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/test_scheduler.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/test_scheduler.html new file mode 100644 index 00000000000..a48aa4ae1c7 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/test_scheduler.html @@ -0,0 +1,1630 @@ + + + + + + CIME.test_scheduler — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.test_scheduler

+"""
+A library for scheduling/running through the phases of a set
+of system tests. Supports phase-level parallelism (can make progress
+on multiple system tests at once).
+
+TestScheduler will handle the TestStatus for the 1-time setup
+phases. All other phases need to handle their own status because
+they can be run outside the context of TestScheduler.
+"""
+
+import os
+import traceback, stat, threading, time, glob
+from collections import OrderedDict
+
+from CIME.XML.standard_module_setup import *
+from CIME.get_tests import get_recommended_test_time, get_build_groups, is_perf_test
+from CIME.utils import (
+    append_status,
+    append_testlog,
+    TESTS_FAILED_ERR_CODE,
+    parse_test_name,
+    get_full_test_name,
+    get_model,
+    convert_to_seconds,
+    get_cime_root,
+    get_src_root,
+    get_tools_path,
+    get_template_path,
+    get_project,
+    get_timestamp,
+    get_cime_default_driver,
+    clear_folder,
+)
+from CIME.config import Config
+from CIME.test_status import *
+from CIME.XML.machines import Machines
+from CIME.XML.generic_xml import GenericXML
+from CIME.XML.env_test import EnvTest
+from CIME.XML.env_mach_pes import EnvMachPes
+from CIME.XML.files import Files
+from CIME.XML.component import Component
+from CIME.XML.tests import Tests
+from CIME.case import Case
+from CIME.wait_for_tests import wait_for_tests
+from CIME.provenance import get_recommended_test_time_based_on_past
+from CIME.locked_files import lock_file
+from CIME.cs_status_creator import create_cs_status
+from CIME.hist_utils import generate_teststatus
+from CIME.build import post_build
+
+logger = logging.getLogger(__name__)
+
+# Phases managed by TestScheduler
+TEST_START = "INIT"  # Special pseudo-phase just for test_scheduler bookkeeping
+PHASES = [
+    TEST_START,
+    CREATE_NEWCASE_PHASE,
+    XML_PHASE,
+    SETUP_PHASE,
+    SHAREDLIB_BUILD_PHASE,
+    MODEL_BUILD_PHASE,
+    RUN_PHASE,
+]  # Order matters
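The "Order matters" comment is load-bearing: later in the class, `_get_test_status` assumes any phase after the current one is pending and any earlier phase passed. A minimal sketch of that inference, using hypothetical phase and status names in place of the real constants from `CIME.test_status`:

```python
# Hypothetical stand-ins for the PHASES list and status constants.
ORDERED_PHASES = [
    "INIT", "CREATE_NEWCASE", "XML", "SETUP",
    "SHAREDLIB_BUILD", "MODEL_BUILD", "RUN",
]


def infer_status(current_phase, current_status, query_phase):
    # Mirrors the _get_test_status logic: the queried phase's status is
    # derived purely from its position relative to the current phase.
    if query_phase == current_phase:
        return current_status
    if ORDERED_PHASES.index(query_phase) > ORDERED_PHASES.index(current_phase):
        return "PEND"  # future phases have not run yet
    return "PASS"  # earlier phases must have passed to get here


print(infer_status("SETUP", "PASS", "RUN"))  # PEND
```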
+
+###############################################################################
+def _translate_test_names_for_new_pecount(test_names, force_procs, force_threads):
+    ###############################################################################
+    new_test_names = []
+    caseopts = []
+    for test_name in test_names:
+        (
+            testcase,
+            caseopts,
+            grid,
+            compset,
+            machine,
+            compiler,
+            testmods,
+        ) = parse_test_name(test_name)
+        rewrote_caseopt = False
+        if caseopts is not None:
+            for idx, caseopt in enumerate(caseopts):
+                if caseopt.startswith("P"):
+                    caseopt = caseopt[1:]
+                    if "x" in caseopt:
+                        old_procs, old_thrds = caseopt.split("x")
+                    else:
+                        old_procs, old_thrds = caseopt, None
+
+                    new_procs = force_procs if force_procs is not None else old_procs
+                    new_thrds = (
+                        force_threads if force_threads is not None else old_thrds
+                    )
+
+                    newcaseopt = (
+                        ("P{}".format(new_procs))
+                        if new_thrds is None
+                        else ("P{}x{}".format(new_procs, new_thrds))
+                    )
+                    caseopts[idx] = newcaseopt
+
+                    rewrote_caseopt = True
+                    break
+
+        if not rewrote_caseopt:
+            force_procs = "M" if force_procs is None else force_procs
+            newcaseopt = (
+                ("P{}".format(force_procs))
+                if force_threads is None
+                else ("P{}x{}".format(force_procs, force_threads))
+            )
+            if caseopts is None:
+                caseopts = [newcaseopt]
+            else:
+                caseopts.append(newcaseopt)
+
+        new_test_name = get_full_test_name(
+            testcase,
+            caseopts=caseopts,
+            grid=grid,
+            compset=compset,
+            machine=machine,
+            compiler=compiler,
+            testmods_list=testmods,
+        )
+        new_test_names.append(new_test_name)
+
+    return new_test_names
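The caseopt rewrite at the heart of the function above can be sketched in isolation. `rewrite_pecount` is a hypothetical helper that handles a single `P`-style option; the real function additionally re-parses and reassembles the full test name.

```python
def rewrite_pecount(caseopt, force_procs=None, force_threads=None):
    # A "P8x2"-style option encodes procs ("8") and threads ("2");
    # either part may be overridden before the option is reassembled.
    assert caseopt.startswith("P")
    body = caseopt[1:]
    old_procs, old_thrds = body.split("x") if "x" in body else (body, None)
    new_procs = force_procs if force_procs is not None else old_procs
    new_thrds = force_threads if force_threads is not None else old_thrds
    if new_thrds is None:
        return "P{}".format(new_procs)
    return "P{}x{}".format(new_procs, new_thrds)


print(rewrite_pecount("P8x2", force_procs=16))  # P16x2
```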
+
+
+_TIME_CACHE = {}
+###############################################################################
+def _get_time_est(test, baseline_root, as_int=False, use_cache=False, raw=False):
+    ###############################################################################
+    if test in _TIME_CACHE and use_cache:
+        return _TIME_CACHE[test]
+
+    recommended_time = get_recommended_test_time_based_on_past(
+        baseline_root, test, raw=raw
+    )
+
+    if recommended_time is None:
+        recommended_time = get_recommended_test_time(test)
+
+    if as_int:
+        if recommended_time is None:
+            recommended_time = 9999999999
+        else:
+            recommended_time = convert_to_seconds(recommended_time)
+
+    if use_cache:
+        _TIME_CACHE[test] = recommended_time
+
+    return recommended_time
+
+
+###############################################################################
+def _order_tests_by_runtime(tests, baseline_root):
+    ###############################################################################
+    tests.sort(
+        key=lambda x: _get_time_est(
+            x, baseline_root, as_int=True, use_cache=True, raw=True
+        ),
+        reverse=True,
+    )
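The sorting behavior can be sketched without the baseline machinery. This hypothetical `order_by_runtime` takes a plain dict of estimates in place of `_get_time_est`:

```python
def order_by_runtime(tests, estimates):
    # Longest-running tests sort first so the scheduler can start them
    # early. Tests with no estimate (None/missing) get a huge sentinel,
    # mirroring the as_int=True fallback in _get_time_est above.
    tests.sort(key=lambda t: estimates.get(t) or 9999999999, reverse=True)


tests = ["quick", "slow", "unknown"]
order_by_runtime(tests, {"quick": 60, "slow": 3600})
print(tests)  # ['unknown', 'slow', 'quick']
```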
+
+
+###############################################################################
+
+[docs] +class TestScheduler(object): + ############################################################################### + + ########################################################################### + def __init__( + self, + test_names, + test_data=None, + no_run=False, + no_build=False, + no_setup=False, + no_batch=None, + test_root=None, + test_id=None, + machine_name=None, + compiler=None, + baseline_root=None, + baseline_cmp_name=None, + baseline_gen_name=None, + clean=False, + namelists_only=False, + project=None, + parallel_jobs=None, + walltime=None, + proc_pool=None, + use_existing=False, + save_timing=False, + queue=None, + allow_baseline_overwrite=False, + output_root=None, + force_procs=None, + force_threads=None, + mpilib=None, + input_dir=None, + pesfile=None, + run_count=0, + mail_user=None, + mail_type=None, + allow_pnl=False, + non_local=False, + single_exe=False, + workflow=None, + chksum=False, + force_rebuild=False, + ): + ########################################################################### + self._cime_root = get_cime_root() + self._cime_model = get_model() + self._cime_driver = get_cime_default_driver() + self._save_timing = save_timing + self._queue = queue + self._test_data = ( + {} if test_data is None else test_data + ) # Format: {test_name -> {data_name -> data}} + self._mpilib = mpilib # allow override of default mpilib + self._completed_tests = 0 + self._input_dir = input_dir + self._pesfile = pesfile + self._allow_baseline_overwrite = allow_baseline_overwrite + self._single_exe = single_exe + if self._single_exe: + self._allow_pnl = True + else: + self._allow_pnl = allow_pnl + self._non_local = non_local + self._build_groups = [] + self._workflow = workflow + + self._mail_user = mail_user + self._mail_type = mail_type + + self._machobj = Machines(machine=machine_name) + + self._config = Config.instance() + + if self._config.calculate_mode_build_cost: + # Current build system is unlikely to be able to productively use more than 
16 cores + self._model_build_cost = min( + 16, int((self._machobj.get_value("GMAKE_J") * 2) / 3) + 1 + ) + else: + self._model_build_cost = 4 + + # If user is forcing procs or threads, re-write test names to reflect this. + if force_procs or force_threads: + test_names = _translate_test_names_for_new_pecount( + test_names, force_procs, force_threads + ) + + self._no_setup = no_setup + self._no_build = no_build or no_setup or namelists_only + self._no_run = no_run or self._no_build + self._output_root = output_root + # Figure out what project to use + if project is None: + self._project = get_project(machobj=self._machobj) + else: + self._project = project + + # We will not use batch system if user asked for no_batch or if current + # machine is not a batch machine + self._no_batch = no_batch or not self._machobj.has_batch_system() + expect( + not (self._no_batch and self._queue is not None), + "Does not make sense to request a queue without batch system", + ) + + # Determine and resolve test_root + if test_root is not None: + self._test_root = test_root + elif self._output_root is not None: + self._test_root = self._output_root + else: + self._test_root = self._machobj.get_value("CIME_OUTPUT_ROOT") + + if self._project is not None: + self._test_root = self._test_root.replace("$PROJECT", self._project) + + self._test_root = os.path.abspath(self._test_root) + self._test_id = test_id if test_id is not None else get_timestamp() + + self._compiler = ( + self._machobj.get_default_compiler() if compiler is None else compiler + ) + + self._clean = clean + + self._namelists_only = namelists_only + + self._walltime = walltime + + if parallel_jobs is None: + mach_parallel_jobs = self._machobj.get_value("NTEST_PARALLEL_JOBS") + if mach_parallel_jobs is None: + mach_parallel_jobs = self._machobj.get_value("MAX_MPITASKS_PER_NODE") + self._parallel_jobs = min(len(test_names), mach_parallel_jobs) + else: + self._parallel_jobs = parallel_jobs + + logger.info( + "create_test will do 
up to {} tasks simultaneously".format( + self._parallel_jobs + ) + ) + + self._baseline_cmp_name = ( + baseline_cmp_name # Implies comparison should be done if not None + ) + self._baseline_gen_name = ( + baseline_gen_name # Implies generation should be done if not None + ) + + # Compute baseline_root. Need to set some properties on machobj in order for + # the baseline_root to resolve correctly. + self._machobj.set_value("COMPILER", self._compiler) + self._machobj.set_value("PROJECT", self._project) + self._baseline_root = ( + os.path.abspath(baseline_root) + if baseline_root is not None + else self._machobj.get_value("BASELINE_ROOT") + ) + + if baseline_cmp_name or baseline_gen_name: + if self._baseline_cmp_name: + full_baseline_dir = os.path.join( + self._baseline_root, self._baseline_cmp_name + ) + expect( + os.path.isdir(full_baseline_dir), + "Missing baseline comparison directory {}".format( + full_baseline_dir + ), + ) + + # the following is to assure that the existing generate directory is not overwritten + if self._baseline_gen_name: + full_baseline_dir = os.path.join( + self._baseline_root, self._baseline_gen_name + ) + existing_baselines = [] + for test_name in test_names: + test_baseline = os.path.join(full_baseline_dir, test_name) + if os.path.isdir(test_baseline): + existing_baselines.append(test_baseline) + if allow_baseline_overwrite and run_count == 0: + if self._namelists_only: + clear_folder(os.path.join(test_baseline, "CaseDocs")) + else: + clear_folder(test_baseline) + expect( + allow_baseline_overwrite or len(existing_baselines) == 0, + "Baseline directories already exist {}\n" + "Use -o to avoid this error".format(existing_baselines), + ) + + if self._config.sort_tests: + _order_tests_by_runtime(test_names, self._baseline_root) + + # This is the only data that multiple threads will simultaneously access + # Each test has its own value and setting/retrieving items from a dict + # is atomic, so this should be fine to use without mutex. 
+ # name -> (phase, status) + self._tests = OrderedDict() + for test_name in test_names: + self._tests[test_name] = (TEST_START, TEST_PASS_STATUS) + + # Oversubscribe by 1/4 + if proc_pool is None: + pes = int(self._machobj.get_value("MAX_TASKS_PER_NODE")) + self._proc_pool = int(pes * 1.25) + else: + self._proc_pool = int(proc_pool) + + logger.info( + "create_test will use up to {} cores simultaneously".format(self._proc_pool) + ) + + self._procs_avail = self._proc_pool + + # Setup phases + self._phases = list(PHASES) + if self._no_setup: + self._phases.remove(SETUP_PHASE) + if self._no_build: + self._phases.remove(SHAREDLIB_BUILD_PHASE) + self._phases.remove(MODEL_BUILD_PHASE) + if self._no_run: + self._phases.remove(RUN_PHASE) + + if use_existing: + for test in self._tests: + with TestStatus(self._get_test_dir(test)) as ts: + if force_rebuild: + ts.set_status(SHAREDLIB_BUILD_PHASE, TEST_PEND_STATUS) + + for phase, status in ts: + if phase in CORE_PHASES: + if status in [TEST_PEND_STATUS, TEST_FAIL_STATUS]: + if status == TEST_FAIL_STATUS: + # Import for potential subsequent waits + ts.set_status( + phase, TEST_PEND_STATUS, TEST_RERUN_COMMENT + ) + + # We need to pick up here + break + + else: + if phase != SUBMIT_PHASE: + # Somewhat subtle. Create_test considers submit/run to be the run phase, + # so don't try to update test status for a passed submit phase + self._update_test_status( + test, phase, TEST_PEND_STATUS + ) + self._update_test_status(test, phase, status) + + if phase == RUN_PHASE: + logger.info( + "Test {} passed and will not be re-run".format( + test + ) + ) + + logger.info( + "Using existing test directory {}".format(self._get_test_dir(test)) + ) + else: + # None of the test directories should already exist. + for test in self._tests: + expect( + not os.path.exists(self._get_test_dir(test)), + "Cannot create new case in directory '{}', it already exists." 
+ " Pick a different test-id".format(self._get_test_dir(test)), + ) + logger.info( + "Creating test directory {}".format(self._get_test_dir(test)) + ) + + # Setup build groups + if single_exe: + self._build_groups = [tuple(self._tests.keys())] + elif self._config.share_exes: + # Any test that's in a shared-enabled suite with other tests should share exes + self._build_groups = get_build_groups(self._tests) + else: + self._build_groups = [(item,) for item in self._tests] + + # Build group to exeroot map + self._build_group_exeroots = {} + for build_group in self._build_groups: + self._build_group_exeroots[build_group] = None + + logger.debug("Build groups are:") + for build_group in self._build_groups: + for test_name in build_group: + logger.debug( + "{}{}".format( + " " if test_name == build_group[0] else " ", test_name + ) + ) + + self._chksum = chksum + # By the end of this constructor, this program should never hard abort, + # instead, errors will be placed in the TestStatus files for the various + # tests cases + + ########################################################################### +
+[docs] + def get_testnames(self): + ########################################################################### + return list(self._tests.keys())
+ + + ########################################################################### + def _log_output(self, test, output): + ########################################################################### + test_dir = self._get_test_dir(test) + if not os.path.isdir(test_dir): + # Note: making this directory could cause create_newcase to fail + # if this is run before. + os.makedirs(test_dir) + append_testlog(output, caseroot=test_dir) + + ########################################################################### + def _get_case_id(self, test): + ########################################################################### + baseline_action_code = "" + if self._baseline_gen_name: + baseline_action_code += "G" + if self._baseline_cmp_name: + baseline_action_code += "C" + if len(baseline_action_code) > 0: + return "{}.{}.{}".format(test, baseline_action_code, self._test_id) + else: + return "{}.{}".format(test, self._test_id) + + ########################################################################### + def _get_test_dir(self, test): + ########################################################################### + return os.path.join(self._test_root, self._get_case_id(test)) + + ########################################################################### + def _get_test_data(self, test): + ########################################################################### + # Must be atomic + return self._tests[test] + + ########################################################################### + def _is_broken(self, test): + ########################################################################### + status = self._get_test_status(test) + return status != TEST_PASS_STATUS and status != TEST_PEND_STATUS + + ########################################################################### + def _work_remains(self, test): + ########################################################################### + test_phase, test_status = self._get_test_data(test) + return ( + test_status == 
TEST_PASS_STATUS or test_status == TEST_PEND_STATUS + ) and test_phase != self._phases[-1] + + ########################################################################### + def _get_test_status(self, test, phase=None): + ########################################################################### + curr_phase, curr_status = self._get_test_data(test) + if phase is None or phase == curr_phase: + return curr_status + else: + # Assume all future phases are PEND + if phase is not None and self._phases.index(phase) > self._phases.index( + curr_phase + ): + return TEST_PEND_STATUS + + # Assume all older phases PASSed + return TEST_PASS_STATUS + + ########################################################################### + def _get_test_phase(self, test): + ########################################################################### + return self._get_test_data(test)[0] + + ########################################################################### + def _update_test_status(self, test, phase, status): + ########################################################################### + phase_idx = self._phases.index(phase) + old_phase, old_status = self._get_test_data(test) + + if old_phase == phase: + expect( + old_status == TEST_PEND_STATUS, + "Only valid to transition from PEND to something else, found '{}' for phase '{}'".format( + old_status, phase + ), + ) + expect(status != TEST_PEND_STATUS, "Cannot transition from PEND -> PEND") + else: + expect( + old_status == TEST_PASS_STATUS, + "Why did we move on to next phase when prior phase did not pass?", + ) + expect( + status == TEST_PEND_STATUS, "New phase should be set to pending status" + ) + expect( + self._phases.index(old_phase) == phase_idx - 1, + "Skipped phase? 
{} {}".format(old_phase, phase_idx), + ) + + # Must be atomic + self._tests[test] = (phase, status) + + ########################################################################### + def _shell_cmd_for_phase(self, test, cmd, phase, from_dir=None): + ########################################################################### + env = os.environ.copy() + env["PYTHONPATH"] = f"{get_cime_root()}:{get_tools_path()}" + + while True: + rc, output, errput = run_cmd(cmd, from_dir=from_dir, env=env) + if rc != 0: + self._log_output( + test, + "{} FAILED for test '{}'.\nCommand: {}\nOutput: {}\n".format( + phase, test, cmd, output + "\n" + errput + ), + ) + # Temporary hack to get around odd file descriptor use by + # buildnml scripts. + if "bad interpreter" in output: + time.sleep(1) + continue + else: + return False, errput + else: + # We don't want "RUN PASSED" in the TestStatus.log if the only thing that + # succeeded was the submission. + phase = "SUBMIT" if phase == RUN_PHASE else phase + self._log_output( + test, + "{} PASSED for test '{}'.\nCommand: {}\nOutput: {}\n".format( + phase, test, cmd, output + "\n" + errput + ), + ) + return True, errput + + ########################################################################### + def _create_newcase_phase(self, test): + ########################################################################### + test_dir = self._get_test_dir(test) + + _, case_opts, grid, compset, machine, compiler, test_mods = parse_test_name( + test + ) + + os.environ["FROM_CREATE_TEST"] = "True" + create_newcase_cmd = "{} {} --case {} --res {} --compset {} --test".format( + sys.executable, + os.path.join(self._cime_root, "CIME", "scripts", "create_newcase.py"), + test_dir, + grid, + compset, + ) + + if machine is not None: + create_newcase_cmd += " --machine {}".format(machine) + if compiler is not None: + create_newcase_cmd += " --compiler {}".format(compiler) + if self._project is not None: + create_newcase_cmd += " --project {} 
".format(self._project) + if self._output_root is not None: + create_newcase_cmd += " --output-root {} ".format(self._output_root) + if self._input_dir is not None: + create_newcase_cmd += " --input-dir {} ".format(self._input_dir) + if self._non_local: + create_newcase_cmd += " --non-local" + if self._workflow: + create_newcase_cmd += " --workflow {}".format(self._workflow) + if self._pesfile is not None: + create_newcase_cmd += " --pesfile {} ".format(self._pesfile) + + create_newcase_cmd += f" --srcroot {get_src_root()}" + + mpilib = None + ninst = 1 + ncpl = 1 + if case_opts is not None: + for case_opt in case_opts: # pylint: disable=not-an-iterable + if case_opt.startswith("M"): + mpilib = case_opt[1:] + create_newcase_cmd += " --mpilib {}".format(mpilib) + logger.debug(" MPILIB set to {}".format(mpilib)) + elif case_opt.startswith("N"): + expect(ncpl == 1, "Cannot combine _C and _N options") + ninst = case_opt[1:] + create_newcase_cmd += " --ninst {}".format(ninst) + logger.debug(" NINST set to {}".format(ninst)) + elif case_opt.startswith("C"): + expect(ninst == 1, "Cannot combine _C and _N options") + ncpl = case_opt[1:] + create_newcase_cmd += " --ninst {} --multi-driver".format(ncpl) + logger.debug(" NCPL set to {}".format(ncpl)) + elif case_opt.startswith("P"): + pesize = case_opt[1:] + create_newcase_cmd += " --pecount {}".format(pesize) + elif case_opt.startswith("G"): + if "-" in case_opt: + ngpus_per_node, gpu_type, gpu_offload = case_opt[1:].split("-") + else: + error = "GPU test argument format is ngpus_per_node-gpu_type-gpu_offload" + self._log_output(test, error) + return False, error + create_newcase_cmd += ( + " --ngpus-per-node {} --gpu-type {} --gpu-offload {}".format( + ngpus_per_node, gpu_type, gpu_offload + ) + ) + elif case_opt.startswith("V"): + self._cime_driver = case_opt[1:] + create_newcase_cmd += " --driver {}".format(self._cime_driver) + + if ( + "--ninst" in create_newcase_cmd + and not "--multi-driver" in create_newcase_cmd + ): 
+ if "--driver nuopc" in create_newcase_cmd or ( + "--driver" not in create_newcase_cmd and self._cime_driver == "nuopc" + ): + expect(False, "_N option not supported by nuopc driver, use _C instead") + + if test_mods is not None: + create_newcase_cmd += " --user-mods-dir " + + for one_test_mod in test_mods: # pylint: disable=not-an-iterable + if one_test_mod.find("/") != -1: + (component, modspath) = one_test_mod.split("/", 1) + else: + error = "Missing testmod component. Testmods are specified as '${component}-${testmod}'" + self._log_output(test, error) + return False, error + + files = Files(comp_interface=self._cime_driver) + testmods_dir = files.get_value( + "TESTS_MODS_DIR", {"component": component} + ) + test_mod_file = os.path.join(testmods_dir, component, modspath) + # if no testmod is found check if a usermod of the same name exists and + # use it if it does. + if not os.path.exists(test_mod_file): + usermods_dir = files.get_value( + "USER_MODS_DIR", {"component": component} + ) + test_mod_file = os.path.join(usermods_dir, modspath) + if not os.path.exists(test_mod_file): + error = "Missing testmod file '{}', checked {} and {}".format( + modspath, testmods_dir, usermods_dir + ) + self._log_output(test, error) + return False, error + + create_newcase_cmd += "{} ".format(test_mod_file) + + # create_test mpilib option overrides default but not explicitly set case_opt mpilib + if mpilib is None and self._mpilib is not None: + create_newcase_cmd += " --mpilib {}".format(self._mpilib) + logger.debug(" MPILIB set to {}".format(self._mpilib)) + + if self._queue is not None: + create_newcase_cmd += " --queue={}".format(self._queue) + else: + # We need to hard code the queue for this test on cheyenne + # otherwise it runs in share and fails intermittently + test_case = parse_test_name(test)[0] + if test_case == "NODEFAIL": + machine = ( + machine if machine is not None else self._machobj.get_machine_name() + ) + if machine == "cheyenne": + create_newcase_cmd += " 
--queue=regular" + + if self._walltime is not None: + create_newcase_cmd += " --walltime {}".format(self._walltime) + else: + # model specific ways of setting time + if self._config.sort_tests: + recommended_time = _get_time_est(test, self._baseline_root) + + if recommended_time is not None: + create_newcase_cmd += " --walltime {}".format(recommended_time) + + else: + if ( + test in self._test_data + and "options" in self._test_data[test] + and "wallclock" in self._test_data[test]["options"] + ): + create_newcase_cmd += " --walltime {}".format( + self._test_data[test]["options"]["wallclock"] + ) + if ( + test in self._test_data + and "options" in self._test_data[test] + and "workflow" in self._test_data[test]["options"] + ): + create_newcase_cmd += " --workflow {}".format( + self._test_data[test]["options"]["workflow"] + ) + + logger.debug("Calling create_newcase: " + create_newcase_cmd) + return self._shell_cmd_for_phase(test, create_newcase_cmd, CREATE_NEWCASE_PHASE) + + ########################################################################### + def _xml_phase(self, test): + ########################################################################### + test_case, case_opts, _, _, _, compiler, _ = parse_test_name(test) + + # Create, fill and write an envtest object + test_dir = self._get_test_dir(test) + envtest = EnvTest(test_dir) + + # Determine list of component classes that this coupler/driver knows how + # to deal with. This list follows the same order as compset longnames follow. 
+ files = Files(comp_interface=self._cime_driver) + ufs_driver = os.environ.get("UFS_DRIVER") + attribute = None + if ufs_driver: + attribute = {"component": ufs_driver} + + drv_config_file = files.get_value("CONFIG_CPL_FILE", attribute=attribute) + + if self._cime_driver == "nuopc" and not os.path.exists(drv_config_file): + drv_config_file = files.get_value("CONFIG_CPL_FILE", {"component": "cpl"}) + expect( + os.path.exists(drv_config_file), + "File {} not found, cime driver {}".format( + drv_config_file, self._cime_driver + ), + ) + + drv_comp = Component(drv_config_file, "CPL") + + envtest.add_elements_by_group(files, {}, "env_test.xml") + envtest.add_elements_by_group(drv_comp, {}, "env_test.xml") + envtest.set_value("TESTCASE", test_case) + envtest.set_value("TEST_TESTID", self._test_id) + envtest.set_value("CASEBASEID", test) + memleak_tolerance = self._machobj.get_value( + "TEST_MEMLEAK_TOLERANCE", resolved=False + ) + if ( + test in self._test_data + and "options" in self._test_data[test] + and "memleak_tolerance" in self._test_data[test]["options"] + ): + memleak_tolerance = self._test_data[test]["options"]["memleak_tolerance"] + + envtest.set_value( + "TEST_MEMLEAK_TOLERANCE", + 0.10 if memleak_tolerance is None else memleak_tolerance, + ) + + test_argv = "-testname {} -testroot {}".format(test, self._test_root) + if self._baseline_gen_name: + test_argv += " -generate {}".format(self._baseline_gen_name) + basegen_case_fullpath = os.path.join( + self._baseline_root, self._baseline_gen_name, test + ) + logger.debug("basegen_case is {}".format(basegen_case_fullpath)) + envtest.set_value("BASELINE_NAME_GEN", self._baseline_gen_name) + envtest.set_value( + "BASEGEN_CASE", os.path.join(self._baseline_gen_name, test) + ) + if self._baseline_cmp_name: + test_argv += " -compare {}".format(self._baseline_cmp_name) + envtest.set_value("BASELINE_NAME_CMP", self._baseline_cmp_name) + envtest.set_value( + "BASECMP_CASE", os.path.join(self._baseline_cmp_name, test) + ) 
+ + envtest.set_value("TEST_ARGV", test_argv) + envtest.set_value("CLEANUP", self._clean) + + envtest.set_value("BASELINE_ROOT", self._baseline_root) + envtest.set_value("GENERATE_BASELINE", self._baseline_gen_name is not None) + envtest.set_value("COMPARE_BASELINE", self._baseline_cmp_name is not None) + envtest.set_value( + "CCSM_CPRNC", self._machobj.get_value("CCSM_CPRNC", resolved=False) + ) + tput_tolerance = self._machobj.get_value("TEST_TPUT_TOLERANCE", resolved=False) + if ( + test in self._test_data + and "options" in self._test_data[test] + and "tput_tolerance" in self._test_data[test]["options"] + ): + tput_tolerance = self._test_data[test]["options"]["tput_tolerance"] + + envtest.set_value( + "TEST_TPUT_TOLERANCE", 0.25 if tput_tolerance is None else tput_tolerance + ) + + # Add the test instructions from config_test to env_test in the case + config_test = Tests() + testnode = config_test.get_test_node(test_case) + envtest.add_test(testnode) + + if compiler == "nag": + envtest.set_value("FORCE_BUILD_SMP", "FALSE") + + # Determine case_opts from the test_case + if case_opts is not None: + logger.debug("case_opts are {} ".format(case_opts)) + for opt in case_opts: # pylint: disable=not-an-iterable + + logger.debug("case_opt is {}".format(opt)) + if opt == "D": + envtest.set_test_parameter("DEBUG", "TRUE") + logger.debug(" DEBUG set to TRUE") + + elif opt == "E": + envtest.set_test_parameter("USE_ESMF_LIB", "TRUE") + logger.debug(" USE_ESMF_LIB set to TRUE") + + elif opt == "CG": + envtest.set_test_parameter("CALENDAR", "GREGORIAN") + logger.debug(" CALENDAR set to {}".format(opt)) + + elif opt.startswith("L"): + match = re.match("L([A-Za-z])([0-9]*)", opt) + stop_option = { + "y": "nyears", + "m": "nmonths", + "d": "ndays", + "h": "nhours", + "s": "nseconds", + "n": "nsteps", + } + opt = match.group(1) + envtest.set_test_parameter("STOP_OPTION", stop_option[opt]) + opti = match.group(2) + envtest.set_test_parameter("STOP_N", opti) + + logger.debug(" 
STOP_OPTION set to {}".format(stop_option[opt])) + logger.debug(" STOP_N set to {}".format(opti)) + + elif opt.startswith("R"): + # R option is for testing in PTS_MODE or Single Column Model + # (SCM) mode + envtest.set_test_parameter("PTS_MODE", "TRUE") + + # For PTS_MODE, set all tasks and threads to 1 + comps = ["ATM", "LND", "ICE", "OCN", "CPL", "GLC", "ROF", "WAV"] + + for comp in comps: + envtest.set_test_parameter("NTASKS_" + comp, "1") + envtest.set_test_parameter("NTHRDS_" + comp, "1") + envtest.set_test_parameter("ROOTPE_" + comp, "0") + envtest.set_test_parameter("PIO_TYPENAME", "netcdf") + + elif opt.startswith("A"): + # A option is for testing in ASYNC IO mode, only available with nuopc driver and pio2 + envtest.set_test_parameter("PIO_ASYNC_INTERFACE", "TRUE") + envtest.set_test_parameter("CIME_DRIVER", "nuopc") + envtest.set_test_parameter("PIO_VERSION", "2") + match = re.match("A([0-9]+)x?([0-9])*", opt) + envtest.set_test_parameter("PIO_NUMTASKS_CPL", match.group(1)) + if match.group(2): + envtest.set_test_parameter("PIO_STRIDE_CPL", match.group(2)) + + elif ( + opt.startswith("I") + or opt.startswith( # Marker to distinguish tests with same name - ignored + "M" + ) + or opt.startswith("P") # handled in create_newcase + or opt.startswith("N") # handled in create_newcase + or opt.startswith("C") # handled in create_newcase + or opt.startswith("V") # handled in create_newcase + or opt.startswith("G") # handled in create_newcase + or opt == "B" # handled in create_newcase + ): # handled in run_phase + pass + + elif opt.startswith("IOP"): + logger.warning("IOP test option not yet implemented") + else: + expect(False, "Could not parse option '{}' ".format(opt)) + + envtest.write() + lock_file("env_run.xml", caseroot=test_dir, newname="env_run.orig.xml") + + with Case(test_dir, read_only=False, non_local=self._non_local) as case: + if self._output_root is None: + self._output_root = case.get_value("CIME_OUTPUT_ROOT") + # if we are running a single test 
we don't need sharedlibroot + if len(self._tests) > 1 and self._config.common_sharedlibroot: + case.set_value( + "SHAREDLIBROOT", + os.path.join( + self._output_root, "sharedlibroot.{}".format(self._test_id) + ), + ) + envtest.set_initial_values(case) + case.set_value("TEST", True) + if is_perf_test(test): + case.set_value("SAVE_TIMING", True) + else: + case.set_value("SAVE_TIMING", self._save_timing) + + # handle single-exe here, all cases will use the EXEROOT from + # the first case in the build group + is_first_test, _, my_build_group = self._get_build_group(test) + if is_first_test: + expect( + self._build_group_exeroots[my_build_group] is None, + "Should not already have exeroot", + ) + self._build_group_exeroots[my_build_group] = case.get_value("EXEROOT") + else: + build_group_exeroot = self._build_group_exeroots[my_build_group] + expect(build_group_exeroot is not None, "Should already have exeroot") + case.set_value("EXEROOT", build_group_exeroot) + + # Scale back build parallelism on systems with few cores + if self._model_build_cost > self._proc_pool: + case.set_value("GMAKE_J", self._proc_pool) + self._model_build_cost = self._proc_pool + + return True, "" + + ########################################################################### + def _setup_phase(self, test): + ########################################################################### + test_dir = self._get_test_dir(test) + rv = self._shell_cmd_for_phase( + test, "./case.setup", SETUP_PHASE, from_dir=test_dir + ) + + # It's OK for this command to fail with baseline diffs but not catastrophically + if rv[0]: + env = os.environ.copy() + env["PYTHONPATH"] = f"{get_cime_root()}:{get_tools_path()}" + cmdstat, output, _ = run_cmd( + "./case.cmpgen_namelists", + combine_output=True, + from_dir=test_dir, + env=env, + ) + expect( + cmdstat in [0, TESTS_FAILED_ERR_CODE], + "Fatal error in case.cmpgen_namelists: {}".format(output), + ) + + if self._single_exe: + with Case(self._get_test_dir(test), 
read_only=False) as case: + tests = Tests() + + try: + tests.support_single_exe(case) + except Exception: + self._update_test_status_file(test, SETUP_PHASE, TEST_FAIL_STATUS) + + raise + + return rv + + ########################################################################### + def _sharedlib_build_phase(self, test): + ########################################################################### + is_first_test, first_test, _ = self._get_build_group(test) + if not is_first_test: + if ( + self._get_test_status(first_test, phase=SHAREDLIB_BUILD_PHASE) + == TEST_PASS_STATUS + ): + return True, "" + else: + return False, "Cannot use build for test {} because it failed".format( + first_test + ) + + test_dir = self._get_test_dir(test) + return self._shell_cmd_for_phase( + test, + "./case.build --sharedlib-only", + SHAREDLIB_BUILD_PHASE, + from_dir=test_dir, + ) + + ########################################################################### + def _get_build_group(self, test): + ########################################################################### + for build_group in self._build_groups: + if test in build_group: + return test == build_group[0], build_group[0], build_group + + expect(False, "No build group for test '{}'".format(test)) + + ########################################################################### + def _model_build_phase(self, test): + ########################################################################### + is_first_test, first_test, _ = self._get_build_group(test) + + test_dir = self._get_test_dir(test) + + if not is_first_test: + if ( + self._get_test_status(first_test, phase=MODEL_BUILD_PHASE) + == TEST_PASS_STATUS + ): + with Case(test_dir, read_only=False) as case: + post_build( + case, [], build_complete=True, save_build_provenance=False + ) + + return True, "" + else: + return False, "Cannot use build for test {} because it failed".format( + first_test + ) + + return self._shell_cmd_for_phase( + test, "./case.build --model-only", 
MODEL_BUILD_PHASE, from_dir=test_dir + ) + + ########################################################################### + def _run_phase(self, test): + ########################################################################### + test_dir = self._get_test_dir(test) + + case_opts = parse_test_name(test)[1] + if ( + case_opts is not None + and "B" in case_opts # pylint: disable=unsupported-membership-test + ): + self._log_output(test, "{} SKIPPED for test '{}'".format(RUN_PHASE, test)) + self._update_test_status_file(test, SUBMIT_PHASE, TEST_PASS_STATUS) + self._update_test_status_file(test, RUN_PHASE, TEST_PASS_STATUS) + + return True, "SKIPPED" + else: + cmd = "./case.submit" + if not self._allow_pnl: + cmd += " --skip-preview-namelist" + if self._no_batch: + cmd += " --no-batch" + if self._mail_user: + cmd += " --mail-user={}".format(self._mail_user) + if self._mail_type: + cmd += " -M={}".format(",".join(self._mail_type)) + if self._chksum: + cmd += " --chksum" + + return self._shell_cmd_for_phase(test, cmd, RUN_PHASE, from_dir=test_dir) + + ########################################################################### + def _run_catch_exceptions(self, test, phase, run): + ########################################################################### + try: + return run(test) + except Exception as e: + exc_tb = sys.exc_info()[2] + errput = "Test '{}' failed in phase '{}' with exception '{}'\n".format( + test, phase, str(e) + ) + errput += "".join(traceback.format_tb(exc_tb)) + self._log_output(test, errput) + return False, errput + + ########################################################################### + def _get_procs_needed(self, test, phase, threads_in_flight=None, no_batch=False): + ########################################################################### + # For build pools, we must wait for the first case to complete XML, SHAREDLIB, + # and MODEL_BUILD phases before the other cases can do those phases + is_first_test, first_test, _ = 
self._get_build_group(test) + + if not is_first_test: + build_group_dep_phases = [ + XML_PHASE, + SHAREDLIB_BUILD_PHASE, + MODEL_BUILD_PHASE, + ] + if phase in build_group_dep_phases: + if self._get_test_status(first_test, phase=phase) == TEST_PEND_STATUS: + return self._proc_pool + 1 + else: + return 1 + + if phase == RUN_PHASE and (self._no_batch or no_batch): + test_dir = self._get_test_dir(test) + total_pes = EnvMachPes(test_dir, read_only=True).get_value("TOTALPES") + return total_pes + + elif phase == SHAREDLIB_BUILD_PHASE: + if self._config.serialize_sharedlib_builds: + # Will force serialization of sharedlib builds + # TODO - instead of serializing, compute all library configs needed and build + # them all in parallel + for _, _, running_phase in threads_in_flight.values(): + if running_phase == SHAREDLIB_BUILD_PHASE: + return self._proc_pool + 1 + + return 1 + elif phase == MODEL_BUILD_PHASE: + # Model builds now happen in parallel + return self._model_build_cost + else: + return 1 + + ########################################################################### + def _wait_for_something_to_finish(self, threads_in_flight): + ########################################################################### + expect(len(threads_in_flight) <= self._parallel_jobs, "Oversubscribed?") + finished_tests = [] + while not finished_tests: + for test, thread_info in threads_in_flight.items(): + if not thread_info[0].is_alive(): + finished_tests.append((test, thread_info[1])) + + if not finished_tests: + time.sleep(0.2) + + for finished_test, procs_needed in finished_tests: + self._procs_avail += procs_needed + del threads_in_flight[finished_test] + + ########################################################################### + def _update_test_status_file(self, test, test_phase, status): + ########################################################################### + """ + In general, test_scheduler should not be responsible for updating + the TestStatus file, but there are a 
few cases where it has to. + """ + test_dir = self._get_test_dir(test) + with TestStatus(test_dir=test_dir, test_name=test) as ts: + ts.set_status(test_phase, status) + + ########################################################################### + def _consumer(self, test, test_phase, phase_method): + ########################################################################### + before_time = time.time() + success, errors = self._run_catch_exceptions(test, test_phase, phase_method) + elapsed_time = time.time() - before_time + status = ( + ( + TEST_PEND_STATUS + if test_phase == RUN_PHASE and not self._no_batch + else TEST_PASS_STATUS + ) + if success + else TEST_FAIL_STATUS + ) + + if status != TEST_PEND_STATUS: + self._update_test_status(test, test_phase, status) + + if not self._work_remains(test): + self._completed_tests += 1 + total = len(self._tests) + status_str = "Finished {} for test {} in {:f} seconds ({}). [COMPLETED {:d} of {:d}]".format( + test_phase, test, elapsed_time, status, self._completed_tests, total + ) + else: + status_str = "Finished {} for test {} in {:f} seconds ({})".format( + test_phase, test, elapsed_time, status + ) + + if not success: + status_str += "\n Case dir: {}\n".format(self._get_test_dir(test)) + status_str += " Errors were:\n {}\n".format( + "\n ".join(errors.splitlines()) + ) + + logger.info(status_str) + + is_first_test = self._get_build_group(test)[0] + + if test_phase in [CREATE_NEWCASE_PHASE, XML_PHASE] or ( + not is_first_test + and test_phase in [SHAREDLIB_BUILD_PHASE, MODEL_BUILD_PHASE] + ): + # These are the phases for which TestScheduler is reponsible for + # updating the TestStatus file + self._update_test_status_file(test, test_phase, status) + + if test_phase == XML_PHASE: + append_status( + "Case Created using: " + " ".join(sys.argv), + "README.case", + caseroot=self._get_test_dir(test), + ) + + # On batch systems, we want to immediately submit to the queue, because + # it's very cheap to submit and will get us a 
better spot in line + if ( + success + and not self._no_run + and not self._no_batch + and test_phase == MODEL_BUILD_PHASE + ): + logger.info( + "Starting {} for test {} with 1 proc on interactive node and {:d} procs on compute nodes".format( + RUN_PHASE, + test, + self._get_procs_needed(test, RUN_PHASE, no_batch=True), + ) + ) + self._update_test_status(test, RUN_PHASE, TEST_PEND_STATUS) + self._consumer(test, RUN_PHASE, self._run_phase) + + ########################################################################### + def _producer(self): + ########################################################################### + threads_in_flight = {} # test-name -> (thread, procs, phase) + while True: + work_to_do = False + num_threads_launched_this_iteration = 0 + for test in self._tests: + logger.debug("test_name: " + test) + + if self._work_remains(test): + work_to_do = True + + # If we have no workers available, immediately break out of loop so we can wait + if len(threads_in_flight) == self._parallel_jobs: + break + + if test not in threads_in_flight: + test_phase, test_status = self._get_test_data(test) + expect(test_status != TEST_PEND_STATUS, test) + next_phase = self._phases[self._phases.index(test_phase) + 1] + procs_needed = self._get_procs_needed( + test, next_phase, threads_in_flight + ) + + if procs_needed <= self._procs_avail: + self._procs_avail -= procs_needed + + # Necessary to print this way when multiple threads printing + logger.info( + "Starting {} for test {} with {:d} procs".format( + next_phase, test, procs_needed + ) + ) + + self._update_test_status(test, next_phase, TEST_PEND_STATUS) + new_thread = threading.Thread( + target=self._consumer, + args=( + test, + next_phase, + getattr( + self, "_{}_phase".format(next_phase.lower()) + ), + ), + ) + threads_in_flight[test] = ( + new_thread, + procs_needed, + next_phase, + ) + new_thread.start() + num_threads_launched_this_iteration += 1 + + logger.debug(" Current workload:") + total_procs = 0 + for 
the_test, the_data in threads_in_flight.items(): + logger.debug( + " {}: {} -> {}".format( + the_test, the_data[2], the_data[1] + ) + ) + total_procs += the_data[1] + + logger.debug( + " Total procs in use: {}".format(total_procs) + ) + else: + if not threads_in_flight: + msg = "Phase '{}' for test '{}' required more processors, {:d}, than this machine can provide, {:d}".format( + next_phase, test, procs_needed, self._procs_avail + ) + logger.warning(msg) + self._update_test_status( + test, next_phase, TEST_PEND_STATUS + ) + self._update_test_status( + test, next_phase, TEST_FAIL_STATUS + ) + self._log_output(test, msg) + if next_phase == RUN_PHASE: + self._update_test_status_file( + test, SUBMIT_PHASE, TEST_PASS_STATUS + ) + self._update_test_status_file( + test, next_phase, TEST_FAIL_STATUS + ) + else: + self._update_test_status_file( + test, next_phase, TEST_FAIL_STATUS + ) + num_threads_launched_this_iteration += 1 + + if not work_to_do: + break + + if num_threads_launched_this_iteration == 0: + # No free resources, wait for something in flight to finish + self._wait_for_something_to_finish(threads_in_flight) + + for unfinished_thread, _, _ in threads_in_flight.values(): + unfinished_thread.join() + + ########################################################################### + def _setup_cs_files(self): + ########################################################################### + try: + template_path = get_template_path() + + create_cs_status(test_root=self._test_root, test_id=self._test_id) + + template_file = os.path.join(template_path, "cs.submit.template") + template = open(template_file, "r").read() + setup_cmd = "./case.setup" if self._no_setup else ":" + build_cmd = "./case.build" if self._no_build else ":" + test_cmd = "./case.submit" + template = ( + template.replace("<SETUP_CMD>", setup_cmd) + .replace("<BUILD_CMD>", build_cmd) + .replace("<RUN_CMD>", test_cmd) + .replace("<TESTID>", self._test_id) + ) + + if self._no_run: + cs_submit_file = 
os.path.join( + self._test_root, "cs.submit.{}".format(self._test_id) + ) + with open(cs_submit_file, "w") as fd: + fd.write(template) + os.chmod( + cs_submit_file, + os.stat(cs_submit_file).st_mode | stat.S_IXUSR | stat.S_IXGRP, + ) + + if self._config.use_testreporter_template: + template_file = os.path.join(template_path, "testreporter.template") + template = open(template_file, "r").read() + template = template.replace("<PATH>", get_tools_path()) + testreporter_file = os.path.join(self._test_root, "testreporter") + with open(testreporter_file, "w") as fd: + fd.write(template) + os.chmod( + testreporter_file, + os.stat(testreporter_file).st_mode | stat.S_IXUSR | stat.S_IXGRP, + ) + + except Exception as e: + logger.warning("FAILED to set up cs files: {}".format(str(e))) + + ########################################################################### +
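The producer loop above launches a test phase only when the processors it needs fit in the remaining pool (`self._procs_avail`), debiting the pool at launch and crediting it back in `_wait_for_something_to_finish`. A minimal standalone sketch of that accounting (illustrative only; `schedule`, `jobs`, and `proc_pool` are hypothetical names, and the real TestScheduler runs phases on worker threads rather than in a single loop):

```python
# Simplified, single-threaded model of the producer's resource accounting.
def schedule(jobs, proc_pool):
    # jobs: list of (name, procs_needed); returns the order phases launch in
    avail = proc_pool
    running, order = [], []
    pending = list(jobs)
    while pending or running:
        launched = False
        for job in list(pending):
            name, need = job
            if need <= avail:
                avail -= need          # debit the pool at launch
                running.append(job)
                pending.remove(job)
                order.append(name)
                launched = True
        if not launched:
            if not running:
                break                   # a job needs more procs than the pool has
            # wait for the oldest running job to finish; credit its procs back
            _, need = running.pop(0)
            avail += need
    return order
```

With a pool of 4, a 3-proc job is deferred until earlier jobs release enough processors, mirroring how `_producer` skips a phase whose `procs_needed` exceeds `self._procs_avail`.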
+[docs] + def run_tests( + self, + wait=False, + check_throughput=False, + check_memory=False, + ignore_namelists=False, + ignore_memleak=False, + ): + ########################################################################### + """ + Main API for this class. + + Return True if all tests passed. + """ + start_time = time.time() + + # Tell user what will be run + logger.info("RUNNING TESTS:") + for test in self._tests: + logger.info(" {}".format(test)) + + # Setup cs files + self._setup_cs_files() + + GenericXML.DISABLE_CACHING = True + self._producer() + GenericXML.DISABLE_CACHING = False + + expect(threading.active_count() == 1, "Leftover threads?") + + config = Config.instance() + + # Copy TestStatus files to baselines for tests that have already failed. + if config.baseline_store_teststatus: + for test in self._tests: + status = self._get_test_data(test)[1] + if ( + status not in [TEST_PASS_STATUS, TEST_PEND_STATUS] + and self._baseline_gen_name + ): + basegen_case_fullpath = os.path.join( + self._baseline_root, self._baseline_gen_name, test + ) + test_dir = self._get_test_dir(test) + generate_teststatus(test_dir, basegen_case_fullpath) + + no_need_to_wait = self._no_run or self._no_batch + if no_need_to_wait: + wait = False + + expect_test_complete = not self._no_run and (self._no_batch or wait) + + logger.info("Waiting for tests to finish") + rv = wait_for_tests( + glob.glob( + os.path.join(self._test_root, "*{}/TestStatus".format(self._test_id)) + ), + no_wait=not wait, + check_throughput=check_throughput, + check_memory=check_memory, + ignore_namelists=ignore_namelists, + ignore_memleak=ignore_memleak, + no_run=self._no_run, + expect_test_complete=expect_test_complete, + ) + + if not no_need_to_wait and not wait: + logger.info( + "Due to presence of batch system, create_test will exit before tests are complete.\n" + "To force create_test to wait for full completion, use --wait" + ) + + logger.info("test-scheduler took {} seconds".format(time.time() - 
start_time)) + + return rv
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/test_status.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/test_status.html new file mode 100644 index 00000000000..67ee81cd3de --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/test_status.html @@ -0,0 +1,761 @@ + + + + + + CIME.test_status — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.test_status

+"""
+Contains the crucial TestStatus class which manages phase-state of a test
+case and ensures that this state is represented by the TestStatus file in
+the case.
+
+TestStatus objects are only modifiable via the set_status method, and this
+is only allowed while the object is being used as a
+context manager. Example:
+
+    with TestStatus(test_dir=caseroot) as ts:
+        ts.set_status(RUN_PHASE, TEST_PASS_STATUS)
+
+This file also contains all of the hardcoded phase information, which includes
+the phase names, phase order, potential phase states, and which phases are
+required (core phases).
+
+Additional important design decisions:
+1) In order to ensure that incomplete tests are always left in a PEND
+   state, updating a core phase to a PASS state will automatically set the next
+   core state to PEND.
+2) If the user repeats a core phase, that invalidates all subsequent state. For
+   example, if a user rebuilds their case, then later phases such as the RUN
+   phase are no longer valid.
+"""
+
+from CIME.XML.standard_module_setup import *
+
+from collections import OrderedDict
+
+import os, itertools
+from CIME import expected_fails
+
+TEST_STATUS_FILENAME = "TestStatus"
+
+# The statuses that a phase can be in
+TEST_PEND_STATUS = "PEND"
+TEST_PASS_STATUS = "PASS"
+TEST_FAIL_STATUS = "FAIL"
+
+ALL_PHASE_STATUSES = [TEST_PEND_STATUS, TEST_PASS_STATUS, TEST_FAIL_STATUS]
+
+# Special statuses that the overall test can be in
+TEST_DIFF_STATUS = "DIFF"  # Implies a failure in the BASELINE phase
+NAMELIST_FAIL_STATUS = "NLFAIL"  # Implies a failure in the NLCOMP phase
+
+# Special strings that can appear in comments, indicating particular types of failures
+TEST_NO_BASELINES_COMMENT = "BFAIL"  # Implies baseline directory is missing in the
+# baseline comparison phase
+TEST_RERUN_COMMENT = "RERUN"  # Added to a PEND status to indicate that the test
+# system has changed this phase to PEND in order to
+# rerun it (e.g., to retry a failed test).
+# The expected and unexpected failure comments aren't used directly in this module, but
+# are included here for symmetry, so other modules can access them from here.
+TEST_EXPECTED_FAILURE_COMMENT = expected_fails.EXPECTED_FAILURE_COMMENT
+TEST_UNEXPECTED_FAILURE_COMMENT_START = expected_fails.UNEXPECTED_FAILURE_COMMENT_START
+
+# The valid phases
+CREATE_NEWCASE_PHASE = "CREATE_NEWCASE"
+XML_PHASE = "XML"
+SETUP_PHASE = "SETUP"
+NAMELIST_PHASE = "NLCOMP"
+SHAREDLIB_BUILD_PHASE = "SHAREDLIB_BUILD"
+MODEL_BUILD_PHASE = "MODEL_BUILD"
+SUBMIT_PHASE = "SUBMIT"
+RUN_PHASE = "RUN"
+THROUGHPUT_PHASE = "TPUTCOMP"
+MEMCOMP_PHASE = "MEMCOMP"
+MEMLEAK_PHASE = "MEMLEAK"
+STARCHIVE_PHASE = "SHORT_TERM_ARCHIVER"
+COMPARE_PHASE = "COMPARE"  # This one is special: the real phase will be COMPARE_$WHAT. It is used for internal test comparisons, and there can be multiple variations of this phase in one test
+BASELINE_PHASE = "BASELINE"
+GENERATE_PHASE = "GENERATE"
+
+ALL_PHASES = [
+    CREATE_NEWCASE_PHASE,
+    XML_PHASE,
+    SETUP_PHASE,
+    NAMELIST_PHASE,
+    SHAREDLIB_BUILD_PHASE,
+    MODEL_BUILD_PHASE,
+    SUBMIT_PHASE,
+    RUN_PHASE,
+    COMPARE_PHASE,
+    BASELINE_PHASE,
+    THROUGHPUT_PHASE,
+    MEMCOMP_PHASE,
+    MEMLEAK_PHASE,
+    STARCHIVE_PHASE,
+    GENERATE_PHASE,
+]
+
+# These are mandatory phases that a test must go through
+CORE_PHASES = [
+    CREATE_NEWCASE_PHASE,
+    XML_PHASE,
+    SETUP_PHASE,
+    SHAREDLIB_BUILD_PHASE,
+    MODEL_BUILD_PHASE,
+    SUBMIT_PHASE,
+    RUN_PHASE,
+]
+
+
+def _test_helper1(file_contents):
+    ts = TestStatus(test_dir="/", test_name="ERS.foo.A")
+    ts._parse_test_status(file_contents)  # pylint: disable=protected-access
+    return ts._phase_statuses  # pylint: disable=protected-access
+
+
+def _test_helper2(
+    file_contents,
+    wait_for_run=False,
+    check_throughput=False,
+    check_memory=False,
+    ignore_namelists=False,
+    no_run=False,
+    no_perm=False,
+):
+    lines = file_contents.splitlines()
+    rv = None
+    perms = [lines] if no_perm else itertools.permutations(lines)
+    for perm in perms:
+        ts = TestStatus(test_dir="/", test_name="ERS.foo.A")
+        ts._parse_test_status("\n".join(perm))  # pylint: disable=protected-access
+        the_status = ts.get_overall_test_status(
+            wait_for_run=wait_for_run,
+            check_throughput=check_throughput,
+            check_memory=check_memory,
+            ignore_namelists=ignore_namelists,
+            no_run=no_run,
+        )
+        if rv is not None and the_status != rv:
+            return "{} != {}".format(rv, the_status)
+        else:
+            rv = the_status
+
+    return rv
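The TestStatus file these helpers feed to the parser is plain text with one `STATUS TESTNAME PHASE [comments...]` entry per line. A minimal standalone sketch of that format (illustrative only; `parse_status_lines` is a hypothetical name that mirrors, but does not replace, `_parse_test_status` below and omits its validation):

```python
def parse_status_lines(contents):
    # {phase -> (status, comments)}; later entries for a phase overwrite earlier ones
    phases = {}
    for line in contents.splitlines():
        tokens = line.split()
        if len(tokens) >= 3:
            status, _test_name, phase = tokens[:3]
            phases[phase] = (status, " ".join(tokens[3:]))
    return phases
```

Anything after the third token is kept verbatim as the comment field, which is how annotations like `Time=42` survive a round trip through the file.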
+
+
+
+[docs] +class TestStatus(object): + def __init__(self, test_dir=None, test_name=None, no_io=False): + """ + Create a TestStatus object + + If test_dir is not specified, it is set to the current working directory + + no_io is intended only for testing, and should be kept False in + production code + """ + test_dir = os.getcwd() if test_dir is None else test_dir + self._filename = os.path.join(test_dir, TEST_STATUS_FILENAME) + self._phase_statuses = OrderedDict() # {name -> (status, comments)} + self._test_name = test_name + self._ok_to_modify = False + self._no_io = no_io + + if os.path.exists(self._filename): + self._parse_test_status_file() + if not os.access(self._filename, os.W_OK): + self._no_io = True + else: + expect( + test_name is not None, + "Must provide test_name if TestStatus file doesn't exist", + ) + + def __enter__(self): + self._ok_to_modify = True + return self + + def __exit__(self, *_): + self._ok_to_modify = False + self.flush() + + def __iter__(self): + for phase, data in self._phase_statuses.items(): + yield phase, data[0] + + def __eq__(self, rhs): + return ( + self._phase_statuses == rhs._phase_statuses + ) # pylint: disable=protected-access + + def __ne__(self, rhs): + return not self.__eq__(rhs) + +
+[docs] + def get_name(self): + return self._test_name
+ + +
+[docs] + def set_status(self, phase, status, comments=""): + """ + Update the status of this test by changing the status of given phase to the + given status. + + >>> with TestStatus(test_dir="/", test_name="ERS.foo.A", no_io=True) as ts: + ... ts.set_status(CREATE_NEWCASE_PHASE, "PASS") + ... ts.set_status(XML_PHASE, "PASS") + ... ts.set_status(SETUP_PHASE, "FAIL") + ... ts.set_status(SETUP_PHASE, "PASS") + ... ts.set_status("{}_base_rest".format(COMPARE_PHASE), "FAIL") + ... ts.set_status(SHAREDLIB_BUILD_PHASE, "PASS", comments='Time=42') + >>> ts._phase_statuses + OrderedDict([('CREATE_NEWCASE', ('PASS', '')), ('XML', ('PASS', '')), ('SETUP', ('PASS', '')), ('SHAREDLIB_BUILD', ('PASS', 'Time=42')), ('COMPARE_base_rest', ('FAIL', '')), ('MODEL_BUILD', ('PEND', ''))]) + + >>> with TestStatus(test_dir="/", test_name="ERS.foo.A", no_io=True) as ts: + ... ts.set_status(CREATE_NEWCASE_PHASE, "PASS") + ... ts.set_status(XML_PHASE, "PASS") + ... ts.set_status(SETUP_PHASE, "FAIL") + ... ts.set_status(SETUP_PHASE, "PASS") + ... ts.set_status(BASELINE_PHASE, "PASS") + ... ts.set_status("{}_base_rest".format(COMPARE_PHASE), "FAIL") + ... ts.set_status(SHAREDLIB_BUILD_PHASE, "PASS", comments='Time=42') + ... ts.set_status(SETUP_PHASE, "PASS") + >>> ts._phase_statuses + OrderedDict([('CREATE_NEWCASE', ('PASS', '')), ('XML', ('PASS', '')), ('SETUP', ('PASS', '')), ('SHAREDLIB_BUILD', ('PEND', ''))]) + + >>> with TestStatus(test_dir="/", test_name="ERS.foo.A", no_io=True) as ts: + ... 
ts.set_status(CREATE_NEWCASE_PHASE, "FAIL") + >>> ts._phase_statuses + OrderedDict([('CREATE_NEWCASE', ('FAIL', ''))]) + """ + expect( + self._ok_to_modify, + "TestStatus not in a modifiable state, use 'with' syntax", + ) + expect( + phase in ALL_PHASES or phase.startswith(COMPARE_PHASE), + "Invalid phase '{}'".format(phase), + ) + expect(status in ALL_PHASE_STATUSES, "Invalid status '{}'".format(status)) + + if phase in CORE_PHASES and phase != CORE_PHASES[0]: + previous_core_phase = CORE_PHASES[CORE_PHASES.index(phase) - 1] + # TODO: enable check below + # expect(previous_core_phase in self._phase_statuses, "Core phase '{}' was skipped".format(previous_core_phase)) + + if previous_core_phase in self._phase_statuses: + expect( + self._phase_statuses[previous_core_phase][0] == TEST_PASS_STATUS, + "Cannot move past core phase '{}', it didn't pass: ".format( + previous_core_phase + ), + ) + + reran_phase = ( + phase in self._phase_statuses + and self._phase_statuses[phase][0] != TEST_PEND_STATUS + and phase in CORE_PHASES + ) + if reran_phase: + # All subsequent phases are invalidated + phase_idx = ALL_PHASES.index(phase) + for subsequent_phase in ALL_PHASES[phase_idx + 1 :]: + if subsequent_phase in self._phase_statuses: + del self._phase_statuses[subsequent_phase] + if subsequent_phase.startswith(COMPARE_PHASE): + for stored_phase in list(self._phase_statuses.keys()): + if stored_phase.startswith(COMPARE_PHASE): + del self._phase_statuses[stored_phase] + + self._phase_statuses[phase] = (status, comments) # Can overwrite old phase info + + if ( + status == TEST_PASS_STATUS + and phase in CORE_PHASES + and phase != CORE_PHASES[-1] + ): + next_core_phase = CORE_PHASES[CORE_PHASES.index(phase) + 1] + self._phase_statuses[next_core_phase] = (TEST_PEND_STATUS, "")
+ + +
+[docs] + def get_status(self, phase): + return self._phase_statuses[phase][0] if phase in self._phase_statuses else None
+ + +
+[docs] + def get_comment(self, phase): + return self._phase_statuses[phase][1] if phase in self._phase_statuses else None
+ + +
+[docs] + def current_is(self, phase, status): + try: + latest = self.get_latest_phase() + except KeyError: + return False + + return latest == phase and self.get_status(phase) == status
+ + +
+[docs] + def get_latest_phase(self): + return list(self._phase_statuses.keys())[-1]
+ + +
+[docs] + def phase_statuses_dump( + self, prefix="", skip_passes=False, skip_phase_list=None, xfails=None + ): + """ + Args: + prefix: string printed at the start of each line + skip_passes: if True, do not output lines that have a PASS status + skip_phase_list: list of phases (from the phases given by + ALL_PHASES) for which we skip output + xfails: object of type ExpectedFails, giving expected failures for this test + """ + if skip_phase_list is None: + skip_phase_list = [] + if xfails is None: + xfails = expected_fails.ExpectedFails() + result = "" + if self._phase_statuses: + for phase, data in self._phase_statuses.items(): + if phase in skip_phase_list: + continue + status, comments = data + xfail_comment = xfails.expected_fails_comment(phase, status) + if skip_passes: + if status == TEST_PASS_STATUS and not xfail_comment: + # Note that we still print the result of a PASSing test if there + # is a comment related to the expected failure status. Typically + # this will indicate that this is an unexpected PASS (and so + # should be removed from the expected fails list). + continue + result += "{}{} {} {}".format(prefix, status, self._test_name, phase) + if comments: + result += " {}".format(comments) + if xfail_comment: + result += " {}".format(xfail_comment) + result += "\n" + + return result
+ + +
+[docs] + def increment_non_pass_counts(self, non_pass_counts): + """ + Increment counts of the number of times given phases did not pass + + non_pass_counts is a dictionary whose keys are phases of + interest and whose values are running counts of the number of + non-passes. This method increments those counts based on results + in the given TestStatus object. + """ + for phase in non_pass_counts: + if phase in self._phase_statuses: + status, _ = self._phase_statuses[phase] + if status != TEST_PASS_STATUS: + non_pass_counts[phase] += 1
+ + +
+[docs] + def flush(self): + if self._phase_statuses and not self._no_io: + with open(self._filename, "w") as fd: + fd.write(self.phase_statuses_dump())
+ + + def _parse_test_status(self, file_contents): + """ + >>> contents = ''' + ... PASS ERS.foo.A CREATE_NEWCASE + ... PASS ERS.foo.A XML + ... FAIL ERS.foo.A SETUP + ... PASS ERS.foo.A COMPARE_base_rest + ... PASS ERS.foo.A SHAREDLIB_BUILD Time=42 + ... ''' + >>> _test_helper1(contents) + OrderedDict([('CREATE_NEWCASE', ('PASS', '')), ('XML', ('PASS', '')), ('SETUP', ('FAIL', '')), ('COMPARE_base_rest', ('PASS', '')), ('SHAREDLIB_BUILD', ('PASS', 'Time=42'))]) + """ + for line in file_contents.splitlines(): + line = line.strip() + tokens = line.split() + if line == "": + pass # skip blank lines + elif len(tokens) >= 3: + status, curr_test_name, phase = tokens[:3] + if self._test_name is None: + self._test_name = curr_test_name + else: + expect( + self._test_name == curr_test_name, + "inconsistent test name in parse_test_status: '{}' != '{}'".format( + self._test_name, curr_test_name + ), + ) + + expect( + status in ALL_PHASE_STATUSES, + "Unexpected status '{}' in parse_test_status for test '{}'".format( + status, self._test_name + ), + ) + expect( + phase in ALL_PHASES or phase.startswith(COMPARE_PHASE), + "phase '{}' not expected in parse_test_status for test '{}'".format( + phase, self._test_name + ), + ) + expect( + phase not in self._phase_statuses, + "Should not have seen multiple instances of phase '{}' for test '{}'".format( + phase, self._test_name + ), + ) + + self._phase_statuses[phase] = (status, " ".join(tokens[3:])) + else: + logging.warning( + "In TestStatus file for test '{}', line '{}' not in expected format".format( + self._test_name, line + ) + ) + + def _parse_test_status_file(self): + with open(self._filename, "r") as fd: + self._parse_test_status(fd.read()) + + def _get_overall_status_based_on_phases( + self, + phases, + wait_for_run=False, + check_throughput=False, + check_memory=False, + ignore_namelists=False, + ignore_memleak=False, + no_run=False, + ): + + rv = TEST_PASS_STATUS + run_phase_found = False + phase_responsible_for_status = 
None + for phase in phases: # ensure correct order of processing phases + if phase in self._phase_statuses: + data = self._phase_statuses[phase] + else: + continue + + status = data[0] + + if ( + phase in CORE_PHASES + and rv in [TEST_PASS_STATUS, NAMELIST_FAIL_STATUS] + and status != TEST_PEND_STATUS + ): + phase_responsible_for_status = phase + + if phase == RUN_PHASE: + run_phase_found = True + + if phase in [SUBMIT_PHASE, RUN_PHASE] and no_run: + break + + if status == TEST_PEND_STATUS and rv in [ + TEST_PASS_STATUS, + NAMELIST_FAIL_STATUS, + ]: + if not no_run: + rv = TEST_PEND_STATUS + phase_responsible_for_status = phase + break + + elif status == TEST_FAIL_STATUS: + if ( + (not check_throughput and phase == THROUGHPUT_PHASE) + or (not check_memory and phase == MEMCOMP_PHASE) + or (ignore_namelists and phase == NAMELIST_PHASE) + or (ignore_memleak and phase == MEMLEAK_PHASE) + ): + continue + + if phase == NAMELIST_PHASE: + if rv == TEST_PASS_STATUS: + rv = NAMELIST_FAIL_STATUS + + elif phase in [BASELINE_PHASE, THROUGHPUT_PHASE, MEMCOMP_PHASE]: + if rv in [NAMELIST_FAIL_STATUS, TEST_PASS_STATUS]: + phase_responsible_for_status = phase + rv = TEST_DIFF_STATUS + else: + pass # a DIFF does not trump a FAIL + + elif phase in CORE_PHASES: + phase_responsible_for_status = phase + return TEST_FAIL_STATUS, phase_responsible_for_status + + else: + phase_responsible_for_status = phase + rv = TEST_FAIL_STATUS + + # The test did not fail but the RUN phase was not found, so if the user requested + # that we wait for the RUN phase, then the test must still be considered pending. + if ( + rv in [TEST_PASS_STATUS, NAMELIST_FAIL_STATUS] + and not run_phase_found + and wait_for_run + ): + phase_responsible_for_status = RUN_PHASE + rv = TEST_PEND_STATUS + + return rv, phase_responsible_for_status + +
+[docs]
+    def get_overall_test_status(
+        self,
+        wait_for_run=False,
+        check_throughput=False,
+        check_memory=False,
+        ignore_namelists=False,
+        ignore_memleak=False,
+        no_run=False,
+    ):
+        r"""
+        Given the current phases and statuses, produce a single result for this test. Preference
+        is given to PEND since we don't want to stop waiting for a test
+        that hasn't finished. Namelist diffs are given the lowest precedence.
+
+        >>> _test_helper2('PASS ERS.foo.A RUN')
+        ('PASS', 'RUN')
+        >>> _test_helper2('PASS ERS.foo.A SHAREDLIB_BUILD\nPEND ERS.foo.A RUN')
+        ('PEND', 'RUN')
+        >>> _test_helper2('FAIL ERS.foo.A MODEL_BUILD\nPEND ERS.foo.A RUN')
+        ('FAIL', 'MODEL_BUILD')
+        >>> _test_helper2('PASS ERS.foo.A MODEL_BUILD\nPASS ERS.foo.A RUN')
+        ('PASS', 'RUN')
+        >>> _test_helper2('PASS ERS.foo.A RUN\nFAIL ERS.foo.A TPUTCOMP')
+        ('PASS', 'RUN')
+        >>> _test_helper2('PASS ERS.foo.A RUN\nFAIL ERS.foo.A TPUTCOMP', check_throughput=True)
+        ('DIFF', 'TPUTCOMP')
+        >>> _test_helper2('PASS ERS.foo.A RUN\nFAIL ERS.foo.A MEMCOMP', check_memory=True)
+        ('DIFF', 'MEMCOMP')
+        >>> _test_helper2('PASS ERS.foo.A MODEL_BUILD\nPASS ERS.foo.A RUN\nFAIL ERS.foo.A NLCOMP')
+        ('NLFAIL', 'RUN')
+        >>> _test_helper2('PASS ERS.foo.A MODEL_BUILD\nPEND ERS.foo.A RUN\nFAIL ERS.foo.A NLCOMP')
+        ('PEND', 'RUN')
+        >>> _test_helper2('PASS ERS.foo.A RUN\nFAIL ERS.foo.A MEMCOMP')
+        ('PASS', 'RUN')
+        >>> _test_helper2('PASS ERS.foo.A RUN\nFAIL ERS.foo.A NLCOMP', ignore_namelists=True)
+        ('PASS', 'RUN')
+        >>> _test_helper2('PASS ERS.foo.A COMPARE_1\nFAIL ERS.foo.A NLCOMP\nFAIL ERS.foo.A COMPARE_2\nPASS ERS.foo.A RUN')
+        ('FAIL', 'COMPARE_2')
+        >>> _test_helper2('FAIL ERS.foo.A BASELINE\nFAIL ERS.foo.A NLCOMP\nPASS ERS.foo.A COMPARE_2\nPASS ERS.foo.A RUN')
+        ('DIFF', 'BASELINE')
+        >>> _test_helper2('FAIL ERS.foo.A BASELINE\nFAIL ERS.foo.A NLCOMP\nFAIL ERS.foo.A COMPARE_2\nPASS ERS.foo.A RUN')
+        ('FAIL', 'COMPARE_2')
+        >>> _test_helper2('PEND ERS.foo.A COMPARE_2\nFAIL ERS.foo.A RUN')
+        ('FAIL', 'RUN')
+        >>> 
_test_helper2('PEND ERS.foo.A COMPARE_2\nPASS ERS.foo.A RUN')
+        ('PEND', 'COMPARE_2')
+        >>> _test_helper2('PASS ERS.foo.A MODEL_BUILD')
+        ('PASS', 'MODEL_BUILD')
+        >>> _test_helper2('PEND ERS.foo.A MODEL_BUILD\nPEND ERS.foo.A RUN')
+        ('PEND', 'MODEL_BUILD')
+        >>> _test_helper2('PASS ERS.foo.A MODEL_BUILD', wait_for_run=True)
+        ('PEND', 'RUN')
+        >>> _test_helper2('FAIL ERS.foo.A MODEL_BUILD', wait_for_run=True)
+        ('FAIL', 'MODEL_BUILD')
+        >>> _test_helper2('PASS ERS.foo.A MODEL_BUILD\nPEND ERS.foo.A RUN', wait_for_run=True)
+        ('PEND', 'RUN')
+        >>> _test_helper2('PASS ERS.foo.A MODEL_BUILD\nFAIL ERS.foo.A RUN', wait_for_run=True)
+        ('FAIL', 'RUN')
+        >>> _test_helper2('PASS ERS.foo.A MODEL_BUILD\nPASS ERS.foo.A RUN', wait_for_run=True)
+        ('PASS', 'RUN')
+        >>> _test_helper2('PASS ERS.foo.A MODEL_BUILD\nFAIL ERS.foo.A RUN\nPEND ERS.foo.A COMPARE')
+        ('FAIL', 'RUN')
+        >>> _test_helper2('PASS ERS.foo.A MODEL_BUILD\nPEND ERS.foo.A RUN', no_run=True)
+        ('PASS', 'MODEL_BUILD')
+        >>> s = '''PASS ERS.foo.A CREATE_NEWCASE
+        ... PASS ERS.foo.A XML
+        ... PASS ERS.foo.A SETUP
+        ... PASS ERS.foo.A SHAREDLIB_BUILD time=454
+        ... PASS ERS.foo.A NLCOMP
+        ... PASS ERS.foo.A MODEL_BUILD time=363
+        ... PASS ERS.foo.A SUBMIT
+        ... PASS ERS.foo.A RUN time=73
+        ... PEND ERS.foo.A COMPARE_base_single_thread
+        ... FAIL ERS.foo.A BASELINE master: DIFF
+        ... PASS ERS.foo.A TPUTCOMP
+        ... PASS ERS.foo.A MEMLEAK insufficient data for memleak test
+        ... PASS ERS.foo.A SHORT_TERM_ARCHIVER
+        ... '''
+        >>> _test_helper2(s, no_perm=True)
+        ('PEND', 'COMPARE_base_single_thread')
+        >>> s = '''PASS ERS.foo.A CREATE_NEWCASE
+        ... PASS ERS.foo.A XML
+        ... PASS ERS.foo.A SETUP
+        ... PEND ERS.foo.A SHAREDLIB_BUILD
+        ... FAIL ERS.foo.A NLCOMP
+        ... 
''' + >>> _test_helper2(s, no_run=True) + ('NLFAIL', 'SETUP') + >>> _test_helper2(s, no_run=False) + ('PEND', 'SHAREDLIB_BUILD') + """ + # Core phases take priority + core_rv, phase = self._get_overall_status_based_on_phases( + CORE_PHASES, + wait_for_run=wait_for_run, + check_throughput=check_throughput, + check_memory=check_memory, + ignore_namelists=ignore_namelists, + ignore_memleak=ignore_memleak, + no_run=no_run, + ) + if core_rv != TEST_PASS_STATUS: + return core_rv, phase + else: + phase_order = list(CORE_PHASES) + phase_order.extend( + [item for item in self._phase_statuses if item not in CORE_PHASES] + ) + + return self._get_overall_status_based_on_phases( + phase_order, + wait_for_run=wait_for_run, + check_throughput=check_throughput, + check_memory=check_memory, + ignore_namelists=ignore_namelists, + ignore_memleak=ignore_memleak, + no_run=no_run, + )
+
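The precedence rules exercised by the doctests above (PEND outranks PASS so that waiting continues, FAIL outranks both, and a namelist-only failure is the weakest non-PASS result) can be sketched as a small standalone reducer. This is an illustrative simplification, not CIME's actual implementation: the real `get_overall_test_status` also handles `no_run`, `wait_for_run`, the ignore flags, and per-phase special cases, and reports NLFAIL against the last core phase rather than NLCOMP itself.

```python
# Illustrative precedence sketch; status names match CIME's, the
# PRECEDENCE table and overall_status() are hypothetical helpers.
PRECEDENCE = {"PASS": 0, "NLFAIL": 1, "PEND": 2, "DIFF": 3, "FAIL": 4}

def overall_status(phase_results):
    """phase_results: list of (phase, status) pairs in processing order.

    Returns the strongest status seen and the phase responsible for it.
    """
    rv, rv_phase = "PASS", None
    for phase, status in phase_results:
        if PRECEDENCE.get(status, 0) > PRECEDENCE[rv]:
            rv, rv_phase = status, phase
    return rv, rv_phase

print(overall_status([("MODEL_BUILD", "PASS"), ("RUN", "PEND")]))
# ('PEND', 'RUN')
```

The same inputs as the `('FAIL', 'MODEL_BUILD')` doctest behave identically here, since FAIL has the highest precedence.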
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/test_utils.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/test_utils.html new file mode 100644 index 00000000000..dd6084771f5 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/test_utils.html @@ -0,0 +1,303 @@ + + + + + + CIME.test_utils — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.test_utils

+"""
+Utility functions used in test_scheduler.py, and by other utilities that need to
+get test lists.
+"""
+import glob
+from CIME.XML.standard_module_setup import *
+from CIME.XML.testlist import Testlist
+from CIME.XML.files import Files
+from CIME.test_status import TEST_STATUS_FILENAME
+import CIME.utils
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +def get_tests_from_xml( + xml_machine=None, + xml_category=None, + xml_compiler=None, + xml_testlist=None, + machine=None, + compiler=None, + driver=None, +): + """ + Parse testlists for a list of tests + """ + listoftests = [] + testlistfiles = [] + if machine is not None: + thismach = machine + if compiler is not None: + thiscompiler = compiler + + if xml_testlist is not None: + expect( + os.path.isfile(xml_testlist), + "Testlist not found or not readable " + xml_testlist, + ) + testlistfiles.append(xml_testlist) + else: + files = Files() + comps = files.get_components("TESTS_SPEC_FILE") + for comp in comps: + test_spec_file = files.get_value("TESTS_SPEC_FILE", {"component": comp}) + if os.path.isfile(test_spec_file): + testlistfiles.append(test_spec_file) + # We need to make nuopc the default for cesm testing, then we can remove this block + files = Files(comp_interface="nuopc") + test_spec_file = files.get_value("TESTS_SPEC_FILE", {"component": "drv"}) + if os.path.isfile(test_spec_file): + testlistfiles.append(test_spec_file) + + for testlistfile in testlistfiles: + thistestlistfile = Testlist(testlistfile) + logger.debug("Testlist file is " + testlistfile) + logger.debug( + "xml_machine {} xml_category {} xml_compiler {}".format( + xml_machine, xml_category, xml_compiler + ) + ) + newtests = thistestlistfile.get_tests(xml_machine, xml_category, xml_compiler) + for test in newtests: + if machine is None: + thismach = test["machine"] + if compiler is None: + thiscompiler = test["compiler"] + test["name"] = CIME.utils.get_full_test_name( + test["testname"], + grid=test["grid"], + compset=test["compset"], + machine=thismach, + compiler=thiscompiler, + testmods_string=None if "testmods" not in test else test["testmods"], + ) + if driver: + # override default or specified driver + founddriver = False + for specdriver in ("Vnuopc", "Vmct", "Vmoab"): + if specdriver in test["name"]: + test["name"] = test["name"].replace( + specdriver, "V{}".format(driver) + 
) + founddriver = True + if not founddriver: + name = test["name"] + index = name.find(".") + test["name"] = name[:index] + "_V{}".format(driver) + name[index:] + + logger.debug( + "Adding test {} with compiler {}".format(test["name"], test["compiler"]) + ) + listoftests += newtests + logger.debug("Found {:d} tests".format(len(listoftests))) + + return listoftests
+ + + +
+[docs]
+def test_to_string(
+    test, category_field_width=0, test_field_width=0, show_options=False
+):
+    """Given a test dictionary, return a string representation suitable for printing
+
+    Args:
+        test (dict): dictionary for a single test - e.g., one element from the
+            list returned by get_tests_from_xml
+        category_field_width (int): minimum amount of space to use for printing the test category
+        test_field_width (int): minimum amount of space to use for printing the test name
+        show_options (bool): if True, print test options, too (note that the 'comment'
+            option is always printed, if present)
+
+    Basic functionality:
+    >>> mytest = {'name': 'SMS.f19_g16.A.cheyenne_intel', 'category': 'prealpha', 'options': {}}
+    >>> test_to_string(mytest, 10)
+    'prealpha : SMS.f19_g16.A.cheyenne_intel'
+
+    Printing comments:
+    >>> mytest = {'name': 'SMS.f19_g16.A.cheyenne_intel', 'category': 'prealpha', 'options': {'comment': 'my remarks'}}
+    >>> test_to_string(mytest, 10)
+    'prealpha : SMS.f19_g16.A.cheyenne_intel # my remarks'
+
+    Newlines in comments are converted to spaces:
+    >>> mytest = {'name': 'SMS.f19_g16.A.cheyenne_intel', 'category': 'prealpha', 'options': {'comment': 'my\\nremarks'}}
+    >>> test_to_string(mytest, 10)
+    'prealpha : SMS.f19_g16.A.cheyenne_intel # my remarks'
+
+    Printing other options, too:
+    >>> mytest = {'name': 'SMS.f19_g16.A.cheyenne_intel', 'category': 'prealpha', 'options': {'comment': 'my remarks', 'wallclock': '0:20', 'memleak_tolerance': 0.2}}
+    >>> test_to_string(mytest, 10, show_options=True)
+    'prealpha : SMS.f19_g16.A.cheyenne_intel # my remarks # memleak_tolerance: 0.2 # wallclock: 0:20'
+    """
+
+    mystr = "%-*s: %-*s" % (
+        category_field_width,
+        test["category"],
+        test_field_width,
+        test["name"],
+    )
+    if "options" in test:
+        myopts = test["options"].copy()
+        comment = myopts.pop("comment", None)
+        if comment:
+            comment = comment.replace("\n", " ")
+            mystr += " # {}".format(comment)
+        if show_options:
+            for one_opt in 
sorted(myopts): + mystr += " # {}: {}".format(one_opt, myopts[one_opt]) + + return mystr
+ + + +
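The `"%-*s"` format used by `test_to_string` is the printf-style idiom where `*` consumes a width argument and `-` left-justifies, so the category and name columns line up across rows. A minimal standalone illustration (the row data here is made up):

```python
# Demonstrate the "%-*s" column-alignment idiom from test_to_string.
rows = [
    ("prealpha", "SMS.f19_g16.A.cheyenne_intel"),
    ("aux_clm", "ERS.f09_g17.I2000Clm50BgcCrop.cheyenne_gnu"),
]
# Pad the category column to the widest category present.
cat_w = max(len(category) for category, _ in rows)
for category, name in rows:
    print("%-*s: %s" % (cat_w, category, name))
# prealpha: SMS.f19_g16.A.cheyenne_intel
# aux_clm : ERS.f09_g17.I2000Clm50BgcCrop.cheyenne_gnu
```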
+[docs]
+def get_test_status_files(test_root, compiler, test_id=None):
+    test_id_glob = (
+        "*{}*".format(compiler)
+        if test_id is None
+        else "*{}*{}*".format(compiler, test_id)
+    )
+    test_status_files = glob.glob(
+        "{}/{}/{}".format(test_root, test_id_glob, TEST_STATUS_FILENAME)
+    )
+    test_status_files = [
+        item
+        for item in test_status_files
+        if not os.path.dirname(item).endswith("ref1")
+        and not os.path.dirname(item).endswith("ref2")
+    ]
+
+    expect(
+        test_status_files,
+        "No matching test cases found for {}/{}/{}".format(
+            test_root, test_id_glob, TEST_STATUS_FILENAME
+        ),
+    )
+    return test_status_files
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/base.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/base.html new file mode 100644 index 00000000000..af1a01161e2 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/base.html @@ -0,0 +1,491 @@ + + + + + + CIME.tests.base — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.tests.base

+#!/usr/bin/env python3
+
+import glob
+import os
+import tempfile
+import time
+import signal
+import shutil
+import stat
+import sys
+import unittest
+
+from CIME import utils
+from CIME.config import Config
+from CIME.XML.machines import Machines
+
+
+
+[docs]
+def typed_os_environ(key, default_value, expected_type=None):
+    # Infer type if not explicitly set
+    dst_type = expected_type or type(default_value)
+
+    value = os.environ.get(key, default_value)
+
+    if value is not None and dst_type == bool:
+        # Anything other than "true" is treated as False; might want to be more strict
+        return value.lower() == "true" if isinstance(value, str) else value
+
+    if value is None:
+        return None
+
+    return dst_type(value)
+ + + +
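The coercion performed by `typed_os_environ` can be demonstrated with a minimal standalone re-implementation: the target type is inferred from the default value unless given explicitly, booleans get a case-insensitive `"true"` parse, and a missing value stays `None`. The environment variable names below are made up for illustration.

```python
import os

def typed_env(key, default, expected_type=None):
    # Infer type from the default unless explicitly overridden.
    dst_type = expected_type or type(default)
    value = os.environ.get(key, default)
    if value is not None and dst_type is bool:
        # Case-insensitive "true" -> True; anything else -> False.
        return value.lower() == "true" if isinstance(value, str) else value
    return None if value is None else dst_type(value)

os.environ["MY_TIMEOUT"] = "30"
os.environ["MY_VERBOSE"] = "TRUE"
print(typed_env("MY_TIMEOUT", 10))         # 30 (int inferred from the default)
print(typed_env("MY_VERBOSE", False))      # True
print(typed_env("MY_MISSING", None, str))  # None
```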
+[docs] +class BaseTestCase(unittest.TestCase): + # These static values are set when scripts/lib/CIME/tests/scripts_regression_tests.py is called. + MACHINE = None + SCRIPT_DIR = utils.get_scripts_root() + TOOLS_DIR = os.path.join(utils.get_cime_root(), "CIME", "Tools") + TEST_ROOT = None + TEST_COMPILER = None + TEST_MPILIB = None + NO_FORTRAN_RUN = None + FAST_ONLY = None + NO_BATCH = None + NO_CMAKE = None + NO_TEARDOWN = None + GLOBAL_TIMEOUT = None + +
+[docs] + def setUp(self): + self._thread_error = None + self._unset_proxy = self.setup_proxy() + self._machine = self.MACHINE.get_machine_name() + self._compiler = ( + self.MACHINE.get_default_compiler() + if self.TEST_COMPILER is None + else self.TEST_COMPILER + ) + self._baseline_name = "fake_testing_only_%s" % utils.get_timestamp() + self._baseline_area = os.path.join(self.TEST_ROOT, "baselines") + self._testroot = self.TEST_ROOT + self._hasbatch = self.MACHINE.has_batch_system() and not self.NO_BATCH + self._do_teardown = not self.NO_TEARDOWN + self._root_dir = os.getcwd() + self._cprnc = self.MACHINE.get_value("CCSM_CPRNC") + customize_path = os.path.join(utils.get_src_root(), "cime_config", "customize") + self._config = Config.load(customize_path)
+ + +
+[docs]
+    def tearDown(self):
+        self.kill_subprocesses()
+
+        os.chdir(self._root_dir)
+
+        if self._unset_proxy:
+            del os.environ["http_proxy"]
+
+        files_to_clean = []
+
+        baselines = os.path.join(self._baseline_area, self._baseline_name)
+        if os.path.isdir(baselines):
+            files_to_clean.append(baselines)
+
+        for test_id in ["master", self._baseline_name]:
+            for leftover in glob.glob(os.path.join(self._testroot, "*%s*" % test_id)):
+                files_to_clean.append(leftover)
+
+        do_teardown = self._do_teardown and sys.exc_info() == (None, None, None)
+        if not do_teardown and files_to_clean:
+            print("Detected a failed test or a user request for no teardown")
+            print("Leaving files:")
+            for file_to_clean in files_to_clean:
+                print(" " + file_to_clean)
+        else:
+            # On batch machines we need to avoid a race condition as the batch
+            # system finishes I/O for the case.
+            if self._hasbatch:
+                time.sleep(5)
+
+            for file_to_clean in files_to_clean:
+                if os.path.isdir(file_to_clean):
+                    shutil.rmtree(file_to_clean)
+                else:
+                    os.remove(file_to_clean)
+ + +
+[docs] + def assert_test_status(self, test_name, test_status_obj, test_phase, expected_stat): + test_status = test_status_obj.get_status(test_phase) + self.assertEqual( + test_status, + expected_stat, + msg="Problem with {}: for phase '{}': has status '{}', expected '{}'".format( + test_name, test_phase, test_status, expected_stat + ), + )
+ + +
+[docs] + def run_cmd_assert_result( + self, cmd, from_dir=None, expected_stat=0, env=None, verbose=False, shell=True + ): + from_dir = os.getcwd() if from_dir is None else from_dir + stat, output, errput = utils.run_cmd( + cmd, from_dir=from_dir, env=env, verbose=verbose, shell=shell + ) + if expected_stat == 0: + expectation = "SHOULD HAVE WORKED, INSTEAD GOT STAT %s" % stat + else: + expectation = "EXPECTED STAT %s, INSTEAD GOT STAT %s" % ( + expected_stat, + stat, + ) + msg = """ + COMMAND: %s + FROM_DIR: %s + %s + OUTPUT: %s + ERRPUT: %s + """ % ( + cmd, + from_dir, + expectation, + output, + errput, + ) + self.assertEqual(stat, expected_stat, msg=msg) + + return output
+ + +
+[docs] + def setup_proxy(self): + if "http_proxy" not in os.environ: + proxy = self.MACHINE.get_value("PROXY") + if proxy is not None: + os.environ["http_proxy"] = proxy + return True + + return False
+ + +
+[docs]
+    def assert_dashboard_has_build(self, build_name, expected_count=1):
+        # Only check the E3SM dashboard when running in e3sm test mode
+        if self._config.test_mode == "e3sm":
+            time.sleep(10)  # Give cdash a chance to update
+
+            wget_file = tempfile.mktemp()
+
+            utils.run_cmd_no_fail(
+                "wget https://my.cdash.org/api/v1/index.php?project=ACME_test --no-check-certificate -O %s"
+                % wget_file
+            )
+
+            raw_text = open(wget_file, "r").read()
+            os.remove(wget_file)
+
+            num_found = raw_text.count(build_name)
+            self.assertEqual(
+                num_found,
+                expected_count,
+                msg="Dashboard did not have expected number of occurrences of build name '%s'. Expected %s, found %s"
+                % (build_name, expected_count, num_found),
+            )
+ + +
+[docs] + def kill_subprocesses( + self, name=None, sig=signal.SIGKILL, expected_num_killed=None + ): + # Kill all subprocesses + proc_ids = utils.find_proc_id(proc_name=name, children_only=True) + if expected_num_killed is not None: + self.assertEqual( + len(proc_ids), + expected_num_killed, + msg="Expected to find %d processes to kill, found %d" + % (expected_num_killed, len(proc_ids)), + ) + for proc_id in proc_ids: + try: + os.kill(proc_id, sig) + except OSError: + pass
+ + +
+[docs] + def kill_python_subprocesses(self, sig=signal.SIGKILL, expected_num_killed=None): + self.kill_subprocesses("[Pp]ython", sig, expected_num_killed)
+ + + def _create_test( + self, + extra_args, + test_id=None, + run_errors=False, + env_changes="", + default_baseline_area=False, + ): + """ + Convenience wrapper around create_test. Returns list of full paths to created cases. If multiple cases, + the order of the returned list is not guaranteed to match the order of the arguments. + """ + # All stub model not supported in nuopc driver + driver = utils.get_cime_default_driver() + if driver == "nuopc" and "cime_developer" in extra_args: + extra_args.append( + " ^SMS_Ln3.T42_T42.S ^PRE.f19_f19.ADESP_TEST ^PRE.f19_f19.ADESP ^DAE.ww3a.ADWAV" + ) + + test_id = ( + "{}-{}".format(self._baseline_name, utils.get_timestamp()) + if test_id is None + else test_id + ) + extra_args.append("-t {}".format(test_id)) + if not default_baseline_area: + extra_args.append("--baseline-root {}".format(self._baseline_area)) + if self.NO_BATCH: + extra_args.append("--no-batch") + if self.TEST_COMPILER and ( + [extra_arg for extra_arg in extra_args if "--compiler" in extra_arg] == [] + ): + extra_args.append("--compiler={}".format(self.TEST_COMPILER)) + if self.TEST_MPILIB and ( + [extra_arg for extra_arg in extra_args if "--mpilib" in extra_arg] == [] + ): + extra_args.append("--mpilib={}".format(self.TEST_MPILIB)) + if [extra_arg for extra_arg in extra_args if "--machine" in extra_arg] == []: + extra_args.append(f"--machine {self.MACHINE.get_machine_name()}") + extra_args.append("--test-root={0} --output-root={0}".format(self._testroot)) + + full_run = ( + set(extra_args) + & set(["-n", "--namelist-only", "--no-setup", "--no-build", "--no-run"]) + ) == set() + if full_run and not self.NO_BATCH: + extra_args.append("--wait") + + expected_stat = 0 if not run_errors else utils.TESTS_FAILED_ERR_CODE + + output = self.run_cmd_assert_result( + "{} {}/create_test {}".format( + env_changes, self.SCRIPT_DIR, " ".join(extra_args) + ), + expected_stat=expected_stat, + ) + cases = [] + for line in output.splitlines(): + if "Case dir:" in line: + 
casedir = line.split()[-1] + self.assertTrue( + os.path.isdir(casedir), msg="Missing casedir {}".format(casedir) + ) + cases.append(casedir) + + self.assertTrue(len(cases) > 0, "create_test made no cases") + + return cases[0] if len(cases) == 1 else cases + + def _wait_for_tests(self, test_id, expect_works=True, always_wait=False): + if self._hasbatch or always_wait: + timeout_arg = ( + "--timeout={}".format(self.GLOBAL_TIMEOUT) + if self.GLOBAL_TIMEOUT is not None + else "" + ) + expected_stat = 0 if expect_works else utils.TESTS_FAILED_ERR_CODE + self.run_cmd_assert_result( + "{}/wait_for_tests {} *{}/TestStatus".format( + self.TOOLS_DIR, timeout_arg, test_id + ), + from_dir=self._testroot, + expected_stat=expected_stat, + ) + +
+[docs] + def get_casedir(self, case_fragment, all_cases): + potential_matches = [item for item in all_cases if case_fragment in item] + self.assertTrue( + len(potential_matches) == 1, + "Ambiguous casedir selection for {}, found {} among {}".format( + case_fragment, potential_matches, all_cases + ), + ) + return potential_matches[0]
+ + +
+[docs] + def verify_perms(self, root_dir): + for root, dirs, files in os.walk(root_dir): + for filename in files: + full_path = os.path.join(root, filename) + st = os.stat(full_path) + self.assertTrue( + st.st_mode & stat.S_IWGRP, + msg="file {} is not group writeable".format(full_path), + ) + self.assertTrue( + st.st_mode & stat.S_IRGRP, + msg="file {} is not group readable".format(full_path), + ) + self.assertTrue( + st.st_mode & stat.S_IROTH, + msg="file {} is not world readable".format(full_path), + ) + + for dirname in dirs: + full_path = os.path.join(root, dirname) + st = os.stat(full_path) + + self.assertTrue( + st.st_mode & stat.S_IWGRP, + msg="dir {} is not group writable".format(full_path), + ) + self.assertTrue( + st.st_mode & stat.S_IRGRP, + msg="dir {} is not group readable".format(full_path), + ) + self.assertTrue( + st.st_mode & stat.S_IXGRP, + msg="dir {} is not group executable".format(full_path), + ) + self.assertTrue( + st.st_mode & stat.S_IROTH, + msg="dir {} is not world readable".format(full_path), + ) + self.assertTrue( + st.st_mode & stat.S_IXOTH, + msg="dir {} is not world executable".format(full_path), + )
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/case_fake.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/case_fake.html new file mode 100644 index 00000000000..66d96a526e6 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/case_fake.html @@ -0,0 +1,310 @@ + + + + + + CIME.tests.case_fake — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.tests.case_fake

+"""
+This module contains a fake implementation of the Case class that can be used
+for testing the tests.
+"""
+
+import os
+from copy import deepcopy
+
+
+
+[docs] +class CaseFake(object): + def __init__(self, case_root, create_case_root=True): + """ + Initialize a new case object for the given case_root directory. + + Args: + case_root (str): path to CASEROOT + create_case_root (bool): If True, creates the directory given by case_root + """ + self.vars = dict() + if create_case_root: + os.makedirs(case_root) + self.set_value("CASEROOT", case_root) + casename = os.path.basename(case_root) + # Typically, CIME_OUTPUT_ROOT is independent of the case. Here, + # we nest it under CASEROOT so that (1) tests don't interfere + # with each other; (2) a cleanup that removes CASEROOT will also + # remove CIME_OUTPUT_ROOT. + self.set_value("CIME_OUTPUT_ROOT", os.path.join(case_root, "CIME_OUTPUT_ROOT")) + self.set_value("CASE", casename) + self.set_value("CASEBASEID", casename) + self.set_value("RUN_TYPE", "startup") + self.set_exeroot() + self.set_rundir() + +
+[docs] + def set_initial_test_values(self): + pass
+ + +
+[docs] + def get_value(self, item): + """ + Get the value of the given item + + Returns None if item isn't set for this case + + Args: + item (str): variable of interest + """ + return self.vars.get(item)
+ + +
+[docs] + def set_value(self, item, value): + """ + Set the value of the given item to the given value + + Args: + item (str): variable of interest + value (any type): new value for item + """ + self.vars[item] = value
+ + +
+[docs] + def copy(self, newcasename, newcaseroot): + """ + Create and return a copy of self, but with CASE and CASEBASEID set to newcasename, + CASEROOT set to newcaseroot, and RUNDIR set appropriately. + + Args: + newcasename (str): new value for CASE + newcaseroot (str): new value for CASEROOT + """ + newcase = deepcopy(self) + newcase.set_value("CASE", newcasename) + newcase.set_value("CASEBASEID", newcasename) + newcase.set_value("CASEROOT", newcaseroot) + newcase.set_exeroot() + newcase.set_rundir() + + return newcase
+ + +
+[docs] + def create_clone( + self, + newcase, + keepexe=False, + mach_dir=None, + project=None, + cime_output_root=None, + exeroot=None, + rundir=None, + ): + # Need to disable unused-argument checking: keepexe is needed to match + # the interface of Case, but is not used in this fake implementation + # + # pylint: disable=unused-argument + """ + Create a clone of the current case. Also creates the CASEROOT directory + for the clone case (given by newcase). + + Args: + newcase (str): full path to the new case. This directory should not + already exist; it will be created + keepexe (bool, optional): Ignored + mach_dir (str, optional): Ignored + project (str, optional): Ignored + cime_output_root (str, optional): New CIME_OUTPUT_ROOT for the clone + exeroot (str, optional): New EXEROOT for the clone + rundir (str, optional): New RUNDIR for the clone + + Returns the clone case object + """ + newcaseroot = os.path.abspath(newcase) + newcasename = os.path.basename(newcase) + os.makedirs(newcaseroot) + clone = self.copy(newcasename=newcasename, newcaseroot=newcaseroot) + if cime_output_root is not None: + clone.set_value("CIME_OUTPUT_ROOT", cime_output_root) + if exeroot is not None: + clone.set_value("EXEROOT", exeroot) + if rundir is not None: + clone.set_value("RUNDIR", rundir) + + return clone
+ + +
+[docs] + def flush(self): + pass
+ + +
+[docs] + def make_rundir(self): + """ + Make directory given by RUNDIR + """ + os.makedirs(self.get_value("RUNDIR"))
+ + +
+[docs] + def set_exeroot(self): + """ + Assumes CASEROOT is already set; sets an appropriate EXEROOT + (nested inside CASEROOT) + """ + self.set_value("EXEROOT", os.path.join(self.get_value("CASEROOT"), "bld"))
+ + +
+[docs] + def set_rundir(self): + """ + Assumes CASEROOT is already set; sets an appropriate RUNDIR (nested + inside CASEROOT) + """ + self.set_value("RUNDIR", os.path.join(self.get_value("CASEROOT"), "run"))
+ + +
+[docs] + def case_setup(self, clean=False, test_mode=False, reset=False): + pass
+ + +
+[docs] + def load_env(self, reset=False): + pass
+ + + def __enter__(self): + pass + + def __exit__(self, *_): + pass
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/custom_assertions_test_status.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/custom_assertions_test_status.html new file mode 100644 index 00000000000..48529593c85 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/custom_assertions_test_status.html @@ -0,0 +1,230 @@ + + + + + + CIME.tests.custom_assertions_test_status — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.custom_assertions_test_status

+"""
+This module contains a class that extends unittest.TestCase, adding custom assertions that
+can be used when testing TestStatus.
+"""
+
+from CIME.XML.standard_module_setup import *
+
+import unittest
+import re
+from CIME import test_status
+
+
+
+[docs] +class CustomAssertionsTestStatus(unittest.TestCase): +
+[docs] + def assert_status_of_phase(self, output, status, phase, test_name, xfail=None): + """Asserts that 'output' contains a line showing the given + status for the given phase for the given test_name. + + 'xfail' should have one of the following values: + - None (the default): assertion passes regardless of whether there is an + EXPECTED/UNEXPECTED string + - 'no': The line should end with the phase, with no additional text after that + - 'expected': After the phase, the line should contain '(EXPECTED FAILURE)' + - 'unexpected': After the phase, the line should contain '(UNEXPECTED' + """ + expected = r"^ *{} +".format( + re.escape(status) + ) + self._test_name_and_phase_regex(test_name, phase) + + if xfail == "no": + # There should be no other text after the testname and phase regex + expected += r" *$" + elif xfail == "expected": + expected += r" *{}".format( + re.escape(test_status.TEST_EXPECTED_FAILURE_COMMENT) + ) + elif xfail == "unexpected": + expected += r" *{}".format( + re.escape(test_status.TEST_UNEXPECTED_FAILURE_COMMENT_START) + ) + else: + expect(xfail is None, "Unhandled value of xfail argument") + + expected_re = re.compile(expected, flags=re.MULTILINE) + + self.assertRegex(output, expected_re)
+ + +
+[docs] + def assert_phase_absent(self, output, phase, test_name): + """Asserts that 'output' does not contain a status line for the + given phase and test_name""" + expected = re.compile( + r"^.* +" + self._test_name_and_phase_regex(test_name, phase), + flags=re.MULTILINE, + ) + + self.assertNotRegex(output, expected)
+ + +
+[docs]
+    def assert_core_phases(self, output, test_name, fails):
+        """Asserts that 'output' contains a line for each of the core test
+        phases for the given test_name. All results should be PASS
+        except those given by the fails list, which should be FAIL.
+        """
+        for phase in test_status.CORE_PHASES:
+            if phase in fails:
+                status = test_status.TEST_FAIL_STATUS
+            else:
+                status = test_status.TEST_PASS_STATUS
+            self.assert_status_of_phase(
+                output=output, status=status, phase=phase, test_name=test_name
+            )
+ + +
+[docs] + def assert_num_expected_unexpected_fails( + self, output, num_expected, num_unexpected + ): + """Asserts that the number of occurrences of expected and unexpected fails in + 'output' matches the given numbers""" + self.assertEqual( + output.count(test_status.TEST_EXPECTED_FAILURE_COMMENT), num_expected + ) + self.assertEqual( + output.count(test_status.TEST_UNEXPECTED_FAILURE_COMMENT_START), + num_unexpected, + )
+ + + @staticmethod + def _test_name_and_phase_regex(test_name, phase): + """Returns a regex matching the portion of a TestStatus line + containing the test name and phase""" + # The main purpose of extracting this into a shared method is: + # assert_phase_absent could wrongly pass if the format of the + # TestStatus output changed without that method's regex + # changing. By making its regex shared as much as possible with + # the regex in assert_status_of_phase, we decrease the chances + # of these false passes. + return r"{} +{}".format(re.escape(test_name), re.escape(phase))
+ +
+ +
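The regex-sharing idea behind `_test_name_and_phase_regex` can be sketched standalone: escape the literal test name and phase, join them with flexible spacing, and anchor the status at line start in multiline mode. `status_line_regex` here is a hypothetical helper illustrating the pattern shape, not the class's API.

```python
import re

def status_line_regex(status, test_name, phase):
    # Escape literals so test names like "ERS.foo.A" match dots literally;
    # MULTILINE lets "^" anchor at each line of a TestStatus dump.
    return re.compile(
        r"^ *{} +{} +{}".format(
            re.escape(status), re.escape(test_name), re.escape(phase)
        ),
        flags=re.MULTILINE,
    )

output = "PASS ERS.foo.A MODEL_BUILD\nFAIL ERS.foo.A RUN\n"
print(bool(status_line_regex("FAIL", "ERS.foo.A", "RUN").search(output)))  # True
print(bool(status_line_regex("PASS", "ERS.foo.A", "RUN").search(output)))  # False
```

Keeping the name-and-phase fragment in one place means a change to the TestStatus line format cannot silently make the absence assertion pass while the presence assertion fails.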
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/scripts_regression_tests.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/scripts_regression_tests.html new file mode 100644 index 00000000000..f87bbe95735 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/scripts_regression_tests.html @@ -0,0 +1,424 @@ + + + + + + CIME.tests.scripts_regression_tests — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.scripts_regression_tests

+#!/usr/bin/env python3
+
+"""
+Script containing CIME python regression test suite. This suite should be run
+to confirm overall CIME correctness.
+"""
+
+import glob, os, re, shutil, signal, sys, tempfile, threading, time, logging, unittest, getpass, filecmp, atexit, functools
+
+CIMEROOT = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", ".."))
+sys.path.insert(0, CIMEROOT)
+
+from xml.etree.ElementTree import ParseError
+
+import subprocess, argparse
+
+subprocess.call('/bin/rm -f $(find . -name "*.pyc")', shell=True, cwd=CIMEROOT)
+import stat as osstat
+
+import collections
+
+from CIME.utils import (
+    run_cmd,
+    run_cmd_no_fail,
+    get_lids,
+    get_current_commit,
+    safe_copy,
+    CIMEError,
+    get_cime_root,
+    get_src_root,
+    Timeout,
+    import_from_file,
+    get_model,
+)
+import CIME.test_scheduler, CIME.wait_for_tests
+from CIME import get_tests
+from CIME.test_scheduler import TestScheduler
+from CIME.XML.env_run import EnvRun
+from CIME.XML.machines import Machines
+from CIME.XML.files import Files
+from CIME.case import Case
+from CIME.code_checker import check_code, get_all_checkable_files
+from CIME.test_status import *
+from CIME.provenance import get_test_success, save_test_success
+from CIME import utils
+from CIME.tests.base import BaseTestCase
+
+os.environ["CIME_GLOBAL_WALLTIME"] = "0:05:00"
+
+TEST_RESULT = None
+
+
+
+[docs] +def write_provenance_info(machine, test_compiler, test_mpilib, test_root): + curr_commit = get_current_commit(repo=CIMEROOT) + logging.info("Testing commit %s" % curr_commit) + cime_model = get_model() + logging.info("Using cime_model = %s" % cime_model) + logging.info("Testing machine = %s" % machine.get_machine_name()) + if test_compiler is not None: + logging.info("Testing compiler = %s" % test_compiler) + if test_mpilib is not None: + logging.info("Testing mpilib = %s" % test_mpilib) + logging.info("Test root: %s" % test_root) + logging.info("Test driver: %s" % CIME.utils.get_cime_default_driver()) + logging.info("Python version {}\n".format(sys.version))
+ + + +
+[docs] +def cleanup(test_root): + if ( + os.path.exists(test_root) + and TEST_RESULT is not None + and TEST_RESULT.wasSuccessful() + ): + testreporter = os.path.join(test_root, "testreporter") + files = os.listdir(test_root) + if len(files) == 1 and os.path.isfile(testreporter): + os.unlink(testreporter) + if not os.listdir(test_root): + print("All pass, removing directory:", test_root) + os.rmdir(test_root)
+ + + +
+
[docs] +def setup_arguments(parser): + parser.add_argument( + "--fast", + action="store_true", + help="Skip full system tests, which saves a lot of time", + ) + + parser.add_argument( + "--no-batch", + action="store_true", + help="Do not submit jobs to batch system, run locally." + " If false, will default to machine setting.", + ) + + parser.add_argument( + "--no-fortran-run", + action="store_true", + help="Do not run any fortran jobs. Implies --fast." + " Used for GitHub Actions", + ) + + parser.add_argument( + "--no-cmake", action="store_true", help="Do not run cmake tests" + ) + + parser.add_argument( + "--no-teardown", + action="store_true", + help="Do not delete directories left behind by testing", + ) + + parser.add_argument( + "--machine", help="Select a specific machine setting for cime", default=None + ) + + parser.add_argument( + "--compiler", help="Select a specific compiler setting for cime", default=None + ) + + parser.add_argument( + "--mpilib", help="Select a specific mpilib setting for cime", default=None + ) + + parser.add_argument( + "--test-root", + help="Select a specific test root for all cases created by the testing", + default=None, + ) + + parser.add_argument( + "--timeout", + type=int, + help="Select a specific timeout for all tests", + default=None, + )
+ + + +
+[docs] +def configure_tests( + timeout, + no_fortran_run, + fast, + no_batch, + no_cmake, + no_teardown, + machine, + compiler, + mpilib, + test_root, + **kwargs +): + config = CIME.utils.get_cime_config() + + if timeout: + BaseTestCase.GLOBAL_TIMEOUT = str(timeout) + + BaseTestCase.NO_FORTRAN_RUN = no_fortran_run or False + BaseTestCase.FAST_ONLY = fast or no_fortran_run + BaseTestCase.NO_BATCH = no_batch or False + BaseTestCase.NO_CMAKE = no_cmake or False + BaseTestCase.NO_TEARDOWN = no_teardown or False + + # make sure we have default values + MACHINE = None + TEST_COMPILER = None + TEST_MPILIB = None + + if machine is not None: + MACHINE = Machines(machine=machine) + os.environ["CIME_MACHINE"] = machine + elif "CIME_MACHINE" in os.environ: + MACHINE = Machines(machine=os.environ["CIME_MACHINE"]) + elif config.has_option("create_test", "MACHINE"): + MACHINE = Machines(machine=config.get("create_test", "MACHINE")) + elif config.has_option("main", "MACHINE"): + MACHINE = Machines(machine=config.get("main", "MACHINE")) + else: + MACHINE = Machines() + + BaseTestCase.MACHINE = MACHINE + + if compiler is not None: + TEST_COMPILER = compiler + elif config.has_option("create_test", "COMPILER"): + TEST_COMPILER = config.get("create_test", "COMPILER") + elif config.has_option("main", "COMPILER"): + TEST_COMPILER = config.get("main", "COMPILER") + + BaseTestCase.TEST_COMPILER = TEST_COMPILER + + if mpilib is not None: + TEST_MPILIB = mpilib + elif config.has_option("create_test", "MPILIB"): + TEST_MPILIB = config.get("create_test", "MPILIB") + elif config.has_option("main", "MPILIB"): + TEST_MPILIB = config.get("main", "MPILIB") + + BaseTestCase.TEST_MPILIB = TEST_MPILIB + + if test_root is not None: + TEST_ROOT = test_root + elif config.has_option("create_test", "TEST_ROOT"): + TEST_ROOT = config.get("create_test", "TEST_ROOT") + else: + TEST_ROOT = os.path.join( + MACHINE.get_value("CIME_OUTPUT_ROOT"), + "scripts_regression_test.%s" % CIME.utils.get_timestamp(), + ) + 
+ BaseTestCase.TEST_ROOT = TEST_ROOT + + write_provenance_info(MACHINE, TEST_COMPILER, TEST_MPILIB, TEST_ROOT) + + atexit.register(functools.partial(cleanup, TEST_ROOT))
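`configure_tests` resolves each of MACHINE, COMPILER, MPILIB, and TEST_ROOT with the same precedence chain: explicit argument first, then (for the machine) an environment variable, then the `create_test` and `main` config sections in order. A minimal standalone sketch of that chain — `resolve_setting` is a hypothetical helper, not part of CIME:

```python
import configparser
import os


def resolve_setting(cli_value, env_var, config, key, sections=("create_test", "main")):
    """Resolve a setting with the precedence used by configure_tests:
    explicit CLI value, then an environment variable, then config
    sections checked in order; None if nothing provides a value."""
    if cli_value is not None:
        return cli_value
    if env_var and env_var in os.environ:
        return os.environ[env_var]
    for section in sections:
        if config.has_option(section, key):
            return config.get(section, key)
    return None


config = configparser.ConfigParser()
config.read_string("[main]\nMACHINE = docker\n")

# An explicit CLI value wins over everything else.
assert resolve_setting("pm-cpu", "CIME_MACHINE", config, "MACHINE") == "pm-cpu"

# With no CLI value and the (hypothetical) env var unset,
# the config sections answer, checked in order.
assert resolve_setting(None, "EXAMPLE_UNSET_VAR", config, "MACHINE") == "docker"
```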
+ + + +def _main_func(description): + help_str = """ +{0} [TEST] [TEST] +OR +{0} --help + +\033[1mEXAMPLES:\033[0m + \033[1;32m# Run the full suite \033[0m + > {0} + + \033[1;32m# Run single test file (with or without extension) \033[0m + > {0} test_unit_doctest + + \033[1;32m# Run single test class from a test file \033[0m + > {0} test_unit_doctest.TestDocs + + \033[1;32m# Run single test case from a test class \033[0m + > {0} test_unit_doctest.TestDocs.test_lib_docs +""".format( + os.path.basename(sys.argv[0]) + ) + + parser = argparse.ArgumentParser( + usage=help_str, + description=description, + formatter_class=argparse.ArgumentDefaultsHelpFormatter, + ) + + setup_arguments(parser) + + parser.add_argument("--verbose", action="store_true", help="Enable verbose logging") + + parser.add_argument("--debug", action="store_true", help="Enable debug logging") + + parser.add_argument("--silent", action="store_true", help="Disable all logging") + + parser.add_argument( + "tests", nargs="*", help="Specific tests to run e.g. test_unit*" + ) + + ns, args = parser.parse_known_args() + + # Now set the sys.argv to the unittest_args (leaving sys.argv[0] alone) + sys.argv[1:] = args + + utils.configure_logging(ns.verbose, ns.debug, ns.silent) + + configure_tests(**vars(ns)) + + os.chdir(CIMEROOT) + + if len(ns.tests) == 0: + test_root = os.path.join(CIMEROOT, "CIME", "tests") + + test_suite = unittest.defaultTestLoader.discover(test_root) + else: + # Fixes handling shell expansion e.g. 
test_unit_*, by removing python extension + tests = [x.replace(".py", "").replace("/", ".") for x in ns.tests] + + # Try to load tests by just names + test_suite = unittest.defaultTestLoader.loadTestsFromNames(tests) + + test_runner = unittest.TextTestRunner(verbosity=2) + + global TEST_RESULT + + TEST_RESULT = test_runner.run(test_suite) + + # Implements same behavior as unittest.main + # https://github.com/python/cpython/blob/b6d68aa08baebb753534a26d537ac3c0d2c21c79/Lib/unittest/main.py#L272-L273 + sys.exit(not TEST_RESULT.wasSuccessful()) + + +if __name__ == "__main__": + _main_func(__doc__) +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_bless_tests_results.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_bless_tests_results.html new file mode 100644 index 00000000000..b7f8afedc92 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_bless_tests_results.html @@ -0,0 +1,354 @@ + + + + + + CIME.tests.test_sys_bless_tests_results — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_sys_bless_tests_results

+#!/usr/bin/env python3
+
+import glob
+import re
+import os
+import stat
+
+from CIME import utils
+from CIME.tests import base
+
+
+
+[docs] +class TestBlessTestResults(base.BaseTestCase): +
+[docs] + def setUp(self): + super().setUp() + + # Set a restrictive umask so we can test that SharedAreas used for + # recording baselines are working + restrictive_mask = 0o027 + self._orig_umask = os.umask(restrictive_mask) + if not self._cprnc: + self.skipTest( + "Test cannot run without cprnc program defined in config_machines.xml" + )
+ + +
+[docs] + def tearDown(self): + super().tearDown() + + if "TESTRUNDIFF_ALTERNATE" in os.environ: + del os.environ["TESTRUNDIFF_ALTERNATE"] + + os.umask(self._orig_umask)
+ + +
+[docs] + def test_bless_test_results(self): + if self.NO_FORTRAN_RUN: + self.skipTest("Skipping fortran test") + # Test resubmit scenario if Machine has a batch system + if self.MACHINE.has_batch_system(): + test_names = [ + "TESTRUNDIFFRESUBMIT_Mmpi-serial.f19_g16_rx1.A", + "TESTRUNDIFF_Mmpi-serial.f19_g16_rx1.A", + ] + else: + test_names = ["TESTRUNDIFF_P1.f19_g16_rx1.A"] + + # Generate some baselines + for test_name in test_names: + if self._config.create_test_flag_mode == "e3sm": + genargs = ["-g", "-o", "-b", self._baseline_name, test_name] + compargs = ["-c", "-b", self._baseline_name, test_name] + else: + genargs = [ + "-g", + self._baseline_name, + "-o", + test_name, + "--baseline-root ", + self._baseline_area, + ] + compargs = [ + "-c", + self._baseline_name, + test_name, + "--baseline-root ", + self._baseline_area, + ] + + self._create_test(genargs) + # Hist compare should pass + self._create_test(compargs) + # Change behavior + os.environ["TESTRUNDIFF_ALTERNATE"] = "True" + + # Hist compare should now fail + test_id = "%s-%s" % (self._baseline_name, utils.get_timestamp()) + self._create_test(compargs, test_id=test_id, run_errors=True) + + # compare_test_results should detect the fail + cpr_cmd = "{}/compare_test_results --test-root {} -t {} ".format( + self.TOOLS_DIR, self._testroot, test_id + ) + output = self.run_cmd_assert_result( + cpr_cmd, expected_stat=utils.TESTS_FAILED_ERR_CODE + ) + + # use regex + expected_pattern = re.compile(r"FAIL %s[^\s]* BASELINE" % test_name) + the_match = expected_pattern.search(output) + self.assertNotEqual( + the_match, + None, + msg="Cmd '%s' failed to display failed test %s in output:\n%s" + % (cpr_cmd, test_name, output), + ) + # Bless + utils.run_cmd_no_fail( + "{}/bless_test_results --test-root {} --hist-only --force -t {}".format( + self.TOOLS_DIR, self._testroot, test_id + ) + ) + # Hist compare should now pass again + self._create_test(compargs) + self.verify_perms(self._baseline_area) + if 
"TESTRUNDIFF_ALTERNATE" in os.environ: + del os.environ["TESTRUNDIFF_ALTERNATE"]
+ + +
+[docs] + def test_rebless_namelist(self): + # Generate some namelist baselines + if self.NO_FORTRAN_RUN: + self.skipTest("Skipping fortran test") + test_to_change = "TESTRUNPASS_P1.f19_g16_rx1.A" + if self._config.create_test_flag_mode == "e3sm": + genargs = ["-g", "-o", "-b", self._baseline_name, "cime_test_only_pass"] + compargs = ["-c", "-b", self._baseline_name, "cime_test_only_pass"] + else: + genargs = ["-g", self._baseline_name, "-o", "cime_test_only_pass"] + compargs = ["-c", self._baseline_name, "cime_test_only_pass"] + + self._create_test(genargs) + + # Basic namelist compare + test_id = "%s-%s" % (self._baseline_name, utils.get_timestamp()) + cases = self._create_test(compargs, test_id=test_id) + casedir = self.get_casedir(test_to_change, cases) + + # Check standalone case.cmpgen_namelists + self.run_cmd_assert_result("./case.cmpgen_namelists", from_dir=casedir) + + # compare_test_results should pass + cpr_cmd = "{}/compare_test_results --test-root {} -n -t {} ".format( + self.TOOLS_DIR, self._testroot, test_id + ) + output = self.run_cmd_assert_result(cpr_cmd) + + # use regex + expected_pattern = re.compile(r"PASS %s[^\s]* NLCOMP" % test_to_change) + the_match = expected_pattern.search(output) + msg = f"Cmd {cpr_cmd} failed to display passed test in output:\n{output}" + self.assertNotEqual( + the_match, + None, + msg=msg, + ) + + # Modify namelist + fake_nl = """ + &fake_nml + fake_item = 'fake' + fake = .true. 
+/""" + baseline_area = self._baseline_area + baseline_glob = glob.glob( + os.path.join(baseline_area, self._baseline_name, "TEST*") + ) + self.assertEqual( + len(baseline_glob), + 3, + msg="Expected three matches, got:\n%s" % "\n".join(baseline_glob), + ) + + for baseline_dir in baseline_glob: + nl_path = os.path.join(baseline_dir, "CaseDocs", "datm_in") + self.assertTrue(os.path.isfile(nl_path), msg="Missing file %s" % nl_path) + + os.chmod(nl_path, stat.S_IRUSR | stat.S_IWUSR) + with open(nl_path, "a") as nl_file: + nl_file.write(fake_nl) + + # Basic namelist compare should now fail + test_id = "%s-%s" % (self._baseline_name, utils.get_timestamp()) + self._create_test(compargs, test_id=test_id, run_errors=True) + casedir = self.get_casedir(test_to_change, cases) + + # Unless namelists are explicitly ignored + test_id2 = "%s-%s" % (self._baseline_name, utils.get_timestamp()) + self._create_test(compargs + ["--ignore-namelists"], test_id=test_id2) + + self.run_cmd_assert_result( + "./case.cmpgen_namelists", from_dir=casedir, expected_stat=100 + ) + + # preview namelists should work + self.run_cmd_assert_result("./preview_namelists", from_dir=casedir) + + # This should still fail + self.run_cmd_assert_result( + "./case.cmpgen_namelists", from_dir=casedir, expected_stat=100 + ) + + # compare_test_results should fail + cpr_cmd = "{}/compare_test_results --test-root {} -n -t {} ".format( + self.TOOLS_DIR, self._testroot, test_id + ) + output = self.run_cmd_assert_result( + cpr_cmd, expected_stat=utils.TESTS_FAILED_ERR_CODE + ) + + # use regex + expected_pattern = re.compile(r"FAIL %s[^\s]* NLCOMP" % test_to_change) + the_match = expected_pattern.search(output) + self.assertNotEqual( + the_match, + None, + msg="Cmd '%s' failed to display passed test in output:\n%s" + % (cpr_cmd, output), + ) + + # Bless + new_test_id = "%s-%s" % (self._baseline_name, utils.get_timestamp()) + utils.run_cmd_no_fail( + "{}/bless_test_results --test-root {} -n --force -t {} 
--new-test-root={} --new-test-id={}".format( + self.TOOLS_DIR, self._testroot, test_id, self._testroot, new_test_id + ) + ) + + # Basic namelist compare should now pass again + self._create_test(compargs) + + self.verify_perms(self._baseline_area)
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_build_system.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_build_system.html new file mode 100644 index 00000000000..ed99d61a66c --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_build_system.html @@ -0,0 +1,147 @@ + + + + + + CIME.tests.test_sys_build_system — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_sys_build_system

+#!/usr/bin/env python3
+
+from CIME.tests import base
+
+
+
+[docs] +class TestBuildSystem(base.BaseTestCase): +
+[docs] + def test_clean_rebuild(self): + casedir = self._create_test( + ["--no-run", "SMS.f19_g16_rx1.A"], test_id=self._baseline_name + ) + + # Clean a component and a sharedlib + self.run_cmd_assert_result("./case.build --clean atm", from_dir=casedir) + self.run_cmd_assert_result("./case.build --clean gptl", from_dir=casedir) + + # Repeating should not be an error + self.run_cmd_assert_result("./case.build --clean atm", from_dir=casedir) + self.run_cmd_assert_result("./case.build --clean gptl", from_dir=casedir) + + self.run_cmd_assert_result("./case.build", from_dir=casedir)
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_cime_case.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_cime_case.html new file mode 100644 index 00000000000..b4a049be30b --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_cime_case.html @@ -0,0 +1,949 @@ + + + + + + CIME.tests.test_sys_cime_case — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_sys_cime_case

+#!/usr/bin/env python3
+
+import collections
+import os
+import re
+import shutil
+import sys
+import time
+
+from CIME import utils
+from CIME.tests import base
+from CIME.case.case import Case
+from CIME.XML.env_run import EnvRun
+
+try:
+    collectionsAbc = collections.abc
+except AttributeError:
+    collectionsAbc = collections
+
+
+
+[docs] +class TestCimeCase(base.BaseTestCase): +
+[docs] + def test_cime_case(self): + casedir = self._create_test( + ["--no-build", "TESTRUNPASS_P1.f19_g16_rx1.A"], test_id=self._baseline_name + ) + + self.assertEqual(type(self.MACHINE.get_value("MAX_TASKS_PER_NODE")), int) + self.assertTrue( + type(self.MACHINE.get_value("PROJECT_REQUIRED")) in [type(None), bool] + ) + + with Case(casedir, read_only=False) as case: + build_complete = case.get_value("BUILD_COMPLETE") + self.assertFalse( + build_complete, + msg="Build complete had wrong value '%s'" % build_complete, + ) + + case.set_value("BUILD_COMPLETE", True) + build_complete = case.get_value("BUILD_COMPLETE") + self.assertTrue( + build_complete, + msg="Build complete had wrong value '%s'" % build_complete, + ) + + case.flush() + + build_complete = utils.run_cmd_no_fail( + "./xmlquery BUILD_COMPLETE --value", from_dir=casedir + ) + self.assertEqual( + build_complete, + "TRUE", + msg="Build complete had wrong value '%s'" % build_complete, + ) + + # Test some test properties + self.assertEqual(case.get_value("TESTCASE"), "TESTRUNPASS")
+ + + def _batch_test_fixture(self, testcase_name): + if not self.MACHINE.has_batch_system() or self.NO_BATCH: + self.skipTest("Skipping testing user prerequisites without batch systems") + testdir = os.path.join(self._testroot, testcase_name) + if os.path.exists(testdir): + shutil.rmtree(testdir) + args = "--case {name} --script-root {testdir} --compset X --res f19_g16 --handle-preexisting-dirs=r --output-root {testdir}".format( + name=testcase_name, testdir=testdir + ) + if self._config.allow_unsupported: + args += " --run-unsupported" + + self.run_cmd_assert_result( + "{}/create_newcase {}".format(self.SCRIPT_DIR, args), + from_dir=self.SCRIPT_DIR, + ) + self.run_cmd_assert_result("./case.setup", from_dir=testdir) + + return testdir + +
+[docs] + def test_cime_case_prereq(self): + testcase_name = "prereq_test" + testdir = self._batch_test_fixture(testcase_name) + with Case(testdir, read_only=False) as case: + if case.get_value("depend_string") is None: + self.skipTest( + "Skipping prereq test, depend_string was not provided for this batch system" + ) + job_name = "case.run" + prereq_name = "prereq_test" + batch_commands = case.submit_jobs( + prereq=prereq_name, job=job_name, skip_pnl=True, dry_run=True + ) + self.assertTrue( + isinstance(batch_commands, collectionsAbc.Sequence), + "case.submit_jobs did not return a sequence for a dry run", + ) + self.assertTrue( + len(batch_commands) > 0, + "case.submit_jobs did not return any job submission string", + ) + # The first element in the internal sequence should just be the job name + # The second one (batch_cmd_index) should be the actual batch submission command + batch_cmd_index = 1 + # The prerequisite should be applied to all jobs, though we're only expecting one + for batch_cmd in batch_commands: + self.assertTrue( + isinstance(batch_cmd, collectionsAbc.Sequence), + "case.submit_jobs did not return a sequence of sequences", + ) + self.assertTrue( + len(batch_cmd) > batch_cmd_index, + "case.submit_jobs returned internal sequences with length <= {}".format( + batch_cmd_index + ), + ) + self.assertTrue( + isinstance(batch_cmd[1], str), + "case.submit_jobs returned internal sequences without the batch command string as the second parameter: {}".format( + batch_cmd[1] + ), + ) + batch_cmd_args = batch_cmd[1] + + jobid_ident = "jobid" + dep_str_fmt = case.get_env("batch").get_value( + "depend_string", subgroup=None + ) + self.assertTrue( + jobid_ident in dep_str_fmt, + "dependency string doesn't include the jobid identifier {}".format( + jobid_ident + ), + ) + dep_str = dep_str_fmt[: dep_str_fmt.index(jobid_ident)] + + prereq_substr = None + while dep_str in batch_cmd_args: + dep_id_pos = batch_cmd_args.find(dep_str) + len(dep_str) + batch_cmd_args = 
batch_cmd_args[dep_id_pos:] + prereq_substr = batch_cmd_args[: len(prereq_name)] + if prereq_substr == prereq_name: + break + + self.assertTrue( + prereq_name in prereq_substr, + "Dependencies added, but not the user specified one", + )
+ + +
+[docs] + def test_cime_case_allow_failed_prereq(self): + testcase_name = "allow_failed_prereq_test" + testdir = self._batch_test_fixture(testcase_name) + with Case(testdir, read_only=False) as case: + depend_allow = case.get_value("depend_allow_string") + if depend_allow is None: + self.skipTest( + "Skipping allow_failed_prereq test, depend_allow_string was not provided for this batch system" + ) + job_name = "case.run" + prereq_name = "prereq_allow_fail_test" + depend_allow = depend_allow.replace("jobid", prereq_name) + batch_commands = case.submit_jobs( + prereq=prereq_name, + allow_fail=True, + job=job_name, + skip_pnl=True, + dry_run=True, + ) + self.assertTrue( + isinstance(batch_commands, collectionsAbc.Sequence), + "case.submit_jobs did not return a sequence for a dry run", + ) + num_submissions = 1 + if case.get_value("DOUT_S"): + num_submissions = 2 + self.assertTrue( + len(batch_commands) == num_submissions, + "case.submit_jobs did not return any job submission strings", + ) + self.assertTrue(depend_allow in batch_commands[0][1])
+ + +
+[docs] + def test_cime_case_resubmit_immediate(self): + testcase_name = "resubmit_immediate_test" + testdir = self._batch_test_fixture(testcase_name) + with Case(testdir, read_only=False) as case: + depend_string = case.get_value("depend_string") + if depend_string is None: + self.skipTest( + "Skipping resubmit_immediate test, depend_string was not provided for this batch system" + ) + depend_string = re.sub("jobid.*$", "", depend_string) + job_name = "case.run" + num_submissions = 6 + case.set_value("RESUBMIT", num_submissions - 1) + batch_commands = case.submit_jobs( + job=job_name, skip_pnl=True, dry_run=True, resubmit_immediate=True + ) + self.assertTrue( + isinstance(batch_commands, collectionsAbc.Sequence), + "case.submit_jobs did not return a sequence for a dry run", + ) + if case.get_value("DOUT_S"): + num_submissions = 12 + self.assertTrue( + len(batch_commands) == num_submissions, + "case.submit_jobs did not return {} submitted jobs".format( + num_submissions + ), + ) + for i, cmd in enumerate(batch_commands): + if i > 0: + self.assertTrue(depend_string in cmd[1])
+ + +
+[docs] + def test_cime_case_st_archive_resubmit(self): + testcase_name = "st_archive_resubmit_test" + testdir = self._batch_test_fixture(testcase_name) + with Case(testdir, read_only=False) as case: + case.case_setup(clean=False, test_mode=False, reset=True) + orig_resubmit = 2 + case.set_value("RESUBMIT", orig_resubmit) + case.case_st_archive(resubmit=False) + new_resubmit = case.get_value("RESUBMIT") + self.assertTrue( + orig_resubmit == new_resubmit, "st_archive resubmitted when told not to" + ) + case.case_st_archive(resubmit=True) + new_resubmit = case.get_value("RESUBMIT") + self.assertTrue( + (orig_resubmit - 1) == new_resubmit, + "st_archive did not resubmit when told to", + )
+ + +
+[docs] + def test_cime_case_build_threaded_1(self): + casedir = self._create_test( + ["--no-build", "TESTRUNPASS_P1x1.f19_g16_rx1.A"], + test_id=self._baseline_name, + ) + + with Case(casedir, read_only=False) as case: + build_threaded = case.get_value("SMP_PRESENT") + self.assertFalse(build_threaded) + + build_threaded = case.get_build_threaded() + self.assertFalse(build_threaded) + + case.set_value("FORCE_BUILD_SMP", True) + + build_threaded = case.get_build_threaded() + self.assertTrue(build_threaded)
+ + +
+[docs] + def test_cime_case_build_threaded_2(self): + casedir = self._create_test( + ["--no-build", "TESTRUNPASS_P1x2.f19_g16_rx1.A"], + test_id=self._baseline_name, + ) + + with Case(casedir, read_only=False) as case: + build_threaded = case.get_value("SMP_PRESENT") + self.assertTrue(build_threaded) + + build_threaded = case.get_build_threaded() + self.assertTrue(build_threaded)
+ + +
+[docs] + def test_cime_case_mpi_serial(self): + casedir = self._create_test( + ["--no-build", "TESTRUNPASS_Mmpi-serial_P10.f19_g16_rx1.A"], + test_id=self._baseline_name, + ) + + with Case(casedir, read_only=True) as case: + + # Serial cases should not be using pnetcdf + self.assertEqual(case.get_value("CPL_PIO_TYPENAME"), "netcdf") + + # Serial cases should be using 1 task + self.assertEqual(case.get_value("TOTALPES"), 1) + + self.assertEqual(case.get_value("NTASKS_CPL"), 1)
+ + +
+[docs] + def test_cime_case_force_pecount(self): + casedir = self._create_test( + [ + "--no-build", + "--force-procs=16", + "--force-threads=8", + "TESTRUNPASS.f19_g16_rx1.A", + ], + test_id=self._baseline_name, + ) + + with Case(casedir, read_only=True) as case: + self.assertEqual(case.get_value("NTASKS_CPL"), 16) + + self.assertEqual(case.get_value("NTHRDS_CPL"), 8)
+ + +
+[docs] + def test_cime_case_xmlchange_append(self): + casedir = self._create_test( + ["--no-build", "TESTRUNPASS_P1x1.f19_g16_rx1.A"], + test_id=self._baseline_name, + ) + + self.run_cmd_assert_result( + "./xmlchange --id PIO_CONFIG_OPTS --val='-opt1'", from_dir=casedir + ) + result = self.run_cmd_assert_result( + "./xmlquery --value PIO_CONFIG_OPTS", from_dir=casedir + ) + self.assertEqual(result, "-opt1") + + self.run_cmd_assert_result( + "./xmlchange --id PIO_CONFIG_OPTS --val='-opt2' --append", from_dir=casedir + ) + result = self.run_cmd_assert_result( + "./xmlquery --value PIO_CONFIG_OPTS", from_dir=casedir + ) + self.assertEqual(result, "-opt1 -opt2")
+ + +
+[docs] + def test_cime_case_test_walltime_mgmt_1(self): + if self._config.test_mode == "cesm": + self.skipTest("Skipping walltime test. Depends on E3SM batch settings") + + test_name = "ERS.f19_g16_rx1.A" + casedir = self._create_test( + ["--no-setup", "--machine=blues", "--non-local", test_name], + test_id=self._baseline_name, + env_changes="unset CIME_GLOBAL_WALLTIME &&", + ) + + result = self.run_cmd_assert_result( + "./xmlquery JOB_WALLCLOCK_TIME -N --subgroup=case.test --value", + from_dir=casedir, + ) + self.assertEqual(result, "00:10:00") + + result = self.run_cmd_assert_result( + "./xmlquery JOB_QUEUE -N --subgroup=case.test --value", from_dir=casedir + ) + self.assertEqual(result, "biggpu")
+ + +
+[docs] + def test_cime_case_test_walltime_mgmt_2(self): + if self._config.test_mode == "cesm": + self.skipTest("Skipping walltime test. Depends on E3SM batch settings") + + test_name = "ERS_P64.f19_g16_rx1.A" + casedir = self._create_test( + ["--no-setup", "--machine=blues", "--non-local", test_name], + test_id=self._baseline_name, + env_changes="unset CIME_GLOBAL_WALLTIME &&", + ) + + result = self.run_cmd_assert_result( + "./xmlquery JOB_WALLCLOCK_TIME -N --subgroup=case.test --value", + from_dir=casedir, + ) + self.assertEqual(result, "01:00:00") + + result = self.run_cmd_assert_result( + "./xmlquery JOB_QUEUE -N --subgroup=case.test --value", from_dir=casedir + ) + self.assertEqual(result, "biggpu")
+ + +
+[docs] + def test_cime_case_test_walltime_mgmt_3(self): + if self._config.test_mode == "cesm": + self.skipTest("Skipping walltime test. Depends on E3SM batch settings") + + test_name = "ERS_P64.f19_g16_rx1.A" + casedir = self._create_test( + [ + "--no-setup", + "--machine=blues", + "--non-local", + "--walltime=0:10:00", + test_name, + ], + test_id=self._baseline_name, + env_changes="unset CIME_GLOBAL_WALLTIME &&", + ) + + result = self.run_cmd_assert_result( + "./xmlquery JOB_WALLCLOCK_TIME -N --subgroup=case.test --value", + from_dir=casedir, + ) + self.assertEqual(result, "00:10:00") + + result = self.run_cmd_assert_result( + "./xmlquery JOB_QUEUE -N --subgroup=case.test --value", from_dir=casedir + ) + self.assertEqual(result, "biggpu") # Not smart enough to select faster queue
+ + +
+[docs] + def test_cime_case_test_walltime_mgmt_4(self): + if self._config.test_mode == "cesm": + self.skipTest("Skipping walltime test. Depends on E3SM batch settings") + + test_name = "ERS_P1.f19_g16_rx1.A" + casedir = self._create_test( + [ + "--no-setup", + "--machine=blues", + "--non-local", + "--walltime=2:00:00", + test_name, + ], + test_id=self._baseline_name, + env_changes="unset CIME_GLOBAL_WALLTIME &&", + ) + + result = self.run_cmd_assert_result( + "./xmlquery JOB_WALLCLOCK_TIME -N --subgroup=case.test --value", + from_dir=casedir, + ) + self.assertEqual(result, "01:00:00") + + result = self.run_cmd_assert_result( + "./xmlquery JOB_QUEUE -N --subgroup=case.test --value", from_dir=casedir + ) + self.assertEqual(result, "biggpu")
+ + +
+[docs] + def test_cime_case_test_walltime_mgmt_5(self): + if self._config.test_mode == "cesm": + self.skipTest("Skipping walltime test. Depends on E3SM batch settings") + + test_name = "ERS_P1.f19_g16_rx1.A" + casedir = self._create_test( + ["--no-setup", "--machine=blues", "--non-local", test_name], + test_id=self._baseline_name, + env_changes="unset CIME_GLOBAL_WALLTIME &&", + ) + + self.run_cmd_assert_result( + "./xmlchange JOB_QUEUE=slartibartfast -N --subgroup=case.test", + from_dir=casedir, + expected_stat=1, + ) + + self.run_cmd_assert_result( + "./xmlchange JOB_QUEUE=slartibartfast -N --force --subgroup=case.test", + from_dir=casedir, + ) + + result = self.run_cmd_assert_result( + "./xmlquery JOB_WALLCLOCK_TIME -N --subgroup=case.test --value", + from_dir=casedir, + ) + self.assertEqual(result, "01:00:00") + + result = self.run_cmd_assert_result( + "./xmlquery JOB_QUEUE -N --subgroup=case.test --value", from_dir=casedir + ) + self.assertEqual(result, "slartibartfast")
+ + +
+[docs] + def test_cime_case_test_walltime_mgmt_6(self): + if not self._hasbatch: + self.skipTest("Skipping walltime test. Depends on batch system") + + test_name = "ERS_P1.f19_g16_rx1.A" + casedir = self._create_test( + ["--no-build", test_name], + test_id=self._baseline_name, + env_changes="unset CIME_GLOBAL_WALLTIME &&", + ) + + self.run_cmd_assert_result( + "./xmlchange JOB_WALLCLOCK_TIME=421:32:11 --subgroup=case.test", + from_dir=casedir, + ) + + self.run_cmd_assert_result("./case.setup --reset", from_dir=casedir) + + result = self.run_cmd_assert_result( + "./xmlquery JOB_WALLCLOCK_TIME --subgroup=case.test --value", + from_dir=casedir, + ) + with Case(casedir) as case: + walltime_format = case.get_value("walltime_format", subgroup=None) + if walltime_format is not None and walltime_format.count(":") == 1: + self.assertEqual(result, "421:32") + else: + self.assertEqual(result, "421:32:11")
+ + +
+[docs] + def test_cime_case_test_walltime_mgmt_7(self): + if not self._hasbatch: + self.skipTest("Skipping walltime test. Depends on batch system") + + test_name = "ERS_P1.f19_g16_rx1.A" + casedir = self._create_test( + ["--no-build", "--walltime=01:00:00", test_name], + test_id=self._baseline_name, + env_changes="unset CIME_GLOBAL_WALLTIME &&", + ) + + self.run_cmd_assert_result( + "./xmlchange JOB_WALLCLOCK_TIME=421:32:11 --subgroup=case.test", + from_dir=casedir, + ) + + self.run_cmd_assert_result("./case.setup --reset", from_dir=casedir) + + result = self.run_cmd_assert_result( + "./xmlquery JOB_WALLCLOCK_TIME --subgroup=case.test --value", + from_dir=casedir, + ) + with Case(casedir) as case: + walltime_format = case.get_value("walltime_format", subgroup=None) + if walltime_format is not None and walltime_format.count(":") == 1: + self.assertEqual(result, "421:32") + else: + self.assertEqual(result, "421:32:11")
+ + +
+[docs] + def test_cime_case_test_walltime_mgmt_8(self): + if self._config.test_mode == "cesm": + self.skipTest("Skipping walltime test. Depends on E3SM batch settings") + + test_name = "SMS_P25600.f19_g16_rx1.A" + machine, compiler = "theta", "gnu" + casedir = self._create_test( + [ + "--no-setup", + "--non-local", + "--machine={}".format(machine), + "--compiler={}".format(compiler), + "--project e3sm", + test_name, + ], + test_id=self._baseline_name, + env_changes="unset CIME_GLOBAL_WALLTIME &&", + ) + + result = self.run_cmd_assert_result( + "./xmlquery JOB_WALLCLOCK_TIME -N --subgroup=case.test --value", + from_dir=casedir, + ) + self.assertEqual(result, "09:00:00") + + result = self.run_cmd_assert_result( + "./xmlquery JOB_QUEUE -N --subgroup=case.test --value", from_dir=casedir + ) + self.assertEqual(result, "default")
+ + +
+[docs] + def test_cime_case_test_custom_project(self): + test_name = "ERS_P1.f19_g16_rx1.A" + # have to use a machine both models know and one that doesn't put PROJECT in any key paths + machine = self._config.test_custom_project_machine + compiler = "gnu" + casedir = self._create_test( + [ + "--no-setup", + "--machine={}".format(machine), + "--compiler={}".format(compiler), + "--project=testproj", + test_name, + "--mpilib=mpi-serial", + "--non-local", + ], + test_id=self._baseline_name, + env_changes="unset CIME_GLOBAL_WALLTIME &&", + ) + + result = self.run_cmd_assert_result( + "./xmlquery --non-local --value PROJECT --subgroup=case.test", + from_dir=casedir, + ) + self.assertEqual(result, "testproj")
+ + +
+[docs] + def test_create_test_longname(self): + self._create_test( + ["SMS.f19_g16.2000_SATM_XLND_SICE_SOCN_XROF_XGLC_SWAV", "--no-build"] + )
+ + +
+[docs] + def test_env_loading(self): + if self._machine != "mappy": + self.skipTest("Skipping env load test - Only works on mappy") + + casedir = self._create_test( + ["--no-build", "TESTRUNPASS.f19_g16_rx1.A"], test_id=self._baseline_name + ) + + with Case(casedir, read_only=True) as case: + env_mach = case.get_env("mach_specific") + orig_env = dict(os.environ) + + env_mach.load_env(case) + module_env = dict(os.environ) + + os.environ.clear() + os.environ.update(orig_env) + + env_mach.load_env(case, force_method="generic") + generic_env = dict(os.environ) + + os.environ.clear() + os.environ.update(orig_env) + + problems = "" + for mkey, mval in module_env.items(): + if mkey not in generic_env: + if not mkey.startswith("PS") and mkey != "OLDPWD": + problems += "Generic missing key: {}\n".format(mkey) + elif ( + mval != generic_env[mkey] + and mkey not in ["_", "SHLVL", "PWD"] + and not mkey.endswith("()") + ): + problems += "Value mismatch for key {}: {} != {}\n".format( + mkey, repr(mval), repr(generic_env[mkey]) + ) + + for gkey in generic_env.keys(): + if gkey not in module_env: + problems += "Modules missing key: {}\n".format(gkey) + + self.assertEqual(problems, "", msg=problems)
+ + +
+[docs] + def test_case_submit_interface(self): + # the current directory may not exist, so make sure we are in a real directory + os.chdir(os.getenv("HOME")) + sys.path.append(self.TOOLS_DIR) + case_submit_path = os.path.join(self.TOOLS_DIR, "case.submit") + + module = utils.import_from_file("case.submit", case_submit_path) + + sys.argv = [ + "case.submit", + "--batch-args", + "'random_arguments_here.%j'", + "--mail-type", + "fail", + "--mail-user", + "'random_arguments_here.%j'", + ] + module._main_func(None, True)
+ + +
+[docs] + def test_xml_caching(self): + casedir = self._create_test( + ["--no-build", "TESTRUNPASS.f19_g16_rx1.A"], test_id=self._baseline_name + ) + + active = os.path.join(casedir, "env_run.xml") + backup = os.path.join(casedir, "env_run.xml.bak") + + utils.safe_copy(active, backup) + + with Case(casedir, read_only=False) as case: + env_run = EnvRun(casedir, read_only=True) + self.assertEqual(case.get_value("RUN_TYPE"), "startup") + case.set_value("RUN_TYPE", "branch") + self.assertEqual(case.get_value("RUN_TYPE"), "branch") + self.assertEqual(env_run.get_value("RUN_TYPE"), "branch") + + with Case(casedir) as case: + self.assertEqual(case.get_value("RUN_TYPE"), "branch") + + time.sleep(0.2) + utils.safe_copy(backup, active) + + with Case(casedir, read_only=False) as case: + self.assertEqual(case.get_value("RUN_TYPE"), "startup") + case.set_value("RUN_TYPE", "branch") + + with Case(casedir, read_only=False) as case: + self.assertEqual(case.get_value("RUN_TYPE"), "branch") + time.sleep(0.2) + utils.safe_copy(backup, active) + case.read_xml() # Manual re-sync + self.assertEqual(case.get_value("RUN_TYPE"), "startup") + case.set_value("RUN_TYPE", "branch") + self.assertEqual(case.get_value("RUN_TYPE"), "branch") + + with Case(casedir) as case: + self.assertEqual(case.get_value("RUN_TYPE"), "branch") + time.sleep(0.2) + utils.safe_copy(backup, active) + env_run = EnvRun(casedir, read_only=True) + self.assertEqual(env_run.get_value("RUN_TYPE"), "startup") + + with Case(casedir, read_only=False) as case: + self.assertEqual(case.get_value("RUN_TYPE"), "startup") + case.set_value("RUN_TYPE", "branch") + + # behind the back detection. 
+ with self.assertRaises(utils.CIMEError): + with Case(casedir, read_only=False) as case: + case.set_value("RUN_TYPE", "startup") + time.sleep(0.2) + utils.safe_copy(backup, active) + + with Case(casedir, read_only=False) as case: + case.set_value("RUN_TYPE", "branch") + + # If there are no modifications within CIME, the files should not be written, + # and therefore no timestamp check + with Case(casedir) as case: + time.sleep(0.2) + utils.safe_copy(backup, active)
+ + +
+[docs] + def test_configure(self): + testname = "SMS.f09_g16.X" + casedir = self._create_test( + [testname, "--no-build"], test_id=self._baseline_name + ) + + manual_config_dir = os.path.join(casedir, "manual_config") + os.mkdir(manual_config_dir) + + utils.run_cmd_no_fail( + "{} --machine={} --compiler={}".format( + os.path.join(utils.get_cime_root(), "CIME", "scripts", "configure"), + self._machine, + self._compiler, + ), + from_dir=manual_config_dir, + ) + + with open(os.path.join(casedir, "env_mach_specific.xml"), "r") as fd: + case_env_contents = fd.read() + + with open(os.path.join(manual_config_dir, "env_mach_specific.xml"), "r") as fd: + man_env_contents = fd.read() + + self.assertEqual(case_env_contents, man_env_contents)
+ + +
+[docs] + def test_self_build_cprnc(self): + if self.NO_FORTRAN_RUN: + self.skipTest("Skipping fortran test") + if self.TEST_COMPILER and "gpu" in self.TEST_COMPILER: + self.skipTest("Skipping cprnc test for gpu compiler") + + testname = "ERS_Ln7.f19_g16_rx1.A" + casedir = self._create_test( + [testname, "--no-build"], test_id=self._baseline_name + ) + + self.run_cmd_assert_result( + "./xmlchange CCSM_CPRNC=this_is_a_broken_cprnc", from_dir=casedir + ) + self.run_cmd_assert_result("./case.build", from_dir=casedir) + self.run_cmd_assert_result("./case.submit", from_dir=casedir) + + self._wait_for_tests(self._baseline_name, always_wait=True)
+ + +
+[docs] + def test_case_clean(self): + testname = "ERS_Ln7.f19_g16_rx1.A" + casedir = self._create_test( + [testname, "--no-build"], test_id=self._baseline_name + ) + + self.run_cmd_assert_result("./case.setup --clean", from_dir=casedir) + self.run_cmd_assert_result("./case.setup --clean", from_dir=casedir) + self.run_cmd_assert_result("./case.setup", from_dir=casedir)
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_cime_performance.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_cime_performance.html new file mode 100644 index 00000000000..d79e442e1fd --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_cime_performance.html @@ -0,0 +1,146 @@ + + + + + + CIME.tests.test_sys_cime_performance — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_sys_cime_performance

+#!/usr/bin/env python3
+
+import time
+
+from CIME.tests import base
+
+
+
+[docs] +class TestCimePerformance(base.BaseTestCase): +
+[docs] + def test_cime_case_ctrl_performance(self): + + ts = time.time() + + num_repeat = 5 + for _ in range(num_repeat): + self._create_test(["cime_tiny", "--no-build"]) + + elapsed = time.time() - ts + + print("Perf test result: {:0.2f}".format(elapsed))
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_create_newcase.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_create_newcase.html new file mode 100644 index 00000000000..c741f959580 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_create_newcase.html @@ -0,0 +1,1052 @@ + + + + + + CIME.tests.test_sys_create_newcase — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_sys_create_newcase

+#!/usr/bin/env python3
+
+import filecmp
+import os
+import re
+import shutil
+import sys
+
+from CIME import utils
+from CIME.tests import base
+from CIME.case.case import Case
+from CIME.build import CmakeTmpBuildDir
+
+
+
+[docs] +class TestCreateNewcase(base.BaseTestCase): +
+[docs] + @classmethod + def setUpClass(cls): + cls._testdirs = [] + cls._do_teardown = [] + cls._testroot = os.path.join(cls.TEST_ROOT, "TestCreateNewcase") + cls._root_dir = os.getcwd()
+ + +
+[docs] + def tearDown(self): + cls = self.__class__ + os.chdir(cls._root_dir)
+ + +
+[docs] + def test_a_createnewcase(self): + cls = self.__class__ + + testdir = os.path.join(cls._testroot, "testcreatenewcase") + if os.path.exists(testdir): + shutil.rmtree(testdir) + args = " --case %s --compset X --output-root %s --handle-preexisting-dirs=r" % ( + testdir, + cls._testroot, + ) + if self._config.allow_unsupported: + args += " --run-unsupported" + if self.TEST_COMPILER is not None: + args = args + " --compiler %s" % self.TEST_COMPILER + if self.TEST_MPILIB is not None: + args = args + " --mpilib %s" % self.TEST_MPILIB + if utils.get_cime_default_driver() == "nuopc": + args += " --res f19_g17 " + else: + args += " --res f19_g16 " + + args += f" --machine {self.MACHINE.get_machine_name()}" + + cls._testdirs.append(testdir) + self.run_cmd_assert_result( + "./create_newcase %s" % (args), from_dir=self.SCRIPT_DIR + ) + self.assertTrue(os.path.exists(testdir)) + self.assertTrue(os.path.exists(os.path.join(testdir, "case.setup"))) + + self.run_cmd_assert_result("./case.setup", from_dir=testdir) + self.run_cmd_assert_result("./case.build", from_dir=testdir) + + with Case(testdir, read_only=False) as case: + ntasks = case.get_value("NTASKS_ATM") + case.set_value("NTASKS_ATM", ntasks + 1) + + # this should fail with a locked file issue + self.run_cmd_assert_result("./case.build", from_dir=testdir, expected_stat=1) + + self.run_cmd_assert_result("./case.setup --reset", from_dir=testdir) + self.run_cmd_assert_result("./case.build", from_dir=testdir) + with Case(testdir, read_only=False) as case: + case.set_value("CHARGE_ACCOUNT", "fred") + # to be used in next test + batch_system = case.get_value("BATCH_SYSTEM") + + # on systems (like github workflow) that do not have batch, set this for the next test + if batch_system == "none": + self.run_cmd_assert_result( + './xmlchange --subgroup case.run BATCH_COMMAND_FLAGS="-q \$JOB_QUEUE"', + from_dir=testdir, + ) + + # this should not fail with a locked file issue + self.run_cmd_assert_result("./case.build", 
from_dir=testdir) + + self.run_cmd_assert_result("./case.st_archive --test-all", from_dir=testdir) + + with Case(testdir, read_only=False) as case: + batch_command = case.get_value("BATCH_COMMAND_FLAGS", subgroup="case.run") + + self.run_cmd_assert_result( + './xmlchange --append --subgroup case.run BATCH_COMMAND_FLAGS="-l trythis"', + from_dir=testdir, + ) + # Test that changes to BATCH_COMMAND_FLAGS work + with Case(testdir, read_only=False) as case: + new_batch_command = case.get_value( + "BATCH_COMMAND_FLAGS", subgroup="case.run" + ) + + self.assertTrue( + new_batch_command == batch_command + " -l trythis", + msg=f"Failed to correctly append BATCH_COMMAND_FLAGS {new_batch_command} {batch_command}#", + ) + + self.run_cmd_assert_result( + "./xmlchange JOB_QUEUE=fred --subgroup case.run --force", from_dir=testdir + ) + + with Case(testdir, read_only=False) as case: + new_batch_command = case.get_value( + "BATCH_COMMAND_FLAGS", subgroup="case.run" + ) + self.assertTrue( + "fred" in new_batch_command, + msg="Failed to update JOB_QUEUE in BATCH_COMMAND_FLAGS", + ) + + # Trying to set values outside of context manager should fail + case = Case(testdir, read_only=False) + with self.assertRaises(utils.CIMEError): + case.set_value("NTASKS_ATM", 42) + + # Trying to read_xml with pending changes should fail + with self.assertRaises(utils.CIMEError): + with Case(testdir, read_only=False) as case: + case.set_value("CHARGE_ACCOUNT", "fouc") + case.read_xml() + + cls._do_teardown.append(testdir)
+ + +
+[docs] + def test_aa_no_flush_on_instantiate(self): + testdir = os.path.join(self.__class__._testroot, "testcreatenewcase") + with Case(testdir, read_only=False) as case: + for env_file in case._files: + self.assertFalse( + env_file.needsrewrite, + msg="Instantiating a case should not trigger a flush call", + ) + + with Case(testdir, read_only=False) as case: + case.set_value("HIST_OPTION", "nyears") + runfile = case.get_env("run") + self.assertTrue( + runfile.needsrewrite, msg="Expected flush call not triggered" + ) + for env_file in case._files: + if env_file != runfile: + self.assertFalse( + env_file.needsrewrite, + msg="Unexpected flush triggered for file {}".format( + env_file.filename + ), + ) + # Flush the file + runfile.write() + # set it again to the same value + case.set_value("HIST_OPTION", "nyears") + # now the file should not need to be flushed + for env_file in case._files: + self.assertFalse( + env_file.needsrewrite, + msg="Unexpected flush triggered for file {}".format( + env_file.filename + ), + ) + + # Check once more with a new instance + with Case(testdir, read_only=False) as case: + case.set_value("HIST_OPTION", "nyears") + for env_file in case._files: + self.assertFalse( + env_file.needsrewrite, + msg="Unexpected flush triggered for file {}".format( + env_file.filename + ), + )
+ + +
+[docs] + def test_b_user_mods(self): + cls = self.__class__ + + testdir = os.path.join(cls._testroot, "testusermods") + if os.path.exists(testdir): + shutil.rmtree(testdir) + + cls._testdirs.append(testdir) + + user_mods_dir = os.path.join(os.path.dirname(__file__), "user_mods_test1") + args = ( + " --case %s --compset X --user-mods-dir %s --output-root %s --handle-preexisting-dirs=r" + % (testdir, user_mods_dir, cls._testroot) + ) + if self._config.allow_unsupported: + args += " --run-unsupported" + if self.TEST_COMPILER is not None: + args = args + " --compiler %s" % self.TEST_COMPILER + if self.TEST_MPILIB is not None: + args = args + " --mpilib %s" % self.TEST_MPILIB + if utils.get_cime_default_driver() == "nuopc": + args += " --res f19_g17 " + else: + args += " --res f19_g16 " + + args += f" --machine {self.MACHINE.get_machine_name()}" + + self.run_cmd_assert_result( + "%s/create_newcase %s " % (self.SCRIPT_DIR, args), from_dir=self.SCRIPT_DIR + ) + + self.assertTrue( + os.path.isfile( + os.path.join(testdir, "SourceMods", "src.drv", "somefile.F90") + ), + msg="User_mods SourceMod missing", + ) + + with open(os.path.join(testdir, "user_nl_cpl"), "r") as fd: + contents = fd.read() + self.assertTrue( + "a different cpl test option" in contents, + msg="User_mods contents of user_nl_cpl missing", + ) + self.assertTrue( + "a cpl namelist option" in contents, + msg="User_mods contents of user_nl_cpl missing", + ) + cls._do_teardown.append(testdir)
+ + +
+[docs] + def test_c_create_clone_keepexe(self): + cls = self.__class__ + + testdir = os.path.join(cls._testroot, "test_create_clone_keepexe") + if os.path.exists(testdir): + shutil.rmtree(testdir) + prevtestdir = cls._testdirs[0] + user_mods_dir = os.path.join(os.path.dirname(__file__), "user_mods_test3") + + cmd = "%s/create_clone --clone %s --case %s --keepexe --user-mods-dir %s" % ( + self.SCRIPT_DIR, + prevtestdir, + testdir, + user_mods_dir, + ) + self.run_cmd_assert_result(cmd, from_dir=self.SCRIPT_DIR, expected_stat=1) + cls._do_teardown.append(testdir)
+ + +
+[docs] + def test_d_create_clone_new_user(self): + cls = self.__class__ + + testdir = os.path.join(cls._testroot, "test_create_clone_new_user") + if os.path.exists(testdir): + shutil.rmtree(testdir) + prevtestdir = cls._testdirs[0] + cls._testdirs.append(testdir) + # change the USER and CIME_OUTPUT_ROOT to nonsense values + # this is intended as a test of whether create_clone is independent of user + self.run_cmd_assert_result( + "./xmlchange USER=this_is_not_a_user", from_dir=prevtestdir + ) + + fakeoutputroot = cls._testroot.replace( + os.environ.get("USER"), "this_is_not_a_user" + ) + self.run_cmd_assert_result( + "./xmlchange CIME_OUTPUT_ROOT=%s" % fakeoutputroot, from_dir=prevtestdir + ) + + # this test should pass (user name is replaced) + self.run_cmd_assert_result( + "%s/create_clone --clone %s --case %s " + % (self.SCRIPT_DIR, prevtestdir, testdir), + from_dir=self.SCRIPT_DIR, + ) + + shutil.rmtree(testdir) + # this test should pass + self.run_cmd_assert_result( + "%s/create_clone --clone %s --case %s --cime-output-root %s" + % (self.SCRIPT_DIR, prevtestdir, testdir, cls._testroot), + from_dir=self.SCRIPT_DIR, + ) + + cls._do_teardown.append(testdir)
+ + +
+[docs] + def test_dd_create_clone_not_writable(self): + cls = self.__class__ + + testdir = os.path.join(cls._testroot, "test_create_clone_not_writable") + if os.path.exists(testdir): + shutil.rmtree(testdir) + prevtestdir = cls._testdirs[0] + cls._testdirs.append(testdir) + + with Case(prevtestdir, read_only=False) as case1: + case2 = case1.create_clone(testdir) + with self.assertRaises(utils.CIMEError): + case2.set_value("CHARGE_ACCOUNT", "fouc") + cls._do_teardown.append(testdir)
+ + +
+[docs] + def test_e_xmlquery(self): + # Set script and script path + xmlquery = "./xmlquery" + cls = self.__class__ + casedir = cls._testdirs[0] + + # Check for environment + self.assertTrue(os.path.isdir(self.SCRIPT_DIR)) + self.assertTrue(os.path.isdir(self.TOOLS_DIR)) + self.assertTrue(os.path.isfile(os.path.join(casedir, xmlquery))) + + # Test command line options + with Case(casedir, read_only=True, non_local=True) as case: + STOP_N = case.get_value("STOP_N") + COMP_CLASSES = case.get_values("COMP_CLASSES") + BUILD_COMPLETE = case.get_value("BUILD_COMPLETE") + cmd = xmlquery + " --non-local STOP_N --value" + output = utils.run_cmd_no_fail(cmd, from_dir=casedir) + self.assertTrue(output == str(STOP_N), msg="%s != %s" % (output, STOP_N)) + cmd = xmlquery + " --non-local BUILD_COMPLETE --value" + output = utils.run_cmd_no_fail(cmd, from_dir=casedir) + self.assertTrue(output == "TRUE", msg="%s != %s" % (output, BUILD_COMPLETE)) + # we expect DOCN_MODE to be undefined in this X compset + # this test assures that we do not try to resolve this as a compvar + cmd = xmlquery + " --non-local DOCN_MODE --value" + _, output, error = utils.run_cmd(cmd, from_dir=casedir) + self.assertTrue( + error == "ERROR: No results found for variable DOCN_MODE", + msg="unexpected result for DOCN_MODE, output {}, error {}".format( + output, error + ), + ) + + for comp in COMP_CLASSES: + caseresult = case.get_value("NTASKS_%s" % comp) + cmd = xmlquery + " --non-local NTASKS_%s --value" % comp + output = utils.run_cmd_no_fail(cmd, from_dir=casedir) + self.assertTrue( + output == str(caseresult), msg="%s != %s" % (output, caseresult) + ) + cmd = xmlquery + " --non-local NTASKS --subgroup %s --value" % comp + output = utils.run_cmd_no_fail(cmd, from_dir=casedir) + self.assertTrue( + output == str(caseresult), msg="%s != %s" % (output, caseresult) + ) + if self.MACHINE.has_batch_system(): + JOB_QUEUE = case.get_value("JOB_QUEUE", subgroup="case.run") + cmd = xmlquery + " --non-local 
JOB_QUEUE --subgroup case.run --value" + output = utils.run_cmd_no_fail(cmd, from_dir=casedir) + self.assertTrue( + output == JOB_QUEUE, msg="%s != %s" % (output, JOB_QUEUE) + ) + + cmd = xmlquery + " --non-local --listall" + utils.run_cmd_no_fail(cmd, from_dir=casedir) + + cls._do_teardown.append(cls._testroot)
+ + +
+[docs] + def test_f_createnewcase_with_user_compset(self): + cls = self.__class__ + + testdir = os.path.join(cls._testroot, "testcreatenewcase_with_user_compset") + if os.path.exists(testdir): + shutil.rmtree(testdir) + + cls._testdirs.append(testdir) + + if self._config.test_mode == "cesm": + if utils.get_cime_default_driver() == "nuopc": + pesfile = os.path.join( + utils.get_src_root(), + "components", + "cmeps", + "cime_config", + "config_pes.xml", + ) + else: + pesfile = os.path.join( + utils.get_src_root(), + "components", + "cpl7", + "driver", + "cime_config", + "config_pes.xml", + ) + else: + pesfile = os.path.join( + utils.get_src_root(), "driver-mct", "cime_config", "config_pes.xml" + ) + + args = ( + "--case %s --compset 2000_SATM_XLND_SICE_SOCN_XROF_XGLC_SWAV --pesfile %s --res f19_g16 --output-root %s --handle-preexisting-dirs=r" + % (testdir, pesfile, cls._testroot) + ) + if self._config.allow_unsupported: + args += " --run-unsupported" + if self.TEST_COMPILER is not None: + args += " --compiler %s" % self.TEST_COMPILER + if self.TEST_MPILIB is not None: + args = args + " --mpilib %s" % self.TEST_MPILIB + + args += f" --machine {self.MACHINE.get_machine_name()}" + + self.run_cmd_assert_result( + "%s/create_newcase %s" % (self.SCRIPT_DIR, args), from_dir=self.SCRIPT_DIR + ) + self.run_cmd_assert_result("./case.setup", from_dir=testdir) + self.run_cmd_assert_result("./case.build", from_dir=testdir) + + cls._do_teardown.append(testdir)
+ + +
+[docs] + def test_g_createnewcase_with_user_compset_and_env_mach_pes(self): + cls = self.__class__ + + testdir = os.path.join( + cls._testroot, "testcreatenewcase_with_user_compset_and_env_mach_pes" + ) + if os.path.exists(testdir): + shutil.rmtree(testdir) + previous_testdir = cls._testdirs[-1] + cls._testdirs.append(testdir) + + pesfile = os.path.join(previous_testdir, "env_mach_pes.xml") + args = ( + "--case %s --compset 2000_SATM_XLND_SICE_SOCN_XROF_XGLC_SWAV --pesfile %s --res f19_g16 --output-root %s --handle-preexisting-dirs=r" + % (testdir, pesfile, cls._testroot) + ) + if self._config.allow_unsupported: + args += " --run-unsupported" + if self.TEST_COMPILER is not None: + args += " --compiler %s" % self.TEST_COMPILER + if self.TEST_MPILIB is not None: + args += " --mpilib %s" % self.TEST_MPILIB + + args += f" --machine {self.MACHINE.get_machine_name()}" + + self.run_cmd_assert_result( + "%s/create_newcase %s" % (self.SCRIPT_DIR, args), from_dir=self.SCRIPT_DIR + ) + self.run_cmd_assert_result( + "diff env_mach_pes.xml %s" % (previous_testdir), from_dir=testdir + ) + # this line should cause the diff to fail (I assume no machine is going to default to 17 tasks) + self.run_cmd_assert_result("./xmlchange NTASKS=17", from_dir=testdir) + self.run_cmd_assert_result( + "diff env_mach_pes.xml %s" % (previous_testdir), + from_dir=testdir, + expected_stat=1, + ) + + cls._do_teardown.append(testdir)
+ + +
+[docs] + def test_h_primary_component(self): + cls = self.__class__ + + testdir = os.path.join(cls._testroot, "testprimarycomponent") + if os.path.exists(testdir): + shutil.rmtree(testdir) + + cls._testdirs.append(testdir) + args = ( + " --case CreateNewcaseTest --script-root %s --compset X --output-root %s --handle-preexisting-dirs u" + % (testdir, cls._testroot) + ) + if self._config.allow_unsupported: + args += " --run-unsupported" + if self.TEST_COMPILER is not None: + args += " --compiler %s" % self.TEST_COMPILER + if self.TEST_MPILIB is not None: + args += " --mpilib %s" % self.TEST_MPILIB + if utils.get_cime_default_driver() == "nuopc": + args += " --res f19_g17 " + else: + args += " --res f19_g16 " + + args += f" --machine {self.MACHINE.get_machine_name()}" + + self.run_cmd_assert_result( + "%s/create_newcase %s" % (self.SCRIPT_DIR, args), from_dir=self.SCRIPT_DIR + ) + self.assertTrue(os.path.exists(testdir)) + self.assertTrue(os.path.exists(os.path.join(testdir, "case.setup"))) + + with Case(testdir, read_only=False) as case: + case._compsetname = case.get_value("COMPSET") + case.set_comp_classes(case.get_values("COMP_CLASSES")) + primary = case._find_primary_component() + self.assertEqual( + primary, + "drv", + msg="primary component test expected drv but got %s" % primary, + ) + # now we are going to corrupt the case so that we can do more primary_component testing + case.set_valid_values("COMP_GLC", "%s,fred" % case.get_value("COMP_GLC")) + case.set_value("COMP_GLC", "fred") + primary = case._find_primary_component() + self.assertEqual( + primary, + "fred", + msg="primary component test expected fred but got %s" % primary, + ) + case.set_valid_values("COMP_ICE", "%s,wilma" % case.get_value("COMP_ICE")) + case.set_value("COMP_ICE", "wilma") + primary = case._find_primary_component() + self.assertEqual( + primary, + "wilma", + msg="primary component test expected wilma but got %s" % primary, + ) + + case.set_valid_values( + "COMP_OCN", "%s,bambam,docn" 
% case.get_value("COMP_OCN") + ) + case.set_value("COMP_OCN", "bambam") + primary = case._find_primary_component() + self.assertEqual( + primary, + "bambam", + msg="primary component test expected bambam but got %s" % primary, + ) + + case.set_valid_values("COMP_LND", "%s,barney" % case.get_value("COMP_LND")) + case.set_value("COMP_LND", "barney") + primary = case._find_primary_component() + # This is a "J" compset + self.assertEqual( + primary, + "allactive", + msg="primary component test expected allactive but got %s" % primary, + ) + case.set_value("COMP_OCN", "docn") + case.set_valid_values("COMP_LND", "%s,barney" % case.get_value("COMP_LND")) + case.set_value("COMP_LND", "barney") + primary = case._find_primary_component() + self.assertEqual( + primary, + "barney", + msg="primary component test expected barney but got %s" % primary, + ) + case.set_valid_values("COMP_ATM", "%s,wilma" % case.get_value("COMP_ATM")) + case.set_value("COMP_ATM", "wilma") + primary = case._find_primary_component() + self.assertEqual( + primary, + "wilma", + msg="primary component test expected wilma but got %s" % primary, + ) + # this is a "E" compset + case._compsetname = case._compsetname.replace("XOCN", "DOCN%SOM") + primary = case._find_primary_component() + self.assertEqual( + primary, + "allactive", + msg="primary component test expected allactive but got %s" % primary, + ) + # finally a "B" compset + case.set_value("COMP_OCN", "bambam") + primary = case._find_primary_component() + self.assertEqual( + primary, + "allactive", + msg="primary component test expected allactive but got %s" % primary, + ) + + cls._do_teardown.append(testdir)
+ + +
+[docs] + def test_j_createnewcase_user_compset_vs_alias(self): + """ + Create a compset using the alias and another compset using the full compset name + and make sure they are the same by comparing the namelist files in CaseDocs. + Ignore the modelio files and clean the directory names out first. + """ + cls = self.__class__ + + testdir1 = os.path.join(cls._testroot, "testcreatenewcase_user_compset") + if os.path.exists(testdir1): + shutil.rmtree(testdir1) + cls._testdirs.append(testdir1) + + args = " --case CreateNewcaseTest --script-root {} --compset 2000_DATM%NYF_SLND_SICE_DOCN%SOMAQP_SROF_SGLC_SWAV --output-root {} --handle-preexisting-dirs u".format( + testdir1, cls._testroot + ) + if utils.get_cime_default_driver() == "nuopc": + args += " --res f19_g17 " + else: + args += " --res f19_g16 " + if self._config.allow_unsupported: + args += " --run-unsupported" + if self.TEST_COMPILER is not None: + args += " --compiler %s" % self.TEST_COMPILER + if self.TEST_MPILIB is not None: + args += " --mpilib %s" % self.TEST_MPILIB + + args += f" --machine {self.MACHINE.get_machine_name()}" + + self.run_cmd_assert_result( + "{}/create_newcase {}".format(self.SCRIPT_DIR, args), + from_dir=self.SCRIPT_DIR, + ) + self.run_cmd_assert_result("./case.setup ", from_dir=testdir1) + self.run_cmd_assert_result("./preview_namelists ", from_dir=testdir1) + + dir1 = os.path.join(testdir1, "CaseDocs") + dir2 = os.path.join(testdir1, "CleanCaseDocs") + os.mkdir(dir2) + for _file in os.listdir(dir1): + if "modelio" in _file: + continue + with open(os.path.join(dir1, _file), "r") as fi: + file_text = fi.read() + file_text = file_text.replace(os.path.basename(testdir1), "PATH") + file_text = re.sub(r"logfile =.*", "", file_text) + with open(os.path.join(dir2, _file), "w") as fo: + fo.write(file_text) + cleancasedocs1 = dir2 + + testdir2 = os.path.join(cls._testroot, "testcreatenewcase_alias_compset") + if os.path.exists(testdir2): + shutil.rmtree(testdir2) + cls._testdirs.append(testdir2) 
+ args = " --case CreateNewcaseTest --script-root {} --compset ADSOMAQP --output-root {} --handle-preexisting-dirs u".format( + testdir2, cls._testroot + ) + if utils.get_cime_default_driver() == "nuopc": + args += " --res f19_g17 " + else: + args += " --res f19_g16 " + if self._config.allow_unsupported: + args += " --run-unsupported" + if self.TEST_COMPILER is not None: + args += " --compiler %s" % self.TEST_COMPILER + if self.TEST_MPILIB is not None: + args += " --mpilib %s" % self.TEST_MPILIB + + args += f" --machine {self.MACHINE.get_machine_name()}" + + self.run_cmd_assert_result( + "{}/create_newcase {}".format(self.SCRIPT_DIR, args), + from_dir=self.SCRIPT_DIR, + ) + self.run_cmd_assert_result("./case.setup ", from_dir=testdir2) + self.run_cmd_assert_result("./preview_namelists ", from_dir=testdir2) + + dir1 = os.path.join(testdir2, "CaseDocs") + dir2 = os.path.join(testdir2, "CleanCaseDocs") + os.mkdir(dir2) + for _file in os.listdir(dir1): + if "modelio" in _file: + continue + with open(os.path.join(dir1, _file), "r") as fi: + file_text = fi.read() + file_text = file_text.replace(os.path.basename(testdir2), "PATH") + file_text = re.sub(r"logfile =.*", "", file_text) + with open(os.path.join(dir2, _file), "w") as fo: + fo.write(file_text) + + cleancasedocs2 = dir2 + dcmp = filecmp.dircmp(cleancasedocs1, cleancasedocs2) + self.assertTrue( + len(dcmp.diff_files) == 0, "CaseDocs differ {}".format(dcmp.diff_files) + ) + + cls._do_teardown.append(testdir1) + cls._do_teardown.append(testdir2)
+ + +
+[docs] + def test_k_append_config(self): + machlist_before = self.MACHINE.list_available_machines() + self.assertEqual( + len(machlist_before) > 1, True, msg="Problem reading machine list" + ) + + newmachfile = os.path.join( + utils.get_cime_root(), + "CIME", + "data", + "config", + "xml_schemas", + "config_machines_template.xml", + ) + self.MACHINE.read(newmachfile) + machlist_after = self.MACHINE.list_available_machines() + + self.assertEqual( + len(machlist_after) - len(machlist_before), + 1, + msg="Not able to append config_machines.xml {} {}".format( + len(machlist_after), len(machlist_before) + ), + ) + self.assertEqual( + "mymachine" in machlist_after, + True, + msg="Not able to append config_machines.xml", + )
+ + +
+[docs] + def test_ka_createnewcase_extra_machines_dir(self): + # Test that we pick up changes in both config_machines.xml and + # cmake macros in a directory specified with the --extra-machines-dir + # argument to create_newcase. + cls = self.__class__ + casename = "testcreatenewcase_extra_machines_dir" + + # Setup: stage some xml files in a temporary directory + extra_machines_dir = os.path.join( + cls._testroot, "{}_machine_config".format(casename) + ) + os.makedirs(os.path.join(extra_machines_dir, "cmake_macros")) + cls._do_teardown.append(extra_machines_dir) + newmachfile = os.path.join( + utils.get_cime_root(), + "CIME", + "data", + "config", + "xml_schemas", + "config_machines_template.xml", + ) + utils.safe_copy( + newmachfile, os.path.join(extra_machines_dir, "config_machines.xml") + ) + cmake_macro_text = """\ +set(NETCDF_PATH /my/netcdf/path) +""" + cmake_macro_path = os.path.join( + extra_machines_dir, "cmake_macros", "mymachine.cmake" + ) + with open(cmake_macro_path, "w") as cmake_macro: + cmake_macro.write(cmake_macro_text) + + # Create the case + testdir = os.path.join(cls._testroot, casename) + if os.path.exists(testdir): + shutil.rmtree(testdir) + # In the following, note that 'mymachine' is the machine name defined in + # config_machines_template.xml + args = ( + " --case {testdir} --compset X --mach mymachine" + " --output-root {testroot} --non-local" + " --extra-machines-dir {extra_machines_dir}".format( + testdir=testdir, + testroot=cls._testroot, + extra_machines_dir=extra_machines_dir, + ) + ) + if self._config.allow_unsupported: + args += " --run-unsupported" + + if utils.get_cime_default_driver() == "nuopc": + args += " --res f19_g17 " + else: + args += " --res f19_g16 " + self.run_cmd_assert_result( + "./create_newcase {}".format(args), from_dir=self.SCRIPT_DIR + ) + + args += f" --machine {self.MACHINE.get_machine_name()}" + + cls._do_teardown.append(testdir) + + # Run case.setup + self.run_cmd_assert_result("./case.setup --non-local", 
from_dir=testdir) + + # Make sure Macros file contains expected text + + with Case(testdir, non_local=True) as case: + with CmakeTmpBuildDir(macroloc=testdir) as cmaketmp: + macros_contents = cmaketmp.get_makefile_vars(case=case) + + expected_re = re.compile("NETCDF_PATH.*/my/netcdf/path") + self.assertTrue( + expected_re.search(macros_contents), + msg="{} not found in:\n{}".format(expected_re.pattern, macros_contents), + )
+ + +
+[docs] + def test_m_createnewcase_alternate_drivers(self): + # Test that case.setup runs for nuopc and moab drivers + cls = self.__class__ + + # TODO refactor + if self._config.test_mode == "cesm": + alternative_driver = ("nuopc",) + else: + alternative_driver = ("moab",) + + for driver in alternative_driver: + if driver == "moab" and not os.path.exists( + os.path.join(utils.get_cime_root(), "src", "drivers", driver) + ): + self.skipTest( + "Skipping driver test for {}, driver not found".format(driver) + ) + if driver == "nuopc" and not os.path.exists( + os.path.join(utils.get_src_root(), "components", "cmeps") + ): + self.skipTest( + "Skipping driver test for {}, driver not found".format(driver) + ) + + testdir = os.path.join(cls._testroot, "testcreatenewcase.{}".format(driver)) + if os.path.exists(testdir): + shutil.rmtree(testdir) + args = " --driver {} --case {} --compset X --res f19_g16 --output-root {} --handle-preexisting-dirs=r".format( + driver, testdir, cls._testroot + ) + if self._config.allow_unsupported: + args += " --run-unsupported" + if self.TEST_COMPILER is not None: + args = args + " --compiler %s" % self.TEST_COMPILER + if self.TEST_MPILIB is not None: + args = args + " --mpilib %s" % self.TEST_MPILIB + + args += f" --machine {self.MACHINE.get_machine_name()}" + + cls._testdirs.append(testdir) + self.run_cmd_assert_result( + "./create_newcase %s" % (args), from_dir=self.SCRIPT_DIR + ) + self.assertTrue(os.path.exists(testdir)) + self.assertTrue(os.path.exists(os.path.join(testdir, "case.setup"))) + + self.run_cmd_assert_result("./case.setup", from_dir=testdir) + with Case(testdir, read_only=False) as case: + comp_interface = case.get_value("COMP_INTERFACE") + self.assertTrue( + driver == comp_interface, msg="%s != %s" % (driver, comp_interface) + ) + + cls._do_teardown.append(testdir)
+ + +
+[docs] + def test_n_createnewcase_bad_compset(self): + cls = self.__class__ + + testdir = os.path.join(cls._testroot, "testcreatenewcase_bad_compset") + if os.path.exists(testdir): + shutil.rmtree(testdir) + args = ( + " --case %s --compset InvalidCompsetName --output-root %s --handle-preexisting-dirs=r " + % (testdir, cls._testroot) + ) + if self._config.allow_unsupported: + args += " --run-unsupported" + if self.TEST_COMPILER is not None: + args = args + " --compiler %s" % self.TEST_COMPILER + if self.TEST_MPILIB is not None: + args = args + " --mpilib %s" % self.TEST_MPILIB + if utils.get_cime_default_driver() == "nuopc": + args += " --res f19_g17 " + else: + args += " --res f19_g16 " + + args += f" --machine {self.MACHINE.get_machine_name()}" + + self.run_cmd_assert_result( + "./create_newcase %s" % (args), from_dir=self.SCRIPT_DIR, expected_stat=1 + ) + self.assertFalse(os.path.exists(testdir))
+ + +
+[docs] + @classmethod + def tearDownClass(cls): + do_teardown = ( + len(cls._do_teardown) > 0 + and sys.exc_info() == (None, None, None) + and not cls.NO_TEARDOWN + ) + rmtestroot = True + for tfile in cls._testdirs: + if tfile not in cls._do_teardown: + print("Detected failed test or user request no teardown") + print("Leaving case directory : %s" % tfile) + rmtestroot = False + elif do_teardown: + try: + print("Attempt to remove directory {}".format(tfile)) + shutil.rmtree(tfile) + except BaseException: + print("Could not remove directory {}".format(tfile)) + if rmtestroot and do_teardown: + shutil.rmtree(cls._testroot)
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_full_system.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_full_system.html new file mode 100644 index 00000000000..72c307d9171 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_full_system.html @@ -0,0 +1,206 @@ + + + + + + CIME.tests.test_sys_full_system — CIME master documentation + + + + + + + + + + + + + + + + + + + +

Source code for CIME.tests.test_sys_full_system

+#!/usr/bin/env python3
+
+import os
+
+from CIME import get_tests
+from CIME import test_status
+from CIME import utils
+from CIME import wait_for_tests
+from CIME.tests import base
+
+
+
+[docs] +class TestFullSystem(base.BaseTestCase): +
+[docs] + def test_full_system(self): + # Put this inside any test that's slow + if self.FAST_ONLY: + self.skipTest("Skipping slow test") + + driver = utils.get_cime_default_driver() + if driver == "mct": + cases = self._create_test( + ["--walltime=0:15:00", "cime_developer"], test_id=self._baseline_name + ) + else: + cases = self._create_test( + ["--walltime=0:30:00", "cime_developer"], test_id=self._baseline_name + ) + + self.run_cmd_assert_result( + "%s/cs.status.%s" % (self._testroot, self._baseline_name), + from_dir=self._testroot, + ) + + # Ensure that we can get test times + for case_dir in cases: + tstatus = os.path.join(case_dir, "TestStatus") + test_time = wait_for_tests.get_test_time(os.path.dirname(tstatus)) + self.assertIs( + type(test_time), int, msg="get time did not return int for %s" % tstatus + ) + self.assertTrue(test_time > 0, msg="test time was zero for %s" % tstatus) + + # Test that re-running works + skip_tests = None + if utils.get_cime_default_driver() == "nuopc": + skip_tests = [ + "SMS_Ln3.T42_T42.S", + "PRE.f19_f19.ADESP_TEST", + "PRE.f19_f19.ADESP", + "DAE.ww3a.ADWAV", + ] + tests = get_tests.get_test_suite( + "cime_developer", + machine=self._machine, + compiler=self._compiler, + skip_tests=skip_tests, + ) + + for test in tests: + casedir = self.get_casedir(test, cases) + + # Subtle issue: The run phases of these tests will be in the PASS state until + # the submitted case.test script is run, which could take a while if the system is + # busy. This potentially leaves a window where the wait_for_tests command below will + # not wait for the re-submitted jobs to run because it sees the original PASS. + # The code below forces things back to PEND to avoid this race condition. Note + # that we must use the MEMLEAK phase, not the RUN phase, because RUN being in a non-PEND + # state is how system tests know they are being re-run and must reset certain + # case settings. 
+ if self._hasbatch: + with test_status.TestStatus(test_dir=casedir) as ts: + ts.set_status( + test_status.MEMLEAK_PHASE, test_status.TEST_PEND_STATUS + ) + + self.run_cmd_assert_result( + "./case.submit --skip-preview-namelist", from_dir=casedir + ) + + self._wait_for_tests(self._baseline_name)
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_grid_generation.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_grid_generation.html new file mode 100644 index 00000000000..012fefc117d --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_grid_generation.html @@ -0,0 +1,189 @@ + + + + + + CIME.tests.test_sys_grid_generation — CIME master documentation + + + + + + + + + + + + + + + + + + + +

Source code for CIME.tests.test_sys_grid_generation

+#!/usr/bin/env python3
+
+import os
+import shutil
+import sys
+
+from CIME import utils
+from CIME.tests import base
+
+
+
+[docs] +class TestGridGeneration(base.BaseTestCase): +
+[docs] + @classmethod + def setUpClass(cls): + cls._do_teardown = [] + cls._testroot = os.path.join(cls.TEST_ROOT, "TestGridGeneration") + cls._testdirs = []
+ + +
+[docs] + def test_gen_domain(self): + if self._config.test_mode == "cesm": + self.skipTest("Skipping gen_domain test. Depends on E3SM tools") + cime_root = utils.get_cime_root() + inputdata = self.MACHINE.get_value("DIN_LOC_ROOT") + + tool_name = "test_gen_domain" + tool_location = os.path.join( + cime_root, "tools", "mapping", "gen_domain_files", "test_gen_domain.sh" + ) + args = "--cime_root={} --inputdata_root={}".format(cime_root, inputdata) + + cls = self.__class__ + test_dir = os.path.join(cls._testroot, tool_name) + cls._testdirs.append(test_dir) + os.makedirs(test_dir) + self.run_cmd_assert_result( + "{} {}".format(tool_location, args), from_dir=test_dir + ) + cls._do_teardown.append(test_dir)
+ + +
+[docs] + @classmethod + def tearDownClass(cls): + do_teardown = ( + len(cls._do_teardown) > 0 + and sys.exc_info() == (None, None, None) + and not cls.NO_TEARDOWN + ) + teardown_root = True + for tfile in cls._testdirs: + if tfile not in cls._do_teardown: + print("Detected failed test or user request no teardown") + print("Leaving case directory : %s" % tfile) + teardown_root = False + elif do_teardown: + shutil.rmtree(tfile) + + if teardown_root and do_teardown: + shutil.rmtree(cls._testroot)
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_jenkins_generic_job.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_jenkins_generic_job.html new file mode 100644 index 00000000000..ee81555f753 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_jenkins_generic_job.html @@ -0,0 +1,332 @@ + + + + + + CIME.tests.test_sys_jenkins_generic_job — CIME master documentation + + + + + + + + + + + + + + + + + + + +

Source code for CIME.tests.test_sys_jenkins_generic_job

+#!/usr/bin/env python3
+
+import glob
+import os
+import signal
+import stat
+import threading
+import time
+
+from CIME import get_tests
+from CIME import utils
+from CIME.tests import base
+
+
+
+[docs] +class TestJenkinsGenericJob(base.BaseTestCase): +
+[docs] + def setUp(self): + super().setUp() + + if self._config.test_mode == "cesm": + self.skipTest("Skipping Jenkins tests. E3SM feature") + + # Need to run in a subdir in order to not have CTest clash. Name it + # such that it should be cleaned up by the parent tearDown + self._testdir = os.path.join( + self._testroot, "jenkins_test_%s" % self._baseline_name + ) + os.makedirs(self._testdir) + + # Change root to avoid clashing with other jenkins_generic_jobs + self._jenkins_root = os.path.join(self._testdir, "J")
+ + +
+[docs] + def tearDown(self): + super().tearDown() + + if "TESTRUNDIFF_ALTERNATE" in os.environ: + del os.environ["TESTRUNDIFF_ALTERNATE"]
+ + +
+[docs] + def simple_test(self, expect_works, extra_args, build_name=None): + if self.NO_BATCH: + extra_args += " --no-batch" + + # Need these flags to test dashboard if e3sm + if self._config.test_mode == "e3sm" and build_name is not None: + extra_args += ( + " -p ACME_test --submit-to-cdash --cdash-build-group=Nightly -c %s" + % build_name + ) + + self.run_cmd_assert_result( + "%s/jenkins_generic_job -r %s %s -B %s" + % (self.TOOLS_DIR, self._testdir, extra_args, self._baseline_area), + from_dir=self._testdir, + expected_stat=(0 if expect_works else utils.TESTS_FAILED_ERR_CODE), + shell=False, + )
+ + +
+[docs] + def threaded_test(self, expect_works, extra_args, build_name=None): + try: + self.simple_test(expect_works, extra_args, build_name) + except AssertionError as e: + self._thread_error = str(e)
+ + +
+[docs] + def assert_num_leftovers(self, suite): + num_tests_in_suite = len(get_tests.get_test_suite(suite)) + + case_glob = "%s/*%s*/" % (self._jenkins_root, self._baseline_name.capitalize()) + jenkins_dirs = glob.glob(case_glob) # Case dirs + # scratch_dirs = glob.glob("%s/*%s*/" % (self._testroot, test_id)) # blr/run dirs + + self.assertEqual( + num_tests_in_suite, + len(jenkins_dirs), + msg="Wrong number of leftover directories in %s, expected %d, see %s. Glob checked %s" + % (self._jenkins_root, num_tests_in_suite, jenkins_dirs, case_glob), + )
+ + + # JGF: Can't test this at the moment due to root change flag given to jenkins_generic_job + # self.assertEqual(num_tests_in_tiny + 1, len(scratch_dirs), + # msg="Wrong number of leftover directories in %s, expected %d, see %s" % \ + # (self._testroot, num_tests_in_tiny, scratch_dirs)) + +
+[docs] + def test_jenkins_generic_job(self): + # Generate fresh baselines so that this test is not impacted by + # unresolved diffs + self.simple_test(True, "-t cime_test_only_pass -g -b %s" % self._baseline_name) + self.assert_num_leftovers("cime_test_only_pass") + + build_name = "jenkins_generic_job_pass_%s" % utils.get_timestamp() + self.simple_test( + True, + "-t cime_test_only_pass -b %s" % self._baseline_name, + build_name=build_name, + ) + self.assert_num_leftovers( + "cime_test_only_pass" + ) # jenkins_generic_job should have automatically cleaned up leftovers from prior run + self.assert_dashboard_has_build(build_name)
+ + +
+[docs] + def test_jenkins_generic_job_save_timing(self): + self.simple_test( + True, "-t cime_test_timing --save-timing -b %s" % self._baseline_name + ) + self.assert_num_leftovers("cime_test_timing") + + jenkins_dirs = glob.glob( + "%s/*%s*/" % (self._jenkins_root, self._baseline_name.capitalize()) + ) # case dirs + case = jenkins_dirs[0] + result = self.run_cmd_assert_result( + "./xmlquery --value SAVE_TIMING", from_dir=case + ) + self.assertEqual(result, "TRUE")
+ + +
+[docs] + def test_jenkins_generic_job_kill(self): + build_name = "jenkins_generic_job_kill_%s" % utils.get_timestamp() + run_thread = threading.Thread( + target=self.threaded_test, + args=(False, " -t cime_test_only_slow_pass -b master", build_name), + ) + run_thread.daemon = True + run_thread.start() + + time.sleep(120) + + self.kill_subprocesses(sig=signal.SIGTERM) + + run_thread.join(timeout=30) + + self.assertFalse( + run_thread.is_alive(), msg="jenkins_generic_job should have finished" + ) + self.assertTrue( + self._thread_error is None, + msg="Thread had failure: %s" % self._thread_error, + ) + self.assert_dashboard_has_build(build_name)
+ + +
+[docs] + def test_jenkins_generic_job_realistic_dash(self): + # The actual quality of the cdash results for this test can only + # be inspected manually + + # Generate fresh baselines so that this test is not impacted by + # unresolved diffs + self.simple_test(False, "-t cime_test_all -g -b %s" % self._baseline_name) + self.assert_num_leftovers("cime_test_all") + + # Should create a diff + os.environ["TESTRUNDIFF_ALTERNATE"] = "True" + + # Should create a nml diff + # Modify namelist + fake_nl = """ + &fake_nml + fake_item = 'fake' + fake = .true. +/""" + baseline_glob = glob.glob( + os.path.join(self._baseline_area, self._baseline_name, "TESTRUNPASS*") + ) + self.assertEqual( + len(baseline_glob), + 1, + msg="Expected one match, got:\n%s" % "\n".join(baseline_glob), + ) + + for baseline_dir in baseline_glob: + nl_path = os.path.join(baseline_dir, "CaseDocs", "datm_in") + self.assertTrue(os.path.isfile(nl_path), msg="Missing file %s" % nl_path) + + os.chmod(nl_path, stat.S_IRUSR | stat.S_IWUSR) + with open(nl_path, "a") as nl_file: + nl_file.write(fake_nl) + + build_name = "jenkins_generic_job_mixed_%s" % utils.get_timestamp() + self.simple_test( + False, "-t cime_test_all -b %s" % self._baseline_name, build_name=build_name + ) + self.assert_num_leftovers( + "cime_test_all" + ) # jenkins_generic_job should have automatically cleaned up leftovers from prior run + self.assert_dashboard_has_build(build_name)
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_manage_and_query.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_manage_and_query.html new file mode 100644 index 00000000000..3726729545e --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_manage_and_query.html @@ -0,0 +1,186 @@ + + + + + + CIME.tests.test_sys_manage_and_query — CIME master documentation + + + + + + + + + + + + + + + + + + + +

Source code for CIME.tests.test_sys_manage_and_query

+#!/usr/bin/env python3
+
+from CIME import utils
+from CIME.tests import base
+from CIME.XML.files import Files
+
+
+
+[docs] +class TestManageAndQuery(base.BaseTestCase): + """Tests various scripts to manage and query xml files""" + +
+[docs] + def setUp(self): + super().setUp() + + if self._config.test_mode == "e3sm": + self.skipTest("Skipping XML test management tests. E3SM does not use this.")
+ + + def _run_and_assert_query_testlist(self, extra_args=""): + """Ensure that query_testlist runs successfully with the given extra arguments""" + files = Files() + testlist_drv = files.get_value("TESTS_SPEC_FILE", {"component": "drv"}) + + self.run_cmd_assert_result( + "{}/query_testlists --xml-testlist {} {}".format( + self.SCRIPT_DIR, testlist_drv, extra_args + ) + ) + +
+[docs] + def test_query_testlists_runs(self): + """Make sure that query_testlists runs successfully + + This simply makes sure that query_testlists doesn't generate any errors + when it runs. This helps ensure that changes in other utilities don't + break query_testlists. + """ + self._run_and_assert_query_testlist(extra_args="--show-options")
+ + +
+[docs] + def test_query_testlists_define_testtypes_runs(self): + """Make sure that query_testlists runs successfully with the --define-testtypes argument""" + self._run_and_assert_query_testlist(extra_args="--define-testtypes")
+ + +
+[docs] + def test_query_testlists_count_runs(self): + """Make sure that query_testlists runs successfully with the --count argument""" + self._run_and_assert_query_testlist(extra_args="--count")
+ + +
+[docs] + def test_query_testlists_list_runs(self): + """Make sure that query_testlists runs successfully with the --list argument""" + self._run_and_assert_query_testlist(extra_args="--list categories")
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_query_config.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_query_config.html new file mode 100644 index 00000000000..64cbe56950c --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_query_config.html @@ -0,0 +1,160 @@ + + + + + + CIME.tests.test_sys_query_config — CIME master documentation + + + + + + + + + + + + + + + + + + + +

Source code for CIME.tests.test_sys_query_config

+#!/usr/bin/env python3
+
+from CIME import utils
+from CIME.tests import base
+
+
+
+[docs] +class TestQueryConfig(base.BaseTestCase): +
+[docs] + def setUp(self): + super().setUp()
+ + +
+[docs] + def test_query_compsets(self): + utils.run_cmd_no_fail("{}/query_config --compsets".format(self.SCRIPT_DIR))
+ + +
+[docs] + def test_query_components(self): + utils.run_cmd_no_fail("{}/query_config --components".format(self.SCRIPT_DIR))
+ + +
+[docs] + def test_query_grids(self): + utils.run_cmd_no_fail("{}/query_config --grids".format(self.SCRIPT_DIR))
+ + +
+[docs] + def test_query_machines(self): + utils.run_cmd_no_fail("{}/query_config --machines".format(self.SCRIPT_DIR))
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_run_restart.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_run_restart.html new file mode 100644 index 00000000000..d28f06de79c --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_run_restart.html @@ -0,0 +1,178 @@ + + + + + + CIME.tests.test_sys_run_restart — CIME master documentation + + + + + + + + + + + + + + + + + + + +

Source code for CIME.tests.test_sys_run_restart

+#!/usr/bin/env python3
+
+import os
+
+from CIME import utils
+from CIME.tests import base
+
+
+
+[docs] +class TestRunRestart(base.BaseTestCase): +
+[docs] + def test_run_restart(self): + if self.NO_FORTRAN_RUN: + self.skipTest("Skipping fortran test") + driver = utils.get_cime_default_driver() + if driver == "mct": + walltime = "00:15:00" + else: + walltime = "00:30:00" + + casedir = self._create_test( + ["--walltime " + walltime, "NODEFAIL_P1.f09_g16.X"], + test_id=self._baseline_name, + ) + rundir = utils.run_cmd_no_fail("./xmlquery RUNDIR --value", from_dir=casedir) + fail_sentinel = os.path.join(rundir, "FAIL_SENTINEL") + self.assertTrue(os.path.exists(fail_sentinel), msg="Missing %s" % fail_sentinel) + + self.assertEqual(open(fail_sentinel, "r").read().count("FAIL"), 3)
+ + +
+[docs] + def test_run_restart_too_many_fails(self): + if self.NO_FORTRAN_RUN: + self.skipTest("Skipping fortran test") + driver = utils.get_cime_default_driver() + if driver == "mct": + walltime = "00:15:00" + else: + walltime = "00:30:00" + + casedir = self._create_test( + ["--walltime " + walltime, "NODEFAIL_P1.f09_g16.X"], + test_id=self._baseline_name, + env_changes="NODEFAIL_NUM_FAILS=5", + run_errors=True, + ) + rundir = utils.run_cmd_no_fail("./xmlquery RUNDIR --value", from_dir=casedir) + fail_sentinel = os.path.join(rundir, "FAIL_SENTINEL") + self.assertTrue(os.path.exists(fail_sentinel), msg="Missing %s" % fail_sentinel) + + self.assertEqual(open(fail_sentinel, "r").read().count("FAIL"), 4)
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_save_timings.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_save_timings.html new file mode 100644 index 00000000000..0743cd054c1 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_save_timings.html @@ -0,0 +1,286 @@ + + + + + + CIME.tests.test_sys_save_timings — CIME master documentation + + + + + + + + + + + + + + + + + + + +

Source code for CIME.tests.test_sys_save_timings

+#!/usr/bin/env python3
+
+import getpass
+import glob
+import os
+
+from CIME import provenance
+from CIME import utils
+from CIME.tests import base
+from CIME.case.case import Case
+
+
+
+[docs] +class TestSaveTimings(base.BaseTestCase): +
+[docs] + def simple_test(self, manual_timing=False): + if self.NO_FORTRAN_RUN: + self.skipTest("Skipping fortran test") + timing_flag = "" if manual_timing else "--save-timing" + driver = utils.get_cime_default_driver() + if driver == "mct": + walltime = "00:15:00" + else: + walltime = "00:30:00" + self._create_test( + ["SMS_Ln9_P1.f19_g16_rx1.A", timing_flag, "--walltime=" + walltime], + test_id=self._baseline_name, + ) + + statuses = glob.glob( + "%s/*%s/TestStatus" % (self._testroot, self._baseline_name) + ) + self.assertEqual( + len(statuses), + 1, + msg="Should have had exactly one match, found %s" % statuses, + ) + casedir = os.path.dirname(statuses[0]) + + with Case(casedir, read_only=True) as case: + lids = utils.get_lids(case) + timing_dir = case.get_value("SAVE_TIMING_DIR") + casename = case.get_value("CASE") + + self.assertEqual(len(lids), 1, msg="Expected one LID, found %s" % lids) + + if manual_timing: + self.run_cmd_assert_result( + "cd %s && %s/save_provenance postrun" % (casedir, self.TOOLS_DIR) + ) + if self._config.test_mode == "e3sm": + provenance_glob = os.path.join( + timing_dir, + "performance_archive", + getpass.getuser(), + casename, + lids[0] + "*", + ) + provenance_dirs = glob.glob(provenance_glob) + self.assertEqual( + len(provenance_dirs), + 1, + msg="wrong number of provenance dirs, expected 1, got {}, looked for {}".format( + provenance_dirs, provenance_glob + ), + ) + self.verify_perms("".join(provenance_dirs))
+ + +
+[docs] + def test_save_timings(self): + self.simple_test()
+ + +
+[docs] + def test_save_timings_manual(self): + self.simple_test(manual_timing=True)
+ + + def _record_success( + self, + test_name, + test_success, + commit, + exp_last_pass, + exp_trans_fail, + baseline_dir, + ): + provenance.save_test_success( + baseline_dir, None, test_name, test_success, force_commit_test=commit + ) + was_success, last_pass, trans_fail = provenance.get_test_success( + baseline_dir, None, test_name, testing=True + ) + self.assertEqual( + test_success, + was_success, + msg="Broken was_success {} {}".format(test_name, commit), + ) + self.assertEqual( + last_pass, + exp_last_pass, + msg="Broken last_pass {} {}".format(test_name, commit), + ) + self.assertEqual( + trans_fail, + exp_trans_fail, + msg="Broken trans_fail {} {}".format(test_name, commit), + ) + if test_success: + self.assertEqual(exp_last_pass, commit, msg="Should never") + +
+[docs] + def test_success_recording(self): + if self._config.test_mode == "e3sm": + self.skipTest("Skipping success recording tests. E3SM feature") + + fake_test1 = "faketest1" + fake_test2 = "faketest2" + baseline_dir = os.path.join(self._baseline_area, self._baseline_name) + + # Test initial state + was_success, last_pass, trans_fail = provenance.get_test_success( + baseline_dir, None, fake_test1, testing=True + ) + self.assertFalse(was_success, msg="Broken initial was_success") + self.assertEqual(last_pass, None, msg="Broken initial last_pass") + self.assertEqual(trans_fail, None, msg="Broken initial trans_fail") + + # Test first result (test1 fails, test2 passes) + # test_name , success, commit , expP , expTF, baseline) + self._record_success(fake_test1, False, "AAA", None, "AAA", baseline_dir) + self._record_success(fake_test2, True, "AAA", "AAA", None, baseline_dir) + + # Test second result matches first (no transition) (test1 fails, test2 passes) + # test_name , success, commit , expP , expTF, baseline) + self._record_success(fake_test1, False, "BBB", None, "AAA", baseline_dir) + self._record_success(fake_test2, True, "BBB", "BBB", None, baseline_dir) + + # Test transition to new state (first real transition) (test1 passes, test2 fails) + # test_name , success, commit , expP , expTF, baseline) + self._record_success(fake_test1, True, "CCC", "CCC", "AAA", baseline_dir) + self._record_success(fake_test2, False, "CCC", "BBB", "CCC", baseline_dir) + + # Test transition to new state (second real transition) (test1 fails, test2 passes) + # test_name , success, commit , expP , expTF, baseline) + self._record_success(fake_test1, False, "DDD", "CCC", "DDD", baseline_dir) + self._record_success(fake_test2, True, "DDD", "DDD", "CCC", baseline_dir) + + # Test final repeat (test1 fails, test2 passes) + # test_name , success, commit , expP , expTF, baseline) + self._record_success(fake_test1, False, "EEE", "CCC", "DDD", baseline_dir) + self._record_success(fake_test2, 
True, "EEE", "EEE", "CCC", baseline_dir) + + # Test final transition (test1 passes, test2 fails) + # test_name , success, commit , expP , expTF, baseline) + self._record_success(fake_test1, True, "FFF", "FFF", "DDD", baseline_dir) + self._record_success(fake_test2, False, "FFF", "EEE", "FFF", baseline_dir)
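The commit-by-commit assertions above encode a small state machine: last_pass tracks the most recent passing commit, and trans_fail the commit of the most recent pass-to-fail transition (note it is not cleared by a later pass). The following is a toy model of just those semantics, an illustrative sketch rather than the actual CIME.provenance implementation; the class and method names are invented:

```python
class SuccessRecord:
    """Toy model of one test's success history across commits."""

    def __init__(self):
        self.was_success = False  # outcome of the most recent recording
        self.last_pass = None     # most recent commit at which the test passed
        self.trans_fail = None    # commit of the most recent pass -> fail transition
        self._seen = False        # whether anything has been recorded yet

    def record(self, commit, success):
        if success:
            self.last_pass = commit
        elif self.was_success or not self._seen:
            # first recorded failure ever, or a fresh pass -> fail transition;
            # repeated failures keep the original transition commit
            self.trans_fail = commit
        self.was_success = success
        self._seen = True
        return self.was_success, self.last_pass, self.trans_fail
```

Replaying faketest1's history (fail at AAA and BBB, pass at CCC, fail at DDD and EEE, pass at FFF) through this model reproduces the expected (was_success, last_pass, trans_fail) triples asserted in test_success_recording.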
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_single_submit.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_single_submit.html new file mode 100644 index 00000000000..7724e155d5d --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_single_submit.html @@ -0,0 +1,148 @@ + + + + + + CIME.tests.test_sys_single_submit — CIME master documentation + + + + + + + + + + + + + + + + + + + +

Source code for CIME.tests.test_sys_single_submit

+#!/usr/bin/env python3
+
+from CIME import utils
+from CIME.tests import base
+
+
+
+[docs] +class TestSingleSubmit(base.BaseTestCase): +
+[docs] + def test_single_submit(self): + # Skip unless on a batch system and users did not select no-batch + if not self._hasbatch: + self.skipTest("Skipping single submit. Not valid without batch") + if self._config.test_mode == "cesm": + self.skipTest("Skipping single submit. E3SM experimental feature") + if self._machine not in ["sandiatoss3"]: + self.skipTest("Skipping single submit. Only works on sandiatoss3") + + # Keep small enough for now that we don't have to worry about load balancing + self._create_test( + ["--single-submit", "SMS_Ln9_P8.f45_g37_rx1.A", "SMS_Ln9_P8.f19_g16_rx1.A"], + env_changes="unset CIME_GLOBAL_WALLTIME &&", + )
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_test_scheduler.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_test_scheduler.html new file mode 100644 index 00000000000..c49f3dd05e0 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_test_scheduler.html @@ -0,0 +1,665 @@ + + + + + + CIME.tests.test_sys_test_scheduler — CIME master documentation + + + + + + + + + + + + + + + + + + + +

Source code for CIME.tests.test_sys_test_scheduler

+#!/usr/bin/env python3
+
+import glob
+import logging
+import os
+import unittest
+from unittest import mock
+
+from CIME import get_tests
+from CIME import utils
+from CIME import test_status
+from CIME import test_scheduler
+from CIME.tests import base
+
+
+
+[docs] +class TestTestScheduler(base.BaseTestCase): +
+[docs] + @mock.patch("time.strftime", return_value="00:00:00") + def test_chksum(self, strftime): # pylint: disable=unused-argument + if self._config.test_mode == "e3sm": + self.skipTest("Skipping chksum test. Depends on CESM settings") + + ts = test_scheduler.TestScheduler( + ["SEQ_Ln9.f19_g16_rx1.A.cori-haswell_gnu"], + machine_name="cori-haswell", + chksum=True, + test_root="/tests", + ) + + with mock.patch.object(ts, "_shell_cmd_for_phase") as _shell_cmd_for_phase: + ts._run_phase( + "SEQ_Ln9.f19_g16_rx1.A.cori-haswell_gnu" + ) # pylint: disable=protected-access + + _shell_cmd_for_phase.assert_called_with( + "SEQ_Ln9.f19_g16_rx1.A.cori-haswell_gnu", + "./case.submit --skip-preview-namelist --chksum", + "RUN", + from_dir="/tests/SEQ_Ln9.f19_g16_rx1.A.cori-haswell_gnu.00:00:00", + )
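The test above shows a useful unittest.mock pattern: patch a single method on a live object with mock.patch.object, drive the caller, and then assert on the exact arguments of the intercepted call. A reduced standalone sketch of that pattern (Scheduler, run_phase, and the command string are hypothetical stand-ins, not CIME APIs):

```python
from unittest import mock


class Scheduler:
    """Hypothetical stand-in for a class whose method shells out."""

    def _shell_cmd_for_phase(self, test, cmd, phase, from_dir=None):
        raise RuntimeError("would really run a shell command")

    def run_phase(self, test):
        self._shell_cmd_for_phase(
            test, "./case.submit --chksum", "RUN", from_dir="/tests/" + test
        )


sched = Scheduler()
# Replace the method on this one instance; the patch is undone on exit.
with mock.patch.object(sched, "_shell_cmd_for_phase") as stub:
    sched.run_phase("MYTEST")  # no shell command actually runs

# The stub records its calls, so the exact arguments can be verified.
stub.assert_called_with(
    "MYTEST", "./case.submit --chksum", "RUN", from_dir="/tests/MYTEST"
)
```

Patching the bound method rather than running the real command keeps the unit test fast and hermetic while still checking the command line that would have been issued.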
+ + +
+[docs] + def test_a_phases(self): + # exclude the MEMLEAK tests here. + tests = get_tests.get_full_test_names( + [ + "cime_test_only", + "^TESTMEMLEAKFAIL_P1.f09_g16.X", + "^TESTMEMLEAKPASS_P1.f09_g16.X", + "^TESTRUNSTARCFAIL_P1.f19_g16_rx1.A", + "^TESTTESTDIFF_P1.f19_g16_rx1.A", + "^TESTBUILDFAILEXC_P1.f19_g16_rx1.A", + "^TESTRUNFAILEXC_P1.f19_g16_rx1.A", + ], + self._machine, + self._compiler, + ) + self.assertEqual(len(tests), 3) + ct = test_scheduler.TestScheduler( + tests, + test_root=self._testroot, + output_root=self._testroot, + compiler=self._compiler, + mpilib=self.TEST_MPILIB, + machine_name=self.MACHINE.get_machine_name(), + ) + + build_fail_test = [item for item in tests if "TESTBUILDFAIL" in item][0] + run_fail_test = [item for item in tests if "TESTRUNFAIL" in item][0] + pass_test = [item for item in tests if "TESTRUNPASS" in item][0] + + self.assertTrue( + "BUILDFAIL" in build_fail_test, msg="Wrong test '%s'" % build_fail_test + ) + self.assertTrue( + "RUNFAIL" in run_fail_test, msg="Wrong test '%s'" % run_fail_test + ) + self.assertTrue("RUNPASS" in pass_test, msg="Wrong test '%s'" % pass_test) + + for idx, phase in enumerate(ct._phases): + for test in ct._tests: + if phase == test_scheduler.TEST_START: + continue + elif phase == test_status.MODEL_BUILD_PHASE: + ct._update_test_status(test, phase, test_status.TEST_PEND_STATUS) + + if test == build_fail_test: + ct._update_test_status( + test, phase, test_status.TEST_FAIL_STATUS + ) + self.assertTrue(ct._is_broken(test)) + self.assertFalse(ct._work_remains(test)) + else: + ct._update_test_status( + test, phase, test_status.TEST_PASS_STATUS + ) + self.assertFalse(ct._is_broken(test)) + self.assertTrue(ct._work_remains(test)) + + elif phase == test_status.RUN_PHASE: + if test == build_fail_test: + with self.assertRaises(utils.CIMEError): + ct._update_test_status( + test, phase, test_status.TEST_PEND_STATUS + ) + else: + ct._update_test_status( + test, phase, test_status.TEST_PEND_STATUS + ) + 
self.assertFalse(ct._work_remains(test)) + + if test == run_fail_test: + ct._update_test_status( + test, phase, test_status.TEST_FAIL_STATUS + ) + self.assertTrue(ct._is_broken(test)) + else: + ct._update_test_status( + test, phase, test_status.TEST_PASS_STATUS + ) + self.assertFalse(ct._is_broken(test)) + + self.assertFalse(ct._work_remains(test)) + + else: + with self.assertRaises(utils.CIMEError): + ct._update_test_status( + test, ct._phases[idx + 1], test_status.TEST_PEND_STATUS + ) + + with self.assertRaises(utils.CIMEError): + ct._update_test_status( + test, phase, test_status.TEST_PASS_STATUS + ) + + ct._update_test_status(test, phase, test_status.TEST_PEND_STATUS) + self.assertFalse(ct._is_broken(test)) + self.assertTrue(ct._work_remains(test)) + + with self.assertRaises(utils.CIMEError): + ct._update_test_status( + test, phase, test_status.TEST_PEND_STATUS + ) + + ct._update_test_status(test, phase, test_status.TEST_PASS_STATUS) + + with self.assertRaises(utils.CIMEError): + ct._update_test_status( + test, phase, test_status.TEST_FAIL_STATUS + ) + + self.assertFalse(ct._is_broken(test)) + self.assertTrue(ct._work_remains(test))
+ + +
+[docs] + def test_b_full(self): + tests = get_tests.get_full_test_names( + ["cime_test_only"], self._machine, self._compiler + ) + test_id = "%s-%s" % (self._baseline_name, utils.get_timestamp()) + ct = test_scheduler.TestScheduler( + tests, + test_id=test_id, + no_batch=self.NO_BATCH, + test_root=self._testroot, + output_root=self._testroot, + compiler=self._compiler, + mpilib=self.TEST_MPILIB, + machine_name=self.MACHINE.get_machine_name(), + ) + + build_fail_test = [item for item in tests if "TESTBUILDFAIL_" in item][0] + build_fail_exc_test = [item for item in tests if "TESTBUILDFAILEXC" in item][0] + run_fail_test = [item for item in tests if "TESTRUNFAIL_" in item][0] + run_fail_exc_test = [item for item in tests if "TESTRUNFAILEXC" in item][0] + pass_test = [item for item in tests if "TESTRUNPASS" in item][0] + test_diff_test = [item for item in tests if "TESTTESTDIFF" in item][0] + mem_fail_test = [item for item in tests if "TESTMEMLEAKFAIL" in item][0] + mem_pass_test = [item for item in tests if "TESTMEMLEAKPASS" in item][0] + st_arch_fail_test = [item for item in tests if "TESTRUNSTARCFAIL" in item][0] + + log_lvl = logging.getLogger().getEffectiveLevel() + logging.disable(logging.CRITICAL) + try: + ct.run_tests() + finally: + logging.getLogger().setLevel(log_lvl) + + self._wait_for_tests(test_id, expect_works=False) + + test_statuses = glob.glob("%s/*%s/TestStatus" % (self._testroot, test_id)) + self.assertEqual(len(tests), len(test_statuses)) + + for x in test_statuses: + ts = test_status.TestStatus(test_dir=os.path.dirname(x)) + test_name = ts.get_name() + log_files = glob.glob( + "%s/%s*%s/TestStatus.log" % (self._testroot, test_name, test_id) + ) + self.assertEqual( + len(log_files), + 1, + "Expected exactly one test_status.TestStatus.log file, found %d" + % len(log_files), + ) + log_file = log_files[0] + if test_name == build_fail_test: + + self.assert_test_status( + test_name, + ts, + test_status.MODEL_BUILD_PHASE, + test_status.TEST_FAIL_STATUS, 
+ ) + data = open(log_file, "r").read() + self.assertTrue( + "Intentional fail for testing infrastructure" in data, + "Broken test did not report build error:\n%s" % data, + ) + elif test_name == build_fail_exc_test: + data = open(log_file, "r").read() + self.assert_test_status( + test_name, + ts, + test_status.SHAREDLIB_BUILD_PHASE, + test_status.TEST_FAIL_STATUS, + ) + self.assertTrue( + "Exception from init" in data, + "Broken test did not report build error:\n%s" % data, + ) + elif test_name == run_fail_test: + self.assert_test_status( + test_name, ts, test_status.RUN_PHASE, test_status.TEST_FAIL_STATUS + ) + elif test_name == run_fail_exc_test: + self.assert_test_status( + test_name, ts, test_status.RUN_PHASE, test_status.TEST_FAIL_STATUS + ) + data = open(log_file, "r").read() + self.assertTrue( + "Exception from run_phase" in data, + "Broken test did not report run error:\n%s" % data, + ) + elif test_name == mem_fail_test: + self.assert_test_status( + test_name, + ts, + test_status.MEMLEAK_PHASE, + test_status.TEST_FAIL_STATUS, + ) + self.assert_test_status( + test_name, ts, test_status.RUN_PHASE, test_status.TEST_PASS_STATUS + ) + elif test_name == test_diff_test: + self.assert_test_status( + test_name, ts, "COMPARE_base_rest", test_status.TEST_FAIL_STATUS + ) + self.assert_test_status( + test_name, ts, test_status.RUN_PHASE, test_status.TEST_PASS_STATUS + ) + elif test_name == st_arch_fail_test: + self.assert_test_status( + test_name, ts, test_status.RUN_PHASE, test_status.TEST_PASS_STATUS + ) + self.assert_test_status( + test_name, + ts, + test_status.STARCHIVE_PHASE, + test_status.TEST_FAIL_STATUS, + ) + else: + self.assertTrue(test_name in [pass_test, mem_pass_test]) + self.assert_test_status( + test_name, ts, test_status.RUN_PHASE, test_status.TEST_PASS_STATUS + ) + if test_name == mem_pass_test: + self.assert_test_status( + test_name, + ts, + test_status.MEMLEAK_PHASE, + test_status.TEST_PASS_STATUS, + )
+ + +
+[docs] + def test_force_rebuild(self): + tests = get_tests.get_full_test_names( + [ + "TESTBUILDFAIL_P1.f19_g16_rx1.A", + "TESTRUNFAIL_P1.f19_g16_rx1.A", + "TESTRUNPASS_P1.f19_g16_rx1.A", + ], + self._machine, + self._compiler, + ) + test_id = "%s-%s" % (self._baseline_name, utils.get_timestamp()) + ct = test_scheduler.TestScheduler( + tests, + test_id=test_id, + no_batch=self.NO_BATCH, + test_root=self._testroot, + output_root=self._testroot, + compiler=self._compiler, + mpilib=self.TEST_MPILIB, + machine_name=self.MACHINE.get_machine_name(), + ) + + log_lvl = logging.getLogger().getEffectiveLevel() + logging.disable(logging.CRITICAL) + try: + ct.run_tests() + finally: + logging.getLogger().setLevel(log_lvl) + + ct = test_scheduler.TestScheduler( + tests, + test_id=test_id, + no_batch=self.NO_BATCH, + test_root=self._testroot, + output_root=self._testroot, + compiler=self._compiler, + mpilib=self.TEST_MPILIB, + machine_name=self.MACHINE.get_machine_name(), + force_rebuild=True, + use_existing=True, + ) + + test_statuses = glob.glob("%s/*%s/TestStatus" % (self._testroot, test_id)) + + for x in test_statuses: + casedir = os.path.dirname(x) + + ts = test_status.TestStatus(test_dir=casedir) + + self.assertTrue( + ts.get_status(test_status.SHAREDLIB_BUILD_PHASE) + == test_status.TEST_PEND_STATUS + )
+ + +
+[docs] + def test_c_use_existing(self): + tests = get_tests.get_full_test_names( + [ + "TESTBUILDFAIL_P1.f19_g16_rx1.A", + "TESTRUNFAIL_P1.f19_g16_rx1.A", + "TESTRUNPASS_P1.f19_g16_rx1.A", + ], + self._machine, + self._compiler, + ) + test_id = "%s-%s" % (self._baseline_name, utils.get_timestamp()) + ct = test_scheduler.TestScheduler( + tests, + test_id=test_id, + no_batch=self.NO_BATCH, + test_root=self._testroot, + output_root=self._testroot, + compiler=self._compiler, + mpilib=self.TEST_MPILIB, + machine_name=self.MACHINE.get_machine_name(), + ) + + build_fail_test = [item for item in tests if "TESTBUILDFAIL" in item][0] + run_fail_test = [item for item in tests if "TESTRUNFAIL" in item][0] + pass_test = [item for item in tests if "TESTRUNPASS" in item][0] + + log_lvl = logging.getLogger().getEffectiveLevel() + logging.disable(logging.CRITICAL) + try: + ct.run_tests() + finally: + logging.getLogger().setLevel(log_lvl) + + test_statuses = glob.glob("%s/*%s/TestStatus" % (self._testroot, test_id)) + self.assertEqual(len(tests), len(test_statuses)) + + self._wait_for_tests(test_id, expect_works=False) + + for x in test_statuses: + casedir = os.path.dirname(x) + ts = test_status.TestStatus(test_dir=casedir) + test_name = ts.get_name() + if test_name == build_fail_test: + self.assert_test_status( + test_name, + ts, + test_status.MODEL_BUILD_PHASE, + test_status.TEST_FAIL_STATUS, + ) + with test_status.TestStatus(test_dir=casedir) as ts: + ts.set_status( + test_status.MODEL_BUILD_PHASE, test_status.TEST_PEND_STATUS + ) + elif test_name == run_fail_test: + self.assert_test_status( + test_name, ts, test_status.RUN_PHASE, test_status.TEST_FAIL_STATUS + ) + with test_status.TestStatus(test_dir=casedir) as ts: + ts.set_status( + test_status.SUBMIT_PHASE, test_status.TEST_PEND_STATUS + ) + else: + self.assertTrue(test_name == pass_test) + self.assert_test_status( + test_name, + ts, + test_status.MODEL_BUILD_PHASE, + test_status.TEST_PASS_STATUS, + ) + 
self.assert_test_status( + test_name, + ts, + test_status.SUBMIT_PHASE, + test_status.TEST_PASS_STATUS, + ) + self.assert_test_status( + test_name, ts, test_status.RUN_PHASE, test_status.TEST_PASS_STATUS + ) + + os.environ["TESTBUILDFAIL_PASS"] = "True" + os.environ["TESTRUNFAIL_PASS"] = "True" + ct2 = test_scheduler.TestScheduler( + tests, + test_id=test_id, + no_batch=self.NO_BATCH, + use_existing=True, + test_root=self._testroot, + output_root=self._testroot, + compiler=self._compiler, + mpilib=self.TEST_MPILIB, + machine_name=self.MACHINE.get_machine_name(), + ) + + log_lvl = logging.getLogger().getEffectiveLevel() + logging.disable(logging.CRITICAL) + try: + ct2.run_tests() + finally: + logging.getLogger().setLevel(log_lvl) + + self._wait_for_tests(test_id) + + for x in test_statuses: + ts = test_status.TestStatus(test_dir=os.path.dirname(x)) + test_name = ts.get_name() + self.assert_test_status( + test_name, + ts, + test_status.MODEL_BUILD_PHASE, + test_status.TEST_PASS_STATUS, + ) + self.assert_test_status( + test_name, ts, test_status.SUBMIT_PHASE, test_status.TEST_PASS_STATUS + ) + self.assert_test_status( + test_name, ts, test_status.RUN_PHASE, test_status.TEST_PASS_STATUS + ) + + del os.environ["TESTBUILDFAIL_PASS"] + del os.environ["TESTRUNFAIL_PASS"] + + # test that passed tests are not re-run + + ct2 = test_scheduler.TestScheduler( + tests, + test_id=test_id, + no_batch=self.NO_BATCH, + use_existing=True, + test_root=self._testroot, + output_root=self._testroot, + compiler=self._compiler, + mpilib=self.TEST_MPILIB, + machine_name=self.MACHINE.get_machine_name(), + ) + + log_lvl = logging.getLogger().getEffectiveLevel() + logging.disable(logging.CRITICAL) + try: + ct2.run_tests() + finally: + logging.getLogger().setLevel(log_lvl) + + self._wait_for_tests(test_id) + + for x in test_statuses: + ts = test_status.TestStatus(test_dir=os.path.dirname(x)) + test_name = ts.get_name() + self.assert_test_status( + test_name, + ts, + 
test_status.MODEL_BUILD_PHASE, + test_status.TEST_PASS_STATUS, + ) + self.assert_test_status( + test_name, ts, test_status.SUBMIT_PHASE, test_status.TEST_PASS_STATUS + ) + self.assert_test_status( + test_name, ts, test_status.RUN_PHASE, test_status.TEST_PASS_STATUS + )
+ + +
+[docs] + def test_d_retry(self): + args = [ + "TESTBUILDFAIL_P1.f19_g16_rx1.A", + "TESTRUNFAILRESET_P1.f19_g16_rx1.A", + "TESTRUNPASS_P1.f19_g16_rx1.A", + "--retry=1", + ] + + self._create_test(args)
+ + +
+[docs] + def test_e_test_inferred_compiler(self): + if self._config.test_mode != "e3sm" or self._machine != "docker": + self.skipTest("Skipping create_test test. Depends on E3SM settings") + + args = ["SMS.f19_g16_rx1.A.docker_gnuX", "--no-setup"] + + case = self._create_test(args, default_baseline_area=True) + result = self.run_cmd_assert_result( + "./xmlquery --value BASELINE_ROOT", from_dir=case + ) + self.assertEqual(os.path.split(result)[1], "gnuX")
+
+ + + +if __name__ == "__main__": + unittest.main() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_unittest.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_unittest.html new file mode 100644 index 00000000000..4af68ad2594 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_unittest.html @@ -0,0 +1,245 @@ + + + + + + CIME.tests.test_sys_unittest — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.tests.test_sys_unittest

+#!/usr/bin/env python3
+
+import os
+import shutil
+import sys
+
+from CIME import utils
+from CIME.tests import base
+from CIME.XML.files import Files
+
+
+
+[docs] +class TestUnitTest(base.BaseTestCase): +
+[docs] + @classmethod + def setUpClass(cls): + cls._do_teardown = [] + cls._testroot = os.path.join(cls.TEST_ROOT, "TestUnitTests") + cls._testdirs = []
+ + + def _has_unit_test_support(self): + if self.TEST_COMPILER is None: + compiler = self.MACHINE.get_default_compiler() + else: + compiler = self.TEST_COMPILER + + mach = self.MACHINE.get_machine_name() + cmake_macros_dir = Files().get_value("CMAKE_MACROS_DIR") + + macros_to_check = [ + os.path.join(cmake_macros_dir, "{}_{}.cmake".format(compiler, mach)), + os.path.join(cmake_macros_dir, "{}.cmake".format(mach)), + os.path.join( + os.environ.get("HOME"), ".cime", "{}_{}.cmake".format(compiler, mach) + ), + os.path.join(os.environ.get("HOME"), ".cime", "{}.cmake".format(mach)), + ] + + for macro_to_check in macros_to_check: + if os.path.exists(macro_to_check): + macro_text = open(macro_to_check, "r").read() + + return "PFUNIT_PATH" in macro_text + + return False + +
+[docs] + def test_a_unit_test(self): + cls = self.__class__ + if not self._has_unit_test_support(): + self.skipTest( + "Skipping TestUnitTest - PFUNIT_PATH not found for the default compiler on this machine" + ) + test_dir = os.path.join(cls._testroot, "unit_tester_test") + cls._testdirs.append(test_dir) + os.makedirs(test_dir) + unit_test_tool = os.path.abspath( + os.path.join( + utils.get_cime_root(), "scripts", "fortran_unit_testing", "run_tests.py" + ) + ) + test_spec_dir = os.path.join( + os.path.dirname(unit_test_tool), "Examples", "interpolate_1d", "tests" + ) + args = "--build-dir {} --test-spec-dir {}".format(test_dir, test_spec_dir) + args += " --machine {}".format(self.MACHINE.get_machine_name()) + utils.run_cmd_no_fail("{} {}".format(unit_test_tool, args)) + cls._do_teardown.append(test_dir)
+ + +
+[docs] + def test_b_cime_f90_unit_tests(self): + cls = self.__class__ + if self.FAST_ONLY: + self.skipTest("Skipping slow test") + + if not self._has_unit_test_support(): + self.skipTest( + "Skipping TestUnitTest - PFUNIT_PATH not found for the default compiler on this machine" + ) + + test_dir = os.path.join(cls._testroot, "driver_f90_tests") + cls._testdirs.append(test_dir) + os.makedirs(test_dir) + test_spec_dir = utils.get_cime_root() + unit_test_tool = os.path.abspath( + os.path.join( + test_spec_dir, "scripts", "fortran_unit_testing", "run_tests.py" + ) + ) + args = "--build-dir {} --test-spec-dir {}".format(test_dir, test_spec_dir) + args += " --machine {}".format(self.MACHINE.get_machine_name()) + utils.run_cmd_no_fail("{} {}".format(unit_test_tool, args)) + cls._do_teardown.append(test_dir)
+ + +
+[docs] + @classmethod + def tearDownClass(cls): + do_teardown = ( + len(cls._do_teardown) > 0 + and sys.exc_info() == (None, None, None) + and not cls.NO_TEARDOWN + ) + + teardown_root = True + for tfile in cls._testdirs: + if tfile not in cls._do_teardown: + print("Detected failed test or user request no teardown") + print("Leaving case directory : %s" % tfile) + teardown_root = False + elif do_teardown: + shutil.rmtree(tfile) + + if teardown_root and do_teardown: + shutil.rmtree(cls._testroot)
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_user_concurrent_mods.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_user_concurrent_mods.html new file mode 100644 index 00000000000..9a559c7b83b --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_user_concurrent_mods.html @@ -0,0 +1,165 @@ + + + + + + CIME.tests.test_sys_user_concurrent_mods — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_sys_user_concurrent_mods

+#!/usr/bin/env python3
+
+import os
+import time
+
+from CIME import utils
+from CIME.tests import base
+
+
+
+[docs] +class TestUserConcurrentMods(base.BaseTestCase): +
+[docs] + def test_user_concurrent_mods(self): + # Put this inside any test that's slow + if self.FAST_ONLY: + self.skipTest("Skipping slow test") + + casedir = self._create_test( + ["--walltime=0:30:00", "TESTRUNUSERXMLCHANGE_Mmpi-serial.f19_g16.X"], + test_id=self._baseline_name, + ) + + with utils.Timeout(3000): + while True: + with open(os.path.join(casedir, "CaseStatus"), "r") as fd: + self._wait_for_tests(self._baseline_name) + contents = fd.read() + if contents.count("model execution success") == 2: + break + + time.sleep(5) + + rundir = utils.run_cmd_no_fail("./xmlquery RUNDIR --value", from_dir=casedir) + if utils.get_cime_default_driver() == "nuopc": + chk_file = "nuopc.runconfig" + else: + chk_file = "drv_in" + with open(os.path.join(rundir, chk_file), "r") as fd: + contents = fd.read() + self.assertTrue("stop_n = 6" in contents)
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_wait_for_tests.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_wait_for_tests.html new file mode 100644 index 00000000000..e6893f043d5 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_sys_wait_for_tests.html @@ -0,0 +1,552 @@ + + + + + + CIME.tests.test_sys_wait_for_tests — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_sys_wait_for_tests

+#!/usr/bin/env python3
+
+import os
+import signal
+import shutil
+import sys
+import time
+import threading
+
+from CIME import utils
+from CIME import test_status
+from CIME.tests import base
+from CIME.tests import utils as test_utils
+
+
+
+[docs] +class TestWaitForTests(base.BaseTestCase): +
+[docs] + def setUp(self): + super().setUp() + + self._testroot = os.path.join(self.TEST_ROOT, "TestWaitForTests") + self._timestamp = utils.get_timestamp() + + # basic tests + self._testdir_all_pass = os.path.join( + self._testroot, "scripts_regression_tests.testdir_all_pass" + ) + self._testdir_with_fail = os.path.join( + self._testroot, "scripts_regression_tests.testdir_with_fail" + ) + self._testdir_unfinished = os.path.join( + self._testroot, "scripts_regression_tests.testdir_unfinished" + ) + self._testdir_unfinished2 = os.path.join( + self._testroot, "scripts_regression_tests.testdir_unfinished2" + ) + + # live tests + self._testdir_teststatus1 = os.path.join( + self._testroot, "scripts_regression_tests.testdir_teststatus1" + ) + self._testdir_teststatus2 = os.path.join( + self._testroot, "scripts_regression_tests.testdir_teststatus2" + ) + + self._testdirs = [ + self._testdir_all_pass, + self._testdir_with_fail, + self._testdir_unfinished, + self._testdir_unfinished2, + self._testdir_teststatus1, + self._testdir_teststatus2, + ] + basic_tests = self._testdirs[: self._testdirs.index(self._testdir_teststatus1)] + + for testdir in self._testdirs: + if os.path.exists(testdir): + shutil.rmtree(testdir) + os.makedirs(testdir) + + for r in range(10): + for testdir in basic_tests: + os.makedirs(os.path.join(testdir, str(r))) + test_utils.make_fake_teststatus( + os.path.join(testdir, str(r)), + "Test_%d" % r, + test_status.TEST_PASS_STATUS, + test_status.RUN_PHASE, + ) + + test_utils.make_fake_teststatus( + os.path.join(self._testdir_with_fail, "5"), + "Test_5", + test_status.TEST_FAIL_STATUS, + test_status.RUN_PHASE, + ) + test_utils.make_fake_teststatus( + os.path.join(self._testdir_unfinished, "5"), + "Test_5", + test_status.TEST_PEND_STATUS, + test_status.RUN_PHASE, + ) + test_utils.make_fake_teststatus( + os.path.join(self._testdir_unfinished2, "5"), + "Test_5", + test_status.TEST_PASS_STATUS, + test_status.SUBMIT_PHASE, + ) + + integration_tests = 
self._testdirs[len(basic_tests) :] + for integration_test in integration_tests: + os.makedirs(os.path.join(integration_test, "0")) + test_utils.make_fake_teststatus( + os.path.join(integration_test, "0"), + "Test_0", + test_status.TEST_PASS_STATUS, + test_status.CORE_PHASES[0], + ) + + # Set up proxy if possible + self._unset_proxy = self.setup_proxy() + + self._thread_error = None
+ + +
+[docs] + def tearDown(self): + super().tearDown() + + do_teardown = sys.exc_info() == (None, None, None) and not self.NO_TEARDOWN + + if do_teardown: + for testdir in self._testdirs: + shutil.rmtree(testdir)
+ + +
+[docs] + def simple_test(self, testdir, expected_results, extra_args="", build_name=None): + # Need these flags to test dashboard if e3sm + if self._config.create_test_flag_mode == "e3sm" and build_name is not None: + extra_args += " -b %s" % build_name + + expected_stat = 0 + for expected_result in expected_results: + if not ( + expected_result == "PASS" + or (expected_result == "PEND" and "-n" in extra_args) + ): + expected_stat = utils.TESTS_FAILED_ERR_CODE + + output = self.run_cmd_assert_result( + "%s/wait_for_tests -p ACME_test */TestStatus %s" + % (self.TOOLS_DIR, extra_args), + from_dir=testdir, + expected_stat=expected_stat, + ) + + lines = [ + line + for line in output.splitlines() + if ( + line.startswith("PASS") + or line.startswith("FAIL") + or line.startswith("PEND") + ) + ] + self.assertEqual(len(lines), len(expected_results)) + for idx, line in enumerate(lines): + testname, status = test_utils.parse_test_status(line) + self.assertEqual(status, expected_results[idx]) + self.assertEqual(testname, "Test_%d" % idx)
+ + +
+[docs] + def threaded_test(self, testdir, expected_results, extra_args="", build_name=None): + try: + self.simple_test(testdir, expected_results, extra_args, build_name) + except AssertionError as e: + self._thread_error = str(e)
+ + +
+[docs] + def test_wait_for_test_all_pass(self): + self.simple_test(self._testdir_all_pass, ["PASS"] * 10)
+ + +
+[docs] + def test_wait_for_test_with_fail(self): + expected_results = ["FAIL" if item == 5 else "PASS" for item in range(10)] + self.simple_test(self._testdir_with_fail, expected_results)
+ + +
+[docs] + def test_wait_for_test_no_wait(self): + expected_results = ["PEND" if item == 5 else "PASS" for item in range(10)] + self.simple_test(self._testdir_unfinished, expected_results, "-n")
+ + +
+[docs] + def test_wait_for_test_timeout(self): + expected_results = ["PEND" if item == 5 else "PASS" for item in range(10)] + self.simple_test(self._testdir_unfinished, expected_results, "--timeout=3")
+ + +
+[docs] + def test_wait_for_test_wait_for_pend(self): + run_thread = threading.Thread( + target=self.threaded_test, args=(self._testdir_unfinished, ["PASS"] * 10) + ) + run_thread.daemon = True + run_thread.start() + + time.sleep(5) # Kinda hacky + + self.assertTrue(run_thread.is_alive(), msg="wait_for_tests should have waited") + + with test_status.TestStatus( + test_dir=os.path.join(self._testdir_unfinished, "5") + ) as ts: + ts.set_status(test_status.RUN_PHASE, test_status.TEST_PASS_STATUS) + + run_thread.join(timeout=10) + + self.assertFalse( + run_thread.is_alive(), msg="wait_for_tests should have finished" + ) + + self.assertTrue( + self._thread_error is None, + msg="Thread had failure: %s" % self._thread_error, + )
+ + +
+[docs] + def test_wait_for_test_wait_for_missing_run_phase(self): + run_thread = threading.Thread( + target=self.threaded_test, args=(self._testdir_unfinished2, ["PASS"] * 10) + ) + run_thread.daemon = True + run_thread.start() + + time.sleep(5) # Kinda hacky + + self.assertTrue(run_thread.is_alive(), msg="wait_for_tests should have waited") + + with test_status.TestStatus( + test_dir=os.path.join(self._testdir_unfinished2, "5") + ) as ts: + ts.set_status(test_status.RUN_PHASE, test_status.TEST_PASS_STATUS) + + run_thread.join(timeout=10) + + self.assertFalse( + run_thread.is_alive(), msg="wait_for_tests should have finished" + ) + + self.assertTrue( + self._thread_error is None, + msg="Thread had failure: %s" % self._thread_error, + )
+ + +
+[docs] + def test_wait_for_test_wait_kill(self): + expected_results = ["PEND" if item == 5 else "PASS" for item in range(10)] + run_thread = threading.Thread( + target=self.threaded_test, args=(self._testdir_unfinished, expected_results) + ) + run_thread.daemon = True + run_thread.start() + + time.sleep(5) + + self.assertTrue(run_thread.is_alive(), msg="wait_for_tests should have waited") + + self.kill_python_subprocesses(signal.SIGTERM, expected_num_killed=1) + + run_thread.join(timeout=10) + + self.assertFalse( + run_thread.is_alive(), msg="wait_for_tests should have finished" + ) + + self.assertTrue( + self._thread_error is None, + msg="Thread had failure: %s" % self._thread_error, + )
+ + +
+[docs] + def test_wait_for_test_cdash_pass(self): + expected_results = ["PASS"] * 10 + build_name = "regression_test_pass_" + self._timestamp + run_thread = threading.Thread( + target=self.threaded_test, + args=(self._testdir_all_pass, expected_results, "", build_name), + ) + run_thread.daemon = True + run_thread.start() + + run_thread.join(timeout=10) + + self.assertFalse( + run_thread.is_alive(), msg="wait_for_tests should have finished" + ) + + self.assertTrue( + self._thread_error is None, + msg="Thread had failure: %s" % self._thread_error, + ) + + self.assert_dashboard_has_build(build_name)
+ + +
+[docs] + def test_wait_for_test_cdash_kill(self): + expected_results = ["PEND" if item == 5 else "PASS" for item in range(10)] + build_name = "regression_test_kill_" + self._timestamp + run_thread = threading.Thread( + target=self.threaded_test, + args=(self._testdir_unfinished, expected_results, "", build_name), + ) + run_thread.daemon = True + run_thread.start() + + time.sleep(5) + + self.assertTrue(run_thread.is_alive(), msg="wait_for_tests should have waited") + + self.kill_python_subprocesses(signal.SIGTERM, expected_num_killed=1) + + run_thread.join(timeout=10) + + self.assertFalse( + run_thread.is_alive(), msg="wait_for_tests should have finished" + ) + self.assertTrue( + self._thread_error is None, + msg="Thread had failure: %s" % self._thread_error, + ) + + self.assert_dashboard_has_build(build_name) + + if self._config.test_mode == "e3sm": + cdash_result_dir = os.path.join(self._testdir_unfinished, "Testing") + tag_file = os.path.join(cdash_result_dir, "TAG") + self.assertTrue(os.path.isdir(cdash_result_dir)) + self.assertTrue(os.path.isfile(tag_file)) + + tag = open(tag_file, "r").readlines()[0].strip() + xml_file = os.path.join(cdash_result_dir, tag, "Test.xml") + self.assertTrue(os.path.isfile(xml_file)) + + xml_contents = open(xml_file, "r").read() + self.assertTrue( + r"<TestList><Test>Test_0</Test><Test>Test_1</Test><Test>Test_2</Test><Test>Test_3</Test><Test>Test_4</Test><Test>Test_5</Test><Test>Test_6</Test><Test>Test_7</Test><Test>Test_8</Test><Test>Test_9</Test></TestList>" + in xml_contents + ) + self.assertTrue( + r'<Test Status="notrun"><Name>Test_5</Name>' in xml_contents + )
+ + + # TODO: Any further checking of xml output worth doing? + +
+[docs] + def live_test_impl(self, testdir, expected_results, last_phase, last_status): + run_thread = threading.Thread( + target=self.threaded_test, args=(testdir, expected_results) + ) + run_thread.daemon = True + run_thread.start() + + time.sleep(5) + + self.assertTrue(run_thread.is_alive(), msg="wait_for_tests should have waited") + + for core_phase in test_status.CORE_PHASES[1:]: + with test_status.TestStatus( + test_dir=os.path.join(self._testdir_teststatus1, "0") + ) as ts: + ts.set_status( + core_phase, + last_status + if core_phase == last_phase + else test_status.TEST_PASS_STATUS, + ) + + time.sleep(5) + + if core_phase != last_phase: + self.assertTrue( + run_thread.is_alive(), + msg="wait_for_tests should have waited after passing phase {}".format( + core_phase + ), + ) + else: + run_thread.join(timeout=10) + self.assertFalse( + run_thread.is_alive(), + msg="wait_for_tests should have finished after phase {}".format( + core_phase + ), + ) + break + + self.assertTrue( + self._thread_error is None, + msg="Thread had failure: %s" % self._thread_error, + )
+ + +
+[docs] + def test_wait_for_test_test_status_integration_pass(self): + self.live_test_impl( + self._testdir_teststatus1, + ["PASS"], + test_status.RUN_PHASE, + test_status.TEST_PASS_STATUS, + )
+ + +
+[docs] + def test_wait_for_test_test_status_integration_submit_fail(self): + self.live_test_impl( + self._testdir_teststatus1, + ["FAIL"], + test_status.SUBMIT_PHASE, + test_status.TEST_FAIL_STATUS, + )
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_aprun.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_aprun.html new file mode 100644 index 00000000000..a677716d4de --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_aprun.html @@ -0,0 +1,258 @@ + + + + + + CIME.tests.test_unit_aprun — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.tests.test_unit_aprun

+import unittest
+from unittest import mock
+
+from CIME import aprun
+
+# NTASKS, NTHRDS, ROOTPE, PSTRID
+DEFAULT_COMP_ATTRS = [
+    512,
+    2,
+    0,
+    1,
+    675,
+    2,
+    0,
+    1,
+    168,
+    2,
+    512,
+    1,
+    512,
+    2,
+    0,
+    1,
+    128,
+    4,
+    680,
+    1,
+    168,
+    2,
+    512,
+    1,
+    168,
+    2,
+    512,
+    1,
+    512,
+    2,
+    0,
+    1,
+    1,
+    1,
+    0,
+    1,
+]
+
+# MAX_TASKS_PER_NODE, MAX_MPITASKS_PER_NODE, PIO_NUMTASKS, PIO_ASYNC_INTERFACE, COMPILER, MACH
+DEFAULT_ARGS = [
+    16,
+    16,
+    -1,
+    False,
+    "gnu",
+    "docker",
+]
+
+
+
+[docs] +class TestUnitAprun(unittest.TestCase): +
+[docs] + def test_aprun_extra_args(self): + case = mock.MagicMock() + + case.get_values.return_value = [ + "CPL", + "ATM", + "LND", + "ICE", + "OCN", + "ROF", + "GLC", + "WAV", + "IAC", + ] + + case.get_value.side_effect = DEFAULT_COMP_ATTRS + DEFAULT_ARGS + + extra_args = { + "-e DEBUG=true": {"position": "global"}, + "-j 20": {"position": "per"}, + } + + ( + aprun_args, + total_node_count, + total_task_count, + min_tasks_per_node, + max_thread_count, + ) = aprun.get_aprun_cmd_for_case(case, "e3sm.exe", extra_args=extra_args) + + assert ( + aprun_args + == " -e DEBUG=true -n 680 -N 8 -d 2 -j 20 e3sm.exe : -n 128 -N 4 -d 4 -j 20 e3sm.exe " + ) + assert total_node_count == 117 + assert total_task_count == 808 + assert min_tasks_per_node == 4 + assert max_thread_count == 4
+ + +
+[docs] + def test_aprun(self): + case = mock.MagicMock() + + case.get_values.return_value = [ + "CPL", + "ATM", + "LND", + "ICE", + "OCN", + "ROF", + "GLC", + "WAV", + "IAC", + ] + + case.get_value.side_effect = DEFAULT_COMP_ATTRS + DEFAULT_ARGS + + ( + aprun_args, + total_node_count, + total_task_count, + min_tasks_per_node, + max_thread_count, + ) = aprun.get_aprun_cmd_for_case(case, "e3sm.exe") + + assert ( + aprun_args == " -n 680 -N 8 -d 2 e3sm.exe : -n 128 -N 4 -d 4 e3sm.exe " + ) + assert total_node_count == 117 + assert total_task_count == 808 + assert min_tasks_per_node == 4 + assert max_thread_count == 4
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_baselines_performance.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_baselines_performance.html new file mode 100644 index 00000000000..c0eb01aed62 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_baselines_performance.html @@ -0,0 +1,875 @@ + + + + + + CIME.tests.test_unit_baselines_performance — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_baselines_performance

+#!/usr/bin/env python3
+
+import gzip
+import tempfile
+import unittest
+from unittest import mock
+from pathlib import Path
+
+from CIME.baselines import performance
+from CIME.tests.test_unit_system_tests import CPLLOG
+
+
+
+[docs] +def create_mock_case(tempdir, get_latest_cpl_logs=None): + caseroot = Path(tempdir, "0", "caseroot") + + rundir = caseroot / "run" + + if get_latest_cpl_logs is not None: + get_latest_cpl_logs.return_value = (str(rundir / "cpl.log.gz"),) + + baseline_root = Path(tempdir, "baselines") + + baseline_root.mkdir(parents=True, exist_ok=False) + + case = mock.MagicMock() + + return case, caseroot, rundir, baseline_root
+ + + +
+[docs] +class TestUnitBaselinesPerformance(unittest.TestCase): +
+[docs] + @mock.patch("CIME.baselines.performance._perf_get_memory") + def test_perf_get_memory_default(self, _perf_get_memory): + _perf_get_memory.return_value = ("1000", "a") + + case = mock.MagicMock() + + config = mock.MagicMock() + + config.perf_get_memory.side_effect = AttributeError + + mem = performance.perf_get_memory(case, config) + + assert mem == ("1000", "a")
+ + +
+[docs] + def test_perf_get_memory(self): + case = mock.MagicMock() + + config = mock.MagicMock() + + config.perf_get_memory.return_value = ("1000", "a") + + mem = performance.perf_get_memory(case, config) + + assert mem == ("1000", "a")
+ + +
+[docs] + @mock.patch("CIME.baselines.performance._perf_get_throughput") + def test_perf_get_throughput_default(self, _perf_get_throughput): + _perf_get_throughput.return_value = ("100", "a") + + case = mock.MagicMock() + + config = mock.MagicMock() + + config.perf_get_throughput.side_effect = AttributeError + + tput = performance.perf_get_throughput(case, config) + + assert tput == ("100", "a")
+ + +
+[docs] + def test_perf_get_throughput(self): + case = mock.MagicMock() + + config = mock.MagicMock() + + config.perf_get_throughput.return_value = ("100", "a") + + tput = performance.perf_get_throughput(case, config) + + assert tput == ("100", "a")
+ + +
+[docs] + def test_get_cpl_throughput_no_file(self): + throughput = performance.get_cpl_throughput("/tmp/cpl.log") + + assert throughput is None
+ + +
+[docs] + def test_get_cpl_throughput(self): + with tempfile.TemporaryDirectory() as tempdir: + cpl_log_path = Path(tempdir, "cpl.log.gz") + + with gzip.open(cpl_log_path, "w") as fd: + fd.write(CPLLOG.encode("utf-8")) + + throughput = performance.get_cpl_throughput(str(cpl_log_path)) + + assert throughput == 719.635
+ + +
+[docs] + def test_get_cpl_mem_usage_gz(self): + with tempfile.TemporaryDirectory() as tempdir: + cpl_log_path = Path(tempdir, "cpl.log.gz") + + with gzip.open(cpl_log_path, "w") as fd: + fd.write(CPLLOG.encode("utf-8")) + + mem_usage = performance.get_cpl_mem_usage(str(cpl_log_path)) + + assert mem_usage == [ + (10102.0, 1673.89), + (10103.0, 1673.89), + (10104.0, 1673.89), + (10105.0, 1673.89), + ]
+ + +
+[docs] + @mock.patch("CIME.baselines.performance.os.path.isfile") + def test_get_cpl_mem_usage(self, isfile): + isfile.return_value = True + + with mock.patch( + "builtins.open", mock.mock_open(read_data=CPLLOG.encode("utf-8")) + ) as mock_file: + mem_usage = performance.get_cpl_mem_usage("/tmp/cpl.log") + + assert mem_usage == [ + (10102.0, 1673.89), + (10103.0, 1673.89), + (10104.0, 1673.89), + (10105.0, 1673.89), + ]
+ + +
+[docs] + def test_read_baseline_file_multi_line(self): + with mock.patch( + "builtins.open", + mock.mock_open( + read_data="sha:1df0 date:2023 1000.0\nsha:3b05 date:2023 2000.0" + ), + ) as mock_file: + baseline = performance.read_baseline_file("/tmp/cpl-mem.log") + + mock_file.assert_called_with("/tmp/cpl-mem.log") + assert baseline == "sha:1df0 date:2023 1000.0\nsha:3b05 date:2023 2000.0"
+ + +
+[docs] + def test_read_baseline_file_content(self): + with mock.patch( + "builtins.open", mock.mock_open(read_data="sha:1df0 date:2023 1000.0") + ) as mock_file: + baseline = performance.read_baseline_file("/tmp/cpl-mem.log") + + mock_file.assert_called_with("/tmp/cpl-mem.log") + assert baseline == "sha:1df0 date:2023 1000.0"
+ + +
+[docs] + def test_write_baseline_file(self): + with mock.patch("builtins.open", mock.mock_open()) as mock_file: + performance.write_baseline_file("/tmp/cpl-tput.log", "1000") + + mock_file.assert_called_with("/tmp/cpl-tput.log", "a")
+ + +
+[docs] + @mock.patch("CIME.baselines.performance.get_cpl_throughput") + @mock.patch("CIME.baselines.performance.get_latest_cpl_logs") + def test__perf_get_throughput(self, get_latest_cpl_logs, get_cpl_throughput): + get_cpl_throughput.side_effect = FileNotFoundError() + + with tempfile.TemporaryDirectory() as tempdir: + case, _, _, baseline_root = create_mock_case(tempdir, get_latest_cpl_logs) + + with self.assertRaises(RuntimeError): + performance._perf_get_throughput(case)
+ + +
+[docs] + @mock.patch("CIME.baselines.performance.get_cpl_mem_usage") + @mock.patch("CIME.baselines.performance.get_latest_cpl_logs") + def test__perf_get_memory_override(self, get_latest_cpl_logs, get_cpl_mem_usage): + get_cpl_mem_usage.side_effect = FileNotFoundError() + + with tempfile.TemporaryDirectory() as tempdir: + case, _, _, baseline_root = create_mock_case(tempdir, get_latest_cpl_logs) + + with self.assertRaises(RuntimeError): + performance._perf_get_memory(case, "/tmp/override")
+ + +
+[docs] + @mock.patch("CIME.baselines.performance.get_cpl_mem_usage") + @mock.patch("CIME.baselines.performance.get_latest_cpl_logs") + def test__perf_get_memory(self, get_latest_cpl_logs, get_cpl_mem_usage): + get_cpl_mem_usage.side_effect = FileNotFoundError() + + with tempfile.TemporaryDirectory() as tempdir: + case, _, _, baseline_root = create_mock_case(tempdir, get_latest_cpl_logs) + + with self.assertRaises(RuntimeError): + performance._perf_get_memory(case)
+ + +
+[docs] + @mock.patch("CIME.baselines.performance.write_baseline_file") + @mock.patch("CIME.baselines.performance.perf_get_memory") + @mock.patch("CIME.baselines.performance.perf_get_throughput") + def test_write_baseline_skip( + self, perf_get_throughput, perf_get_memory, write_baseline_file + ): + perf_get_throughput.return_value = "100" + + perf_get_memory.return_value = "1000" + + with tempfile.TemporaryDirectory() as tempdir: + case, _, _, baseline_root = create_mock_case(tempdir) + + performance.perf_write_baseline( + case, + baseline_root, + False, + False, + ) + + perf_get_throughput.assert_not_called() + perf_get_memory.assert_not_called() + write_baseline_file.assert_not_called()
+ + +
+[docs] + @mock.patch("CIME.baselines.performance.write_baseline_file") + @mock.patch("CIME.baselines.performance.perf_get_memory") + @mock.patch("CIME.baselines.performance.perf_get_throughput") + def test_write_baseline_runtimeerror( + self, perf_get_throughput, perf_get_memory, write_baseline_file + ): + perf_get_throughput.side_effect = RuntimeError + + perf_get_memory.side_effect = RuntimeError + + with tempfile.TemporaryDirectory() as tempdir: + case, _, _, baseline_root = create_mock_case(tempdir) + + performance.perf_write_baseline(case, baseline_root) + + perf_get_throughput.assert_called() + perf_get_memory.assert_called() + write_baseline_file.assert_not_called()
+ + +
+[docs] + @mock.patch("CIME.baselines.performance.write_baseline_file") + @mock.patch("CIME.baselines.performance.perf_get_memory") + @mock.patch("CIME.baselines.performance.perf_get_throughput") + def test_perf_write_baseline( + self, perf_get_throughput, perf_get_memory, write_baseline_file + ): + perf_get_throughput.return_value = ("100", "a") + + perf_get_memory.return_value = ("1000", "a") + + with tempfile.TemporaryDirectory() as tempdir: + case, _, _, baseline_root = create_mock_case(tempdir) + + performance.perf_write_baseline(case, baseline_root) + + perf_get_throughput.assert_called() + perf_get_memory.assert_called() + write_baseline_file.assert_any_call( + str(baseline_root / "cpl-tput.log"), "100", "a" + ) + write_baseline_file.assert_any_call( + str(baseline_root / "cpl-mem.log"), "1000", "a" + )
+ + +
+[docs] + @mock.patch("CIME.baselines.performance._perf_get_throughput") + @mock.patch("CIME.baselines.performance.read_baseline_file") + @mock.patch("CIME.baselines.performance.get_latest_cpl_logs") + def test_perf_compare_throughput_baseline_no_baseline_file( + self, get_latest_cpl_logs, read_baseline_file, _perf_get_throughput + ): + read_baseline_file.side_effect = FileNotFoundError + + _perf_get_throughput.return_value = 504 + + with tempfile.TemporaryDirectory() as tempdir: + case, _, _, baseline_root = create_mock_case(tempdir, get_latest_cpl_logs) + + case.get_value.side_effect = ( + str(baseline_root), + "master/ERIO.ne30_g16_rx1.A.docker_gnu", + "/tmp/components/cpl", + 0.05, + ) + + with self.assertRaises(FileNotFoundError): + performance.perf_compare_throughput_baseline(case)
+ + +
+[docs] + @mock.patch("CIME.baselines.performance._perf_get_throughput") + @mock.patch("CIME.baselines.performance.read_baseline_file") + @mock.patch("CIME.baselines.performance.get_latest_cpl_logs") + def test_perf_compare_throughput_baseline_no_baseline( + self, get_latest_cpl_logs, read_baseline_file, _perf_get_throughput + ): + read_baseline_file.return_value = "" + + _perf_get_throughput.return_value = ("504", "a") + + with tempfile.TemporaryDirectory() as tempdir: + case, _, _, baseline_root = create_mock_case(tempdir, get_latest_cpl_logs) + + case.get_baseline_dir.return_value = str( + baseline_root / "master" / "ERIO.ne30_g16_rx1.A.docker_gnu" + ) + + case.get_value.side_effect = ( + "/tmp/components/cpl", + 0.05, + ) + + (below_tolerance, comment) = performance.perf_compare_throughput_baseline( + case + ) + + assert below_tolerance is None + assert ( + comment + == "Could not compare throughput to baseline, as baseline had no value." + )
+ + +
+[docs] + @mock.patch("CIME.baselines.performance._perf_get_throughput") + @mock.patch("CIME.baselines.performance.read_baseline_file") + @mock.patch("CIME.baselines.performance.get_latest_cpl_logs") + def test_perf_compare_throughput_baseline_no_tolerance( + self, get_latest_cpl_logs, read_baseline_file, _perf_get_throughput + ): + read_baseline_file.return_value = "500" + + _perf_get_throughput.return_value = ("504", "a") + + with tempfile.TemporaryDirectory() as tempdir: + case, _, _, baseline_root = create_mock_case(tempdir, get_latest_cpl_logs) + + case.get_baseline_dir.return_value = str( + baseline_root / "master" / "ERIO.ne30_g16_rx1.A.docker_gnu" + ) + + case.get_value.side_effect = ( + "/tmp/components/cpl", + None, + ) + + (below_tolerance, comment) = performance.perf_compare_throughput_baseline( + case + ) + + assert below_tolerance + assert ( + comment + == "TPUTCOMP: Throughput changed by -0.80%: baseline=500.000 sypd, tolerance=10%, current=504.000 sypd" + )
+ + +
+[docs] + @mock.patch("CIME.baselines.performance._perf_get_throughput") + @mock.patch("CIME.baselines.performance.read_baseline_file") + @mock.patch("CIME.baselines.performance.get_latest_cpl_logs") + def test_perf_compare_throughput_baseline_above_threshold( + self, get_latest_cpl_logs, read_baseline_file, _perf_get_throughput + ): + read_baseline_file.return_value = "1000" + + _perf_get_throughput.return_value = ("504", "a") + + with tempfile.TemporaryDirectory() as tempdir: + case, _, _, baseline_root = create_mock_case(tempdir, get_latest_cpl_logs) + + case.get_baseline_dir.return_value = str( + baseline_root / "master" / "ERIO.ne30_g16_rx1.A.docker_gnu" + ) + + case.get_value.side_effect = ( + "/tmp/components/cpl", + 0.05, + ) + + (below_tolerance, comment) = performance.perf_compare_throughput_baseline( + case + ) + + assert not below_tolerance + assert ( + comment + == "Error: TPUTCOMP: Throughput changed by 49.60%: baseline=1000.000 sypd, tolerance=5%, current=504.000 sypd" + )
+ + +
+[docs] + @mock.patch("CIME.baselines.performance._perf_get_throughput") + @mock.patch("CIME.baselines.performance.read_baseline_file") + @mock.patch("CIME.baselines.performance.get_latest_cpl_logs") + def test_perf_compare_throughput_baseline( + self, get_latest_cpl_logs, read_baseline_file, _perf_get_throughput + ): + read_baseline_file.return_value = "500" + + _perf_get_throughput.return_value = ("504", "a") + + with tempfile.TemporaryDirectory() as tempdir: + case, _, _, baseline_root = create_mock_case(tempdir, get_latest_cpl_logs) + + case.get_baseline_dir.return_value = str( + baseline_root / "master" / "ERIO.ne30_g16_rx1.A.docker_gnu" + ) + + case.get_value.side_effect = ( + "/tmp/components/cpl", + 0.05, + ) + + (below_tolerance, comment) = performance.perf_compare_throughput_baseline( + case + ) + + assert below_tolerance + assert ( + comment + == "TPUTCOMP: Throughput changed by -0.80%: baseline=500.000 sypd, tolerance=5%, current=504.000 sypd" + )
+ + +
+[docs] + @mock.patch("CIME.baselines.performance.get_cpl_mem_usage") + @mock.patch("CIME.baselines.performance.read_baseline_file") + @mock.patch("CIME.baselines.performance.get_latest_cpl_logs") + def test_perf_compare_memory_baseline_no_baseline( + self, get_latest_cpl_logs, read_baseline_file, get_cpl_mem_usage + ): + read_baseline_file.return_value = "" + + get_cpl_mem_usage.return_value = [ + (1, 1000.0), + (2, 1001.0), + (3, 1002.0), + (4, 1003.0), + ] + + with tempfile.TemporaryDirectory() as tempdir: + case, _, _, baseline_root = create_mock_case(tempdir, get_latest_cpl_logs) + + case.get_baseline_dir.return_value = str( + baseline_root / "master" / "ERIO.ne30_g16_rx1.A.docker_gnu" + ) + + case.get_value.side_effect = ( + "/tmp/components/cpl", + 0.05, + ) + + (below_tolerance, comment) = performance.perf_compare_memory_baseline(case) + + assert below_tolerance + assert ( + comment + == "MEMCOMP: Memory usage highwater changed by 0.00%: baseline=0.000 MB, tolerance=5%, current=1003.000 MB" + )
+ + +
+[docs] + @mock.patch("CIME.baselines.performance.get_cpl_mem_usage") + @mock.patch("CIME.baselines.performance.read_baseline_file") + @mock.patch("CIME.baselines.performance.get_latest_cpl_logs") + def test_perf_compare_memory_baseline_not_enough_samples( + self, get_latest_cpl_logs, read_baseline_file, get_cpl_mem_usage + ): + read_baseline_file.return_value = ["1000.0"] + + get_cpl_mem_usage.return_value = [ + (1, 1000.0), + (2, 1001.0), + ] + + with tempfile.TemporaryDirectory() as tempdir: + case, _, _, baseline_root = create_mock_case(tempdir, get_latest_cpl_logs) + + case.get_value.side_effect = ( + str(baseline_root), + "master/ERIO.ne30_g16_rx1.A.docker_gnu", + "/tmp/components/cpl", + 0.05, + ) + + (below_tolerance, comment) = performance.perf_compare_memory_baseline(case) + + assert below_tolerance is None + assert comment == "Found 2 memory usage samples, need atleast 4"
+ + +
+[docs] + @mock.patch("CIME.baselines.performance.get_cpl_mem_usage") + @mock.patch("CIME.baselines.performance.read_baseline_file") + @mock.patch("CIME.baselines.performance.get_latest_cpl_logs") + def test_perf_compare_memory_baseline_no_baseline_file( + self, get_latest_cpl_logs, read_baseline_file, get_cpl_mem_usage + ): + read_baseline_file.side_effect = FileNotFoundError + + get_cpl_mem_usage.return_value = [ + (1, 1000.0), + (2, 1001.0), + (3, 1002.0), + (4, 1003.0), + ] + + with tempfile.TemporaryDirectory() as tempdir: + case, _, _, baseline_root = create_mock_case(tempdir, get_latest_cpl_logs) + + case.get_value.side_effect = ( + str(baseline_root), + "master/ERIO.ne30_g16_rx1.A.docker_gnu", + "/tmp/components/cpl", + 0.05, + ) + + with self.assertRaises(FileNotFoundError): + performance.perf_compare_memory_baseline(case)
+ + +
+[docs] + @mock.patch("CIME.baselines.performance.get_cpl_mem_usage") + @mock.patch("CIME.baselines.performance.read_baseline_file") + @mock.patch("CIME.baselines.performance.get_latest_cpl_logs") + def test_perf_compare_memory_baseline_no_tolerance( + self, get_latest_cpl_logs, read_baseline_file, get_cpl_mem_usage + ): + read_baseline_file.return_value = "1000.0" + + get_cpl_mem_usage.return_value = [ + (1, 1000.0), + (2, 1001.0), + (3, 1002.0), + (4, 1003.0), + ] + + with tempfile.TemporaryDirectory() as tempdir: + case, _, _, baseline_root = create_mock_case(tempdir, get_latest_cpl_logs) + + case.get_baseline_dir.return_value = str( + baseline_root / "master" / "ERIO.ne30_g16_rx1.A.docker_gnu" + ) + + case.get_value.side_effect = ( + "/tmp/components/cpl", + None, + ) + + (below_tolerance, comment) = performance.perf_compare_memory_baseline(case) + + assert below_tolerance + assert ( + comment + == "MEMCOMP: Memory usage highwater changed by 0.30%: baseline=1000.000 MB, tolerance=10%, current=1003.000 MB" + )
+ + +
+[docs] + @mock.patch("CIME.baselines.performance.get_cpl_mem_usage") + @mock.patch("CIME.baselines.performance.read_baseline_file") + @mock.patch("CIME.baselines.performance.get_latest_cpl_logs") + def test_perf_compare_memory_baseline_above_threshold( + self, get_latest_cpl_logs, read_baseline_file, get_cpl_mem_usage + ): + read_baseline_file.return_value = "1000.0" + + get_cpl_mem_usage.return_value = [ + (1, 2000.0), + (2, 2001.0), + (3, 2002.0), + (4, 2003.0), + ] + + with tempfile.TemporaryDirectory() as tempdir: + case, _, _, baseline_root = create_mock_case(tempdir, get_latest_cpl_logs) + + case.get_baseline_dir.return_value = str( + baseline_root / "master" / "ERIO.ne30_g16_rx1.A.docker_gnu" + ) + + case.get_value.side_effect = ( + "/tmp/components/cpl", + 0.05, + ) + + (below_tolerance, comment) = performance.perf_compare_memory_baseline(case) + + assert not below_tolerance + assert ( + comment + == "Error: MEMCOMP: Memory usage highwater changed by 100.30%: baseline=1000.000 MB, tolerance=5%, current=2003.000 MB" + )
+ + +
+[docs] + @mock.patch("CIME.baselines.performance.get_cpl_mem_usage") + @mock.patch("CIME.baselines.performance.read_baseline_file") + @mock.patch("CIME.baselines.performance.get_latest_cpl_logs") + def test_perf_compare_memory_baseline( + self, get_latest_cpl_logs, read_baseline_file, get_cpl_mem_usage + ): + read_baseline_file.return_value = "1000.0" + + get_cpl_mem_usage.return_value = [ + (1, 1000.0), + (2, 1001.0), + (3, 1002.0), + (4, 1003.0), + ] + + with tempfile.TemporaryDirectory() as tempdir: + case, _, _, baseline_root = create_mock_case(tempdir, get_latest_cpl_logs) + + case.get_baseline_dir.return_value = str( + baseline_root / "master" / "ERIO.ne30_g16_rx1.A.docker_gnu" + ) + + case.get_value.side_effect = ( + "/tmp/components/cpl", + 0.05, + ) + + (below_tolerance, comment) = performance.perf_compare_memory_baseline(case) + + assert below_tolerance + assert ( + comment + == "MEMCOMP: Memory usage highwater changed by 0.30%: baseline=1000.000 MB, tolerance=5%, current=1003.000 MB" + )
+ + +
+[docs] + def test_get_latest_cpl_logs_found_multiple(self): + with tempfile.TemporaryDirectory() as tempdir: + run_dir = Path(tempdir) / "run" + run_dir.mkdir(parents=True, exist_ok=False) + + cpl_log_path = run_dir / "cpl.log.gz" + cpl_log_path.touch() + + cpl_log_2_path = run_dir / "cpl-2023-01-01.log.gz" + cpl_log_2_path.touch() + + case = mock.MagicMock() + case.get_value.side_effect = ( + str(run_dir), + "mct", + ) + + latest_cpl_logs = performance.get_latest_cpl_logs(case) + + assert len(latest_cpl_logs) == 2 + assert sorted(latest_cpl_logs) == sorted( + [str(cpl_log_path), str(cpl_log_2_path)] + )
+ + +
+[docs] + def test_get_latest_cpl_logs_found_single(self): + with tempfile.TemporaryDirectory() as tempdir: + run_dir = Path(tempdir) / "run" + run_dir.mkdir(parents=True, exist_ok=False) + + cpl_log_path = run_dir / "cpl.log.gz" + cpl_log_path.touch() + + case = mock.MagicMock() + case.get_value.side_effect = ( + str(run_dir), + "mct", + ) + + latest_cpl_logs = performance.get_latest_cpl_logs(case) + + assert len(latest_cpl_logs) == 1 + assert latest_cpl_logs[0] == str(cpl_log_path)
+ + +
+[docs] + def test_get_latest_cpl_logs(self): + case = mock.MagicMock() + case.get_value.side_effect = ( + "/tmp/run", + "mct", + ) + + latest_cpl_logs = performance.get_latest_cpl_logs(case) + + assert len(latest_cpl_logs) == 0
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_bless_test_results.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_bless_test_results.html new file mode 100644 index 00000000000..c7d1f9d0443 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_bless_test_results.html @@ -0,0 +1,1268 @@ + + + + + + CIME.tests.test_unit_bless_test_results — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_bless_test_results

+import re
+import unittest
+import tempfile
+from unittest import mock
+from pathlib import Path
+
+from CIME.bless_test_results import (
+    bless_test_results,
+    _bless_throughput,
+    _bless_memory,
+    bless_history,
+    bless_namelists,
+    is_bless_needed,
+)
+
+
+
+[docs] +class TestUnitBlessTestResults(unittest.TestCase): +
+[docs] + @mock.patch("CIME.bless_test_results.generate_baseline") + @mock.patch("CIME.bless_test_results.compare_baseline") + def test_bless_history_fail(self, compare_baseline, generate_baseline): + generate_baseline.return_value = (False, "") + + compare_baseline.return_value = (False, "") + + case = mock.MagicMock() + case.get_value.side_effect = [ + "USER", + "SMS.f19_g16.S", + "/tmp/run", + ] + + success, comment = bless_history( + "SMS.f19_g16.S", case, "master", "/tmp/baselines", False, True + ) + + assert not success + assert comment == "Generate baseline failed: "
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.generate_baseline") + @mock.patch("CIME.bless_test_results.compare_baseline") + def test_bless_history_force(self, compare_baseline, generate_baseline): + generate_baseline.return_value = (True, "") + + compare_baseline.return_value = (False, "") + + case = mock.MagicMock() + case.get_value.side_effect = [ + "USER", + "SMS.f19_g16.S", + "/tmp/run", + ] + + success, comment = bless_history( + "SMS.f19_g16.S", case, "master", "/tmp/baselines", False, True + ) + + assert success + assert comment is None
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.compare_baseline") + def test_bless_history(self, compare_baseline): + compare_baseline.return_value = (True, "") + + case = mock.MagicMock() + case.get_value.side_effect = [ + "USER", + "SMS.f19_g16.S", + "/tmp/run", + ] + + success, comment = bless_history( + "SMS.f19_g16.S", case, "master", "/tmp/baselines", True, False + ) + + assert success + assert comment is None
+ + +
+[docs] + def test_bless_namelists_report_only(self): + success, comment = bless_namelists( + "SMS.f19_g16.S", + True, + False, + None, + "master", + "/tmp/baselines", + ) + + assert success + assert comment is None
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.get_scripts_root") + @mock.patch("CIME.bless_test_results.run_cmd") + def test_bless_namelists_pes_file(self, run_cmd, get_scripts_root): + get_scripts_root.return_value = "/tmp/cime" + + run_cmd.return_value = [1, None, None] + + success, comment = bless_namelists( + "SMS.f19_g16.S", + False, + True, + "/tmp/pes/new_layout.xml", + "master", + "/tmp/baselines", + ) + + assert not success + assert comment == "Namelist regen failed: 'None'" + + call = run_cmd.call_args_list[0] + + assert re.match( + r"/tmp/cime/create_test SMS.f19_g16.S --namelists-only -g (?:-b )?master --pesfile /tmp/pes/new_layout.xml --baseline-root /tmp/baselines -o", + call[0][0], + )
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.get_scripts_root") + @mock.patch("CIME.bless_test_results.run_cmd") + def test_bless_namelists_new_test_id(self, run_cmd, get_scripts_root): + get_scripts_root.return_value = "/tmp/cime" + + run_cmd.return_value = [1, None, None] + + success, comment = bless_namelists( + "SMS.f19_g16.S", + False, + True, + None, + "master", + "/tmp/baselines", + new_test_root="/tmp/other-test-root", + new_test_id="hello", + ) + + assert not success + assert comment == "Namelist regen failed: 'None'" + + call = run_cmd.call_args_list[0] + + assert re.match( + r"/tmp/cime/create_test SMS.f19_g16.S --namelists-only -g (?:-b )?master --test-root=/tmp/other-test-root --output-root=/tmp/other-test-root -t hello --baseline-root /tmp/baselines -o", + call[0][0], + )
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.get_scripts_root") + @mock.patch("CIME.bless_test_results.run_cmd") + def test_bless_namelists_new_test_root(self, run_cmd, get_scripts_root): + get_scripts_root.return_value = "/tmp/cime" + + run_cmd.return_value = [1, None, None] + + success, comment = bless_namelists( + "SMS.f19_g16.S", + False, + True, + None, + "master", + "/tmp/baselines", + new_test_root="/tmp/other-test-root", + ) + + assert not success + assert comment == "Namelist regen failed: 'None'" + + call = run_cmd.call_args_list[0] + + assert re.match( + r"/tmp/cime/create_test SMS.f19_g16.S --namelists-only -g (?:-b )?master --test-root=/tmp/other-test-root --output-root=/tmp/other-test-root --baseline-root /tmp/baselines -o", + call[0][0], + )
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.get_scripts_root") + @mock.patch("CIME.bless_test_results.run_cmd") + def test_bless_namelists_fail(self, run_cmd, get_scripts_root): + get_scripts_root.return_value = "/tmp/cime" + + run_cmd.return_value = [1, None, None] + + success, comment = bless_namelists( + "SMS.f19_g16.S", + False, + True, + None, + "master", + "/tmp/baselines", + ) + + assert not success + assert comment == "Namelist regen failed: 'None'" + + call = run_cmd.call_args_list[0] + + assert re.match( + r"/tmp/cime/create_test SMS.f19_g16.S --namelists-only -g (?:-b )?master --baseline-root /tmp/baselines -o", + call[0][0], + )
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.get_scripts_root") + @mock.patch("CIME.bless_test_results.run_cmd") + def test_bless_namelists_force(self, run_cmd, get_scripts_root): + get_scripts_root.return_value = "/tmp/cime" + + run_cmd.return_value = [0, None, None] + + success, comment = bless_namelists( + "SMS.f19_g16.S", + False, + True, + None, + "master", + "/tmp/baselines", + ) + + assert success + assert comment is None + + call = run_cmd.call_args_list[0] + + assert re.match( + r"/tmp/cime/create_test SMS.f19_g16.S --namelists-only -g (?:-b )?master --baseline-root /tmp/baselines -o", + call[0][0], + )
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.perf_write_baseline") + @mock.patch("CIME.bless_test_results.perf_compare_memory_baseline") + def test_bless_memory_force_error( + self, perf_compare_memory_baseline, perf_write_baseline + ): + perf_write_baseline.side_effect = Exception + + perf_compare_memory_baseline.return_value = (False, "") + + case = mock.MagicMock() + + success, comment = _bless_memory( + case, "SMS.f19_g16.S", "/tmp/baselines", "master", False, True + ) + + assert not success + assert ( + comment + == "Failed to write baseline memory usage for test 'SMS.f19_g16.S': " + ) + perf_write_baseline.assert_called()
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.perf_write_baseline") + @mock.patch("CIME.bless_test_results.perf_compare_memory_baseline") + def test_bless_memory_force( + self, perf_compare_memory_baseline, perf_write_baseline + ): + perf_compare_memory_baseline.return_value = (False, "") + + case = mock.MagicMock() + + success, comment = _bless_memory( + case, "SMS.f19_g16.S", "/tmp/baselines", "master", False, True + ) + + assert success + assert comment is None + perf_write_baseline.assert_called()
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.perf_compare_memory_baseline") + def test_bless_memory_report_only(self, perf_compare_memory_baseline): + perf_compare_memory_baseline.return_value = (True, "") + + case = mock.MagicMock() + + success, comment = _bless_memory( + case, "SMS.f19_g16.S", "/tmp/baselines", "master", True, False + ) + + assert success + assert comment is None
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.perf_write_baseline") + @mock.patch("CIME.bless_test_results.perf_compare_memory_baseline") + def test_bless_memory_general_error( + self, perf_compare_memory_baseline, perf_write_baseline + ): + perf_compare_memory_baseline.side_effect = Exception + + case = mock.MagicMock() + + success, comment = _bless_memory( + case, "SMS.f19_g16.S", "/tmp/baselines", "master", False, True + ) + + assert success + assert comment is None
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.perf_write_baseline") + @mock.patch("CIME.bless_test_results.perf_compare_memory_baseline") + def test_bless_memory_file_not_found_error( + self, perf_compare_memory_baseline, perf_write_baseline + ): + perf_compare_memory_baseline.side_effect = FileNotFoundError + + case = mock.MagicMock() + + success, comment = _bless_memory( + case, "SMS.f19_g16.S", "/tmp/baselines", "master", False, True + ) + + assert success + assert comment is None
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.perf_compare_memory_baseline") + def test_bless_memory(self, perf_compare_memory_baseline): + perf_compare_memory_baseline.return_value = (True, "") + + case = mock.MagicMock() + + success, comment = _bless_memory( + case, "SMS.f19_g16.S", "/tmp/baselines", "master", False, False + ) + + assert success
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.perf_write_baseline") + @mock.patch("CIME.bless_test_results.perf_compare_throughput_baseline") + def test_bless_throughput_force_error( + self, perf_compare_throughput_baseline, perf_write_baseline + ): + perf_write_baseline.side_effect = Exception + + perf_compare_throughput_baseline.return_value = (False, "") + + case = mock.MagicMock() + + success, comment = _bless_throughput( + case, "SMS.f19_g16.S", "/tmp/baselines", "master", False, True + ) + + assert not success + assert comment == "Failed to write baseline throughput for 'SMS.f19_g16.S': " + perf_write_baseline.assert_called()
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.perf_write_baseline") + @mock.patch("CIME.bless_test_results.perf_compare_throughput_baseline") + def test_bless_throughput_force( + self, perf_compare_throughput_baseline, perf_write_baseline + ): + perf_compare_throughput_baseline.return_value = (False, "") + + case = mock.MagicMock() + + success, comment = _bless_throughput( + case, "SMS.f19_g16.S", "/tmp/baselines", "master", False, True + ) + + assert success + assert comment is None + perf_write_baseline.assert_called()
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.perf_compare_throughput_baseline") + def test_bless_throughput_report_only(self, perf_compare_throughput_baseline): + perf_compare_throughput_baseline.return_value = (True, "") + + case = mock.MagicMock() + + success, comment = _bless_throughput( + case, "SMS.f19_g16.S", "/tmp/baselines", "master", True, False + ) + + assert success + assert comment is None
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.perf_compare_throughput_baseline") + def test_bless_throughput_general_error(self, perf_compare_throughput_baseline): + perf_compare_throughput_baseline.side_effect = Exception + + case = mock.MagicMock() + + success, comment = _bless_throughput( + case, "SMS.f19_g16.S", "/tmp/baselines", "master", False, True + ) + + assert success + assert comment is None
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.perf_write_baseline") + @mock.patch("CIME.bless_test_results.perf_compare_throughput_baseline") + def test_bless_throughput_file_not_found_error( + self, + perf_compare_throughput_baseline, + perf_write_baseline, + ): + perf_compare_throughput_baseline.side_effect = FileNotFoundError + + case = mock.MagicMock() + + success, comment = _bless_throughput( + case, "SMS.f19_g16.S", "/tmp/baselines", "master", False, True + ) + + assert success + assert comment is None
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.perf_compare_throughput_baseline") + def test_bless_throughput(self, perf_compare_throughput_baseline): + perf_compare_throughput_baseline.return_value = (True, "") + + case = mock.MagicMock() + + success, comment = _bless_throughput( + case, "SMS.f19_g16.S", "/tmp/baselines", "master", False, False + ) + + assert success
+ + +
+[docs] + @mock.patch("CIME.bless_test_results._bless_throughput") + @mock.patch("CIME.bless_test_results._bless_memory") + @mock.patch("CIME.bless_test_results.Case") + @mock.patch("CIME.bless_test_results.TestStatus") + @mock.patch("CIME.bless_test_results.get_test_status_files") + def test_bless_perf( + self, + get_test_status_files, + TestStatus, + Case, + _bless_memory, + _bless_throughput, + ): + get_test_status_files.return_value = [ + "/tmp/cases/SMS.f19_g16.S.docker_gnu/TestStatus", + ] + + ts = TestStatus.return_value + ts.get_name.return_value = "SMS.f19_g16.S.docker_gnu" + ts.get_overall_test_status.return_value = ("PASS", "RUN") + ts.get_status.side_effect = ["PASS", "PASS", "FAIL", "FAIL", "FAIL"] + + case = Case.return_value.__enter__.return_value + + _bless_memory.return_value = (True, "") + + _bless_throughput.return_value = (True, "") + + success = bless_test_results( + "master", + "/tmp/baseline", + "/tmp/cases", + "gnu", + force=True, + bless_perf=True, + ) + + assert success + _bless_memory.assert_called() + _bless_throughput.assert_called()
+ + +
+[docs] + @mock.patch("CIME.bless_test_results._bless_throughput") + @mock.patch("CIME.bless_test_results._bless_memory") + @mock.patch("CIME.bless_test_results.Case") + @mock.patch("CIME.bless_test_results.TestStatus") + @mock.patch("CIME.bless_test_results.get_test_status_files") + def test_bless_memory_only( + self, + get_test_status_files, + TestStatus, + Case, + _bless_memory, + _bless_throughput, + ): + get_test_status_files.return_value = [ + "/tmp/cases/SMS.f19_g16.S.docker_gnu/TestStatus", + ] + + ts = TestStatus.return_value + ts.get_name.return_value = "SMS.f19_g16.S.docker_gnu" + ts.get_overall_test_status.return_value = ("PASS", "RUN") + ts.get_status.side_effect = ["PASS", "PASS", "FAIL", "FAIL"] + + case = Case.return_value.__enter__.return_value + + _bless_memory.return_value = (True, "") + + success = bless_test_results( + "master", + "/tmp/baseline", + "/tmp/cases", + "gnu", + force=True, + bless_mem=True, + ) + + assert success + _bless_memory.assert_called() + _bless_throughput.assert_not_called()
+ + +
+[docs] + @mock.patch("CIME.bless_test_results._bless_throughput") + @mock.patch("CIME.bless_test_results._bless_memory") + @mock.patch("CIME.bless_test_results.Case") + @mock.patch("CIME.bless_test_results.TestStatus") + @mock.patch("CIME.bless_test_results.get_test_status_files") + def test_bless_throughput_only( + self, + get_test_status_files, + TestStatus, + Case, + _bless_memory, + _bless_throughput, + ): + get_test_status_files.return_value = [ + "/tmp/cases/SMS.f19_g16.S.docker_gnu/TestStatus", + ] + + ts = TestStatus.return_value + ts.get_name.return_value = "SMS.f19_g16.S.docker_gnu" + ts.get_overall_test_status.return_value = ("PASS", "RUN") + ts.get_status.side_effect = ["PASS", "PASS", "FAIL", "FAIL"] + + case = Case.return_value.__enter__.return_value + + _bless_throughput.return_value = (True, "") + + success = bless_test_results( + "master", + "/tmp/baseline", + "/tmp/cases", + "gnu", + force=True, + bless_tput=True, + ) + + assert success + _bless_memory.assert_not_called() + _bless_throughput.assert_called()
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.bless_namelists") + @mock.patch("CIME.bless_test_results.Case") + @mock.patch("CIME.bless_test_results.TestStatus") + @mock.patch("CIME.bless_test_results.get_test_status_files") + def test_bless_namelists_only( + self, + get_test_status_files, + TestStatus, + Case, + bless_namelists, + ): + get_test_status_files.return_value = [ + "/tmp/cases/SMS.f19_g16.S.docker_gnu/TestStatus", + ] + + ts = TestStatus.return_value + ts.get_name.return_value = "SMS.f19_g16.S.docker_gnu" + ts.get_overall_test_status.return_value = ("PASS", "RUN") + ts.get_status.side_effect = ["FAIL", "PASS", "PASS"] + + case = Case.return_value.__enter__.return_value + + bless_namelists.return_value = (True, "") + + success = bless_test_results( + "master", + "/tmp/baseline", + "/tmp/cases", + "gnu", + force=True, + namelists_only=True, + ) + + assert success + bless_namelists.assert_called()
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.bless_history") + @mock.patch("CIME.bless_test_results.Case") + @mock.patch("CIME.bless_test_results.TestStatus") + @mock.patch("CIME.bless_test_results.get_test_status_files") + def test_bless_hist_only( + self, + get_test_status_files, + TestStatus, + Case, + bless_history, + ): + get_test_status_files.return_value = [ + "/tmp/cases/SMS.f19_g16.S.docker_gnu/TestStatus", + ] + + ts = TestStatus.return_value + ts.get_name.return_value = "SMS.f19_g16.S.docker_gnu" + ts.get_overall_test_status.return_value = ("PASS", "RUN") + ts.get_status.side_effect = ["PASS", "FAIL"] + + case = Case.return_value.__enter__.return_value + + bless_history.return_value = (True, "") + + success = bless_test_results( + "master", + "/tmp/baseline", + "/tmp/cases", + "gnu", + force=True, + hist_only=True, + ) + + assert success + bless_history.assert_called()
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.Case") + @mock.patch("CIME.bless_test_results.TestStatus") + @mock.patch("CIME.bless_test_results.get_test_status_files") + def test_specific(self, get_test_status_files, TestStatus, Case): + get_test_status_files.return_value = [ + "/tmp/cases/SMS.f19_g16.S.docker_gnu.12345/TestStatus", + "/tmp/cases/PET.f19_g16.S.docker-gnu.12345/TestStatus", + ] + + ts = TestStatus.return_value + ts.get_name.return_value = "SMS.f19_g16.S.docker_gnu" + ts.get_overall_test_status.return_value = ("PASS", "RUN") + ts.get_status.side_effect = ["PASS"] * 10 + + case = Case.return_value.__enter__.return_value + + success = bless_test_results( + "master", + "/tmp/baseline", + "/tmp/cases", + "gnu", + force=True, + bless_tests=["SMS"], + ) + + assert success
+ + +
+[docs] + @mock.patch("CIME.bless_test_results._bless_memory") + @mock.patch("CIME.bless_test_results._bless_throughput") + @mock.patch("CIME.bless_test_results.bless_history") + @mock.patch("CIME.bless_test_results.bless_namelists") + @mock.patch("CIME.bless_test_results.Case") + @mock.patch("CIME.bless_test_results.TestStatus") + @mock.patch("CIME.bless_test_results.get_test_status_files") + def test_bless_tests_results_homme( + self, + get_test_status_files, + TestStatus, + Case, + bless_namelists, + bless_history, + _bless_throughput, + _bless_memory, + ): + _bless_memory.return_value = (False, "") + + _bless_throughput.return_value = (False, "") + + bless_history.return_value = (False, "") + + bless_namelists.return_value = (False, "") + + get_test_status_files.return_value = [ + "/tmp/cases/SMS.f19_g16.S.docker_gnu.12345/TestStatus", + "/tmp/cases/PET.f19_g16.S.docker-gnu.12345/TestStatus", + ] + + ts = TestStatus.return_value + ts.get_name.return_value = "SMS.f19_g16.HOMME.docker_gnu" + ts.get_overall_test_status.return_value = ("PASS", "RUN") + ts.get_status.side_effect = ["PASS", "PASS", "PASS", "PASS", "PASS"] + + case = Case.return_value.__enter__.return_value + + success = bless_test_results( + "master", + "/tmp/baseline", + "/tmp/cases", + "gnu", + force=True, + no_skip_pass=True, + ) + + assert not success
+ + +
+[docs] + @mock.patch("CIME.bless_test_results._bless_memory") + @mock.patch("CIME.bless_test_results._bless_throughput") + @mock.patch("CIME.bless_test_results.bless_history") + @mock.patch("CIME.bless_test_results.bless_namelists") + @mock.patch("CIME.bless_test_results.Case") + @mock.patch("CIME.bless_test_results.TestStatus") + @mock.patch("CIME.bless_test_results.get_test_status_files") + def test_bless_tests_results_fail( + self, + get_test_status_files, + TestStatus, + Case, + bless_namelists, + bless_history, + _bless_throughput, + _bless_memory, + ): + _bless_memory.return_value = (False, "") + + _bless_throughput.return_value = (False, "") + + bless_history.return_value = (False, "") + + bless_namelists.return_value = (False, "") + + get_test_status_files.return_value = [ + "/tmp/cases/SMS.f19_g16.S.docker_gnu.12345/TestStatus", + "/tmp/cases/PET.f19_g16.S.docker-gnu.12345/TestStatus", + ] + + ts = TestStatus.return_value + ts.get_name.return_value = "SMS.f19_g16.S.docker_gnu" + ts.get_overall_test_status.return_value = ("PASS", "RUN") + ts.get_status.side_effect = ["PASS", "PASS", "PASS", "PASS", "PASS"] + + case = Case.return_value.__enter__.return_value + + success = bless_test_results( + "master", + "/tmp/baseline", + "/tmp/cases", + "gnu", + force=True, + no_skip_pass=True, + ) + + assert not success
+ + +
+[docs] + @mock.patch("CIME.bless_test_results._bless_memory") + @mock.patch("CIME.bless_test_results._bless_throughput") + @mock.patch("CIME.bless_test_results.bless_history") + @mock.patch("CIME.bless_test_results.bless_namelists") + @mock.patch("CIME.bless_test_results.Case") + @mock.patch("CIME.bless_test_results.TestStatus") + @mock.patch("CIME.bless_test_results.get_test_status_files") + def test_no_skip_pass( + self, + get_test_status_files, + TestStatus, + Case, + bless_namelists, + bless_history, + _bless_throughput, + _bless_memory, + ): + _bless_memory.return_value = (True, "") + + _bless_throughput.return_value = (True, "") + + bless_history.return_value = (True, "") + + bless_namelists.return_value = (True, "") + + get_test_status_files.return_value = [ + "/tmp/cases/SMS.f19_g16.S.docker_gnu.12345/TestStatus", + "/tmp/cases/PET.f19_g16.S.docker-gnu.12345/TestStatus", + ] + + ts = TestStatus.return_value + ts.get_name.return_value = "SMS.f19_g16.S.docker_gnu" + ts.get_overall_test_status.return_value = ("PASS", "RUN") + ts.get_status.side_effect = ["PASS", "PASS", "PASS", "PASS", "PASS"] + + case = Case.return_value.__enter__.return_value + + success = bless_test_results( + "master", + "/tmp/baseline", + "/tmp/cases", + "gnu", + force=True, + no_skip_pass=True, + ) + + assert success
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.Case") + @mock.patch("CIME.bless_test_results.TestStatus") + @mock.patch("CIME.bless_test_results.get_test_status_files") + def test_baseline_root_none(self, get_test_status_files, TestStatus, Case): + get_test_status_files.return_value = [ + "/tmp/cases/SMS.f19_g16.S.docker_gnu.12345/TestStatus", + "/tmp/cases/PET.f19_g16.S.docker-gnu.12345/TestStatus", + ] + + ts = TestStatus.return_value + ts.get_name.return_value = "SMS.f19_g16.S.docker_gnu" + ts.get_overall_test_status.return_value = ("PASS", "RUN") + ts.get_status.side_effect = ["FAIL"] + ["PASS"] * 9 + + case = Case.return_value.__enter__.return_value + case.get_value.side_effect = [None, None] + + success = bless_test_results( + "master", + None, + "/tmp/cases", + "gnu", + force=True, + ) + + assert not success
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.bless_namelists") + @mock.patch("CIME.bless_test_results.Case") + @mock.patch("CIME.bless_test_results.TestStatus") + @mock.patch("CIME.bless_test_results.get_test_status_files") + def test_baseline_name_none( + self, get_test_status_files, TestStatus, Case, bless_namelists + ): + bless_namelists.return_value = (True, "") + + get_test_status_files.return_value = [ + "/tmp/cases/SMS.f19_g16.S.docker_gnu.12345/TestStatus", + ] + + ts = TestStatus.return_value + ts.get_name.return_value = "SMS.f19_g16.S.docker_gnu" + ts.get_overall_test_status.return_value = ("PASS", "RUN") + ts.get_status.side_effect = ["FAIL"] + ["PASS"] * 9 + + case = Case.return_value.__enter__.return_value + case.get_value.side_effect = [None, None] + + success = bless_test_results( + None, + "/tmp/baselines", + "/tmp/cases", + "gnu", + force=True, + ) + + assert success
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.Case") + @mock.patch("CIME.bless_test_results.TestStatus") + @mock.patch("CIME.bless_test_results.get_test_status_files") + def test_exclude(self, get_test_status_files, TestStatus, Case): + get_test_status_files.return_value = [ + "/tmp/cases/SMS.f19_g16.S.docker_gnu.12345/TestStatus", + "/tmp/cases/PET.f19_g16.S.docker-gnu.12345/TestStatus", + ] + + ts = TestStatus.return_value + ts.get_name.return_value = "SMS.f19_g16.S.docker_gnu" + ts.get_overall_test_status.return_value = ("PASS", "RUN") + ts.get_status.side_effect = ["PASS", "PASS", "PASS", "PASS", "PASS"] + + case = Case.return_value.__enter__.return_value + + success = bless_test_results( + "master", + "/tmp/baseline", + "/tmp/cases", + "gnu", + force=True, + exclude="SMS", + ) + + assert success
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.Case") + @mock.patch("CIME.bless_test_results.TestStatus") + @mock.patch("CIME.bless_test_results.get_test_status_files") + def test_multiple_files(self, get_test_status_files, TestStatus, Case): + get_test_status_files.return_value = [ + "/tmp/cases/SMS.f19_g16.S.docker_gnu.12345/TestStatus", + "/tmp/cases/SMS.f19_g16.S.docker-gnu.23456/TestStatus", + ] + + ts = TestStatus.return_value + ts.get_name.return_value = "SMS.f19_g16.S.docker_gnu" + ts.get_overall_test_status.return_value = ("PASS", "RUN") + ts.get_status.side_effect = ["PASS", "PASS", "PASS", "PASS", "PASS"] + + case = Case.return_value.__enter__.return_value + + success = bless_test_results( + "master", + "/tmp/baseline", + "/tmp/cases", + "gnu", + force=True, + ) + + assert success
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.Case") + @mock.patch("CIME.bless_test_results.TestStatus") + @mock.patch("CIME.bless_test_results.get_test_status_files") + def test_bless_tests_no_match(self, get_test_status_files, TestStatus, Case): + get_test_status_files.return_value = [ + "/tmp/cases/SMS.f19_g16.S.docker_gnu/TestStatus", + "/tmp/cases/PET.f19_g16.S.docker_gnu/TestStatus", + ] + + ts = TestStatus.return_value + ts.get_name.return_value = "SMS.f19_g16.S.docker_gnu" + ts.get_overall_test_status.return_value = ("PASS", "RUN") + ts.get_status.side_effect = ["PASS"] * 10 + + case = Case.return_value.__enter__.return_value + + success = bless_test_results( + "master", + "/tmp/baseline", + "/tmp/cases", + "gnu", + force=True, + bless_tests=["SEQ"], + ) + + assert success
+ + +
+[docs] + @mock.patch("CIME.bless_test_results.Case") + @mock.patch("CIME.bless_test_results.TestStatus") + @mock.patch("CIME.bless_test_results.get_test_status_files") + def test_bless_all(self, get_test_status_files, TestStatus, Case): + get_test_status_files.return_value = [ + "/tmp/cases/SMS.f19_g16.S.docker_gnu/TestStatus", + ] + + ts = TestStatus.return_value + ts.get_name.return_value = "SMS.f19_g16.S.docker_gnu" + ts.get_overall_test_status.return_value = ("PASS", "RUN") + ts.get_status.side_effect = ["PASS", "PASS", "PASS", "PASS", "PASS"] + + case = Case.return_value.__enter__.return_value + + success = bless_test_results( + "master", + "/tmp/baseline", + "/tmp/cases", + "gnu", + force=True, + ) + + assert success
+ + +
+[docs] + def test_is_bless_needed_no_skip_fail(self): + ts = mock.MagicMock() + ts.get_status.side_effect = [ + "PASS", + ] + + broken_blesses = [] + + needed = is_bless_needed( + "SMS.f19_g16.A", ts, broken_blesses, "PASS", True, "RUN" + ) + + assert needed + assert broken_blesses == []
+ + +
+[docs] + def test_is_bless_needed_overall_fail(self): + ts = mock.MagicMock() + ts.get_status.side_effect = [ + "PASS", + ] + + broken_blesses = [] + + needed = is_bless_needed( + "SMS.f19_g16.A", ts, broken_blesses, "FAIL", False, "RUN" + ) + + assert not needed + assert broken_blesses == [("SMS.f19_g16.A", "test did not pass")]
+ + +
+[docs] + def test_is_bless_needed_baseline_fail(self): + ts = mock.MagicMock() + ts.get_status.side_effect = ["PASS", "FAIL"] + + broken_blesses = [] + + needed = is_bless_needed( + "SMS.f19_g16.A", ts, broken_blesses, "PASS", False, "RUN" + ) + + assert needed + assert broken_blesses == []
+ + +
+[docs] + def test_is_bless_needed_run_phase_fail(self): + ts = mock.MagicMock() + ts.get_status.side_effect = [ + "FAIL", + ] + + broken_blesses = [] + + needed = is_bless_needed( + "SMS.f19_g16.A", ts, broken_blesses, "PASS", False, "RUN" + ) + + assert not needed + assert broken_blesses == [("SMS.f19_g16.A", "run phase did not pass")]
+ + +
+[docs] + def test_is_bless_needed_no_run_phase(self): + ts = mock.MagicMock() + ts.get_status.side_effect = [None] + + broken_blesses = [] + + needed = is_bless_needed( + "SMS.f19_g16.A", ts, broken_blesses, "PASS", False, "RUN" + ) + + assert not needed + assert broken_blesses == [("SMS.f19_g16.A", "no run phase")]
+ + +
+[docs] + def test_is_bless_needed(self): + ts = mock.MagicMock() + ts.get_status.side_effect = ["PASS", "PASS"] + + broken_blesses = [] + + needed = is_bless_needed( + "SMS.f19_g16.A", ts, broken_blesses, "PASS", False, "RUN" + ) + + assert not needed
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_case.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_case.html new file mode 100644 index 00000000000..c30be066ad3 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_case.html @@ -0,0 +1,674 @@ + + + + + + CIME.tests.test_unit_case — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.tests.test_unit_case

+#!/usr/bin/env python3
+
+import os
+import unittest
+from unittest import mock
+import tempfile
+
+from CIME.case import case_submit
+from CIME.case import Case
+from CIME import utils as cime_utils
+
+
+
+[docs] +def make_valid_case(path): + """Make the given path look like a valid case to avoid errors""" + # Case validity is determined by checking for an env_case.xml file. So put one there + # to suggest that this directory is a valid case directory. Open in append mode in + # case the file already exists. + with open(os.path.join(path, "env_case.xml"), "a"): + pass
+ + + +
+[docs] +class TestCaseSubmit(unittest.TestCase): +
+[docs] + def test_check_case(self): + case = mock.MagicMock() + # get_value arguments TEST, COMP_WAV, COMP_INTERFACE, BUILD_COMPLETE + case.get_value.side_effect = [False, "", "", True] + case_submit.check_case(case, chksum=True) + + case.check_all_input_data.assert_called_with(chksum=True)
+ + +
+[docs] + def test_check_case_test(self): + case = mock.MagicMock() + # get_value arguments TEST, COMP_WAV, COMP_INTERFACE, BUILD_COMPLETE + case.get_value.side_effect = [True, "", "", True] + case_submit.check_case(case, chksum=True) + + case.check_all_input_data.assert_not_called()
+ + +
+[docs] + @mock.patch("CIME.case.case_submit.lock_file") + @mock.patch("CIME.case.case_submit.unlock_file") + @mock.patch("os.path.basename") + def test__submit( + self, lock_file, unlock_file, basename + ): # pylint: disable=unused-argument + case = mock.MagicMock() + + case_submit._submit(case, chksum=True) # pylint: disable=protected-access + + case.check_case.assert_called_with(skip_pnl=False, chksum=True)
+ + +
+[docs] + @mock.patch("CIME.case.case_submit._submit") + @mock.patch("CIME.case.case.Case.initialize_derived_attributes") + @mock.patch("CIME.case.case.Case.get_value") + @mock.patch("CIME.case.case.Case.read_xml") + def test_submit( + self, read_xml, get_value, init, _submit + ): # pylint: disable=unused-argument + with tempfile.TemporaryDirectory() as tempdir: + get_value.side_effect = [ + tempdir, + tempdir, + tempdir, + "test", + tempdir, + True, + "baseid", + None, + True, + ] + + make_valid_case(tempdir) + with Case(tempdir, non_local=True) as case: + case.submit(chksum=True) + + _submit.assert_called_with( + case, + job=None, + no_batch=False, + prereq=None, + allow_fail=False, + resubmit=False, + resubmit_immediate=False, + skip_pnl=False, + mail_user=None, + mail_type=None, + batch_args=None, + workflow=True, + chksum=True, + )
+
+ + + +
+[docs] +class TestCase(unittest.TestCase): +
+[docs] + def setUp(self): + self.srcroot = os.path.abspath(cime_utils.get_src_root()) + self.tempdir = tempfile.TemporaryDirectory()
+ + +
+[docs] + @mock.patch("CIME.case.case.Case.read_xml") + def test_fix_sys_argv_quotes(self, read_xml): + input_data = ["./xmlquery", "--val", "PIO"] + expected_data = ["./xmlquery", "--val", "PIO"] + + with tempfile.TemporaryDirectory() as tempdir: + make_valid_case(tempdir) + + with Case(tempdir) as case: + output_data = case.fix_sys_argv_quotes(input_data) + + assert output_data == expected_data
+ + +
+[docs] + @mock.patch("CIME.case.case.Case.read_xml") + def test_fix_sys_argv_quotes_incomplete(self, read_xml): + input_data = ["./xmlquery", "--val"] + expected_data = ["./xmlquery", "--val"] + + with tempfile.TemporaryDirectory() as tempdir: + make_valid_case(tempdir) + + with Case(tempdir) as case: + output_data = case.fix_sys_argv_quotes(input_data) + + assert output_data == expected_data
+ + +
+[docs] + @mock.patch("CIME.case.case.Case.read_xml") + def test_fix_sys_argv_quotes_val(self, read_xml): + input_data = ["./xmlquery", "--val", "-test"] + expected_data = ["./xmlquery", "--val", "-test"] + + with tempfile.TemporaryDirectory() as tempdir: + make_valid_case(tempdir) + + with Case(tempdir) as case: + output_data = case.fix_sys_argv_quotes(input_data) + + assert output_data == expected_data
+ + +
+[docs] + @mock.patch("CIME.case.case.Case.read_xml") + def test_fix_sys_argv_quotes_val_quoted(self, read_xml): + input_data = ["./xmlquery", "--val", " -nlev 267 "] + expected_data = ["./xmlquery", "--val", '" -nlev 267 "'] + + with tempfile.TemporaryDirectory() as tempdir: + make_valid_case(tempdir) + + with Case(tempdir) as case: + output_data = case.fix_sys_argv_quotes(input_data) + + assert output_data == expected_data
+ + +
+[docs] + @mock.patch("CIME.case.case.Case.read_xml") + def test_fix_sys_argv_quotes_kv(self, read_xml): + input_data = ["./xmlquery", "CAM_CONFIG_OPTS= -nlev 267", "OTHER_OPTS=-test"] + expected_data = [ + "./xmlquery", + 'CAM_CONFIG_OPTS=" -nlev 267"', + "OTHER_OPTS=-test", + ] + + with tempfile.TemporaryDirectory() as tempdir: + make_valid_case(tempdir) + + with Case(tempdir) as case: + output_data = case.fix_sys_argv_quotes(input_data) + + assert output_data == expected_data
+ + +
+[docs] + @mock.patch("CIME.case.case.Case.read_xml") + @mock.patch("sys.argv", ["/src/create_newcase", "--machine", "docker"]) + @mock.patch("time.strftime", return_value="00:00:00") + @mock.patch("socket.getfqdn", return_value="host1") + @mock.patch("getpass.getuser", side_effect=["root", "root", "johndoe"]) + def test_new_hash( + self, getuser, getfqdn, strftime, read_xml + ): # pylint: disable=unused-argument + with self.tempdir as tempdir: + make_valid_case(tempdir) + with Case(tempdir) as case: + expected = ( + "134a939f62115fb44bf08a46bfb2bd13426833b5c8848cf7c4884af7af05b91a" + ) + + # Check idempotency + for _ in range(2): + value = case.new_hash() + + self.assertTrue( + value == expected, "{} != {}".format(value, expected) + ) + + expected = ( + "bb59f1c473ac07e9dd30bfab153c0530a777f89280b716cf42e6fe2f49811a6e" + ) + + value = case.new_hash() + + self.assertTrue(value == expected, "{} != {}".format(value, expected))
+ + +
+[docs] + @mock.patch("CIME.case.case.Case.read_xml") + @mock.patch("sys.argv", ["/src/create_newcase", "--machine", "docker"]) + @mock.patch("time.strftime", return_value="00:00:00") + @mock.patch("CIME.case.case.lock_file") + @mock.patch("CIME.case.case.Case.set_lookup_value") + @mock.patch("CIME.case.case.Case.apply_user_mods") + @mock.patch("CIME.case.case.Case.create_caseroot") + @mock.patch("CIME.case.case.Case.configure") + @mock.patch("socket.getfqdn", return_value="host1") + @mock.patch("getpass.getuser", return_value="root") + @mock.patch.dict(os.environ, {"CIME_MODEL": "cesm"}) + def test_copy( + self, + getuser, + getfqdn, + configure, + create_caseroot, # pylint: disable=unused-argument + apply_user_mods, + set_lookup_value, + lock_file, + strftime, # pylint: disable=unused-argument + read_xml, + ): # pylint: disable=unused-argument + expected_first_hash = ( + "134a939f62115fb44bf08a46bfb2bd13426833b5c8848cf7c4884af7af05b91a" + ) + expected_second_hash = ( + "3561339a49daab999e3c4ea2f03a9c6acc33296a5bc35f1bfb82e7b5e10bdf38" + ) + + with self.tempdir as tempdir: + caseroot = os.path.join(tempdir, "test1") + with Case(caseroot, read_only=False) as case: + case.create( + "test1", + self.srcroot, + "A", + "f19_g16_rx1", + machine_name="cori-haswell", + ) + + # Check that they're all called + configure.assert_called_with( + "A", + "f19_g16_rx1", + machine_name="cori-haswell", + project=None, + pecount=None, + compiler=None, + mpilib=None, + pesfile=None, + gridfile=None, + multi_driver=False, + ninst=1, + test=False, + walltime=None, + queue=None, + output_root=None, + run_unsupported=False, + answer=None, + input_dir=None, + driver=None, + workflowid="default", + non_local=False, + extra_machines_dir=None, + case_group=None, + ngpus_per_node=0, + gpu_type=None, + gpu_offload=None, + ) + create_caseroot.assert_called() + apply_user_mods.assert_called() + lock_file.assert_called() + + set_lookup_value.assert_called_with("CASE_HASH", expected_first_hash) + + 
strftime.return_value = "10:00:00" + with mock.patch( + "CIME.case.case.Case.set_value" + ) as set_value, mock.patch("sys.argv", ["/src/create_clone"]): + case.copy("test2", "{}_2".format(tempdir)) + + set_value.assert_called_with("CASE_HASH", expected_second_hash)
+ + +
+[docs] + @mock.patch("CIME.case.case.Case.read_xml") + @mock.patch("sys.argv", ["/src/create_newcase", "--machine", "docker"]) + @mock.patch("time.strftime", return_value="00:00:00") + @mock.patch("CIME.case.case.lock_file") + @mock.patch("CIME.case.case.Case.set_lookup_value") + @mock.patch("CIME.case.case.Case.apply_user_mods") + @mock.patch("CIME.case.case.Case.create_caseroot") + @mock.patch("CIME.case.case.Case.configure") + @mock.patch("socket.getfqdn", return_value="host1") + @mock.patch("getpass.getuser", return_value="root") + @mock.patch.dict(os.environ, {"CIME_MODEL": "cesm"}) + def test_create( + self, + get_user, + getfqdn, + configure, + create_caseroot, # pylint: disable=unused-argument + apply_user_mods, + set_lookup_value, + lock_file, + strftime, # pylint: disable=unused-argument + read_xml, + ): # pylint: disable=unused-argument + with self.tempdir as tempdir: + caseroot = os.path.join(tempdir, "test1") + with Case(caseroot, read_only=False) as case: + case.create( + "test1", + self.srcroot, + "A", + "f19_g16_rx1", + machine_name="cori-haswell", + ) + + # Check that they're all called + configure.assert_called_with( + "A", + "f19_g16_rx1", + machine_name="cori-haswell", + project=None, + pecount=None, + compiler=None, + mpilib=None, + pesfile=None, + gridfile=None, + multi_driver=False, + ninst=1, + test=False, + walltime=None, + queue=None, + output_root=None, + run_unsupported=False, + answer=None, + input_dir=None, + driver=None, + workflowid="default", + non_local=False, + extra_machines_dir=None, + case_group=None, + ngpus_per_node=0, + gpu_type=None, + gpu_offload=None, + ) + create_caseroot.assert_called() + apply_user_mods.assert_called() + lock_file.assert_called() + + set_lookup_value.assert_called_with( + "CASE_HASH", + "134a939f62115fb44bf08a46bfb2bd13426833b5c8848cf7c4884af7af05b91a", + )
+
+ + + +
+[docs] +class TestCase_RecordCmd(unittest.TestCase): +
+[docs] + def setUp(self): + self.tempdir = tempfile.TemporaryDirectory()
+ + +
+[docs] + def assert_calls_match(self, calls, expected): + self.assertTrue(len(calls) == len(expected), calls) + + for x, y in zip(calls, expected): + self.assertTrue(x == y, calls)
+ + +
+[docs] + @mock.patch("CIME.case.case.Case.__init__", return_value=None) + @mock.patch("CIME.case.case.Case.flush") + @mock.patch("CIME.case.case.Case.get_value") + @mock.patch("CIME.case.case.open", mock.mock_open()) + @mock.patch("time.strftime", return_value="00:00:00") + @mock.patch("sys.argv", ["/src/create_newcase"]) + def test_error( + self, strftime, get_value, flush, init + ): # pylint: disable=unused-argument + Case._force_read_only = False # pylint: disable=protected-access + + with self.tempdir as tempdir, mock.patch( + "CIME.case.case.open", mock.mock_open() + ) as m: + m.side_effect = PermissionError() + + with Case(tempdir) as case: + get_value.side_effect = [tempdir, "/src"] + + # We didn't need to make tempdir look like a valid case for the Case + # constructor because we mock that constructor, but we *do* need to make + # it look like a valid case for record_cmd. + make_valid_case(tempdir) + case.record_cmd()
+ + +
+[docs] + @mock.patch("CIME.case.case.Case.__init__", return_value=None) + @mock.patch("CIME.case.case.Case.flush") + @mock.patch("CIME.case.case.Case.get_value") + @mock.patch("CIME.case.case.open", mock.mock_open()) + @mock.patch("time.strftime", return_value="00:00:00") + @mock.patch("sys.argv", ["/src/create_newcase"]) + def test_init( + self, strftime, get_value, flush, init + ): # pylint: disable=unused-argument + Case._force_read_only = False # pylint: disable=protected-access + + mocked_open = mock.mock_open() + + with self.tempdir as tempdir, mock.patch("CIME.case.case.open", mocked_open): + with Case(tempdir) as case: + get_value.side_effect = [tempdir, "/src"] + + case.record_cmd(init=True) + + mocked_open.assert_called_with(f"{tempdir}/replay.sh", "a") + + handle = mocked_open() + + handle.writelines.assert_called_with( + [ + "#!/bin/bash\n\n", + "set -e\n\n", + "# Created 00:00:00\n\n", + 'CASEDIR="{}"\n\n'.format(tempdir), + "/src/create_newcase\n\n", + 'cd "${CASEDIR}"\n\n', + ] + )
+ + +
+[docs] + @mock.patch("CIME.case.case.Case.__init__", return_value=None) + @mock.patch("CIME.case.case.Case.flush") + @mock.patch("CIME.case.case.Case.get_value") + @mock.patch("CIME.case.case.open", mock.mock_open()) + @mock.patch("time.strftime", return_value="00:00:00") + @mock.patch("sys.argv", ["/src/scripts/create_newcase"]) + def test_sub_relative( + self, strftime, get_value, flush, init + ): # pylint: disable=unused-argument + Case._force_read_only = False # pylint: disable=protected-access + + mocked_open = mock.mock_open() + + with self.tempdir as tempdir, mock.patch("CIME.case.case.open", mocked_open): + with Case(tempdir) as case: + get_value.side_effect = [tempdir, "/src"] + + case.record_cmd(init=True) + + expected = [ + "#!/bin/bash\n\n", + "set -e\n\n", + "# Created 00:00:00\n\n", + 'CASEDIR="{}"\n\n'.format(tempdir), + "/src/scripts/create_newcase\n\n", + 'cd "${CASEDIR}"\n\n', + ] + + handle = mocked_open() + handle.writelines.assert_called_with(expected)
+ + +
+[docs] + @mock.patch("CIME.case.case.Case.__init__", return_value=None) + @mock.patch("CIME.case.case.Case.flush") + @mock.patch("CIME.case.case.Case.get_value") + def test_cmd_arg(self, get_value, flush, init): # pylint: disable=unused-argument + Case._force_read_only = False # pylint: disable=protected-access + + mocked_open = mock.mock_open() + + with self.tempdir as tempdir, mock.patch("CIME.case.case.open", mocked_open): + with Case(tempdir) as case: + get_value.side_effect = [ + tempdir, + "/src", + ] + + # We didn't need to make tempdir look like a valid case for the Case + # constructor because we mock that constructor, but we *do* need to make + # it look like a valid case for record_cmd. + make_valid_case(tempdir) + case.record_cmd(["/some/custom/command", "arg1"]) + + expected = [ + "/some/custom/command arg1\n\n", + ] + + handle = mocked_open() + handle.writelines.assert_called_with(expected)
+
+ + + +if __name__ == "__main__": + unittest.main() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_case_fake.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_case_fake.html new file mode 100644 index 00000000000..6bd22614a06 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_case_fake.html @@ -0,0 +1,174 @@ + + + + + + CIME.tests.test_unit_case_fake — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_case_fake

+#!/usr/bin/env python3
+
+"""
+This module contains unit tests of CaseFake
+"""
+
+import unittest
+import tempfile
+import os
+import shutil
+
+from CIME.tests.case_fake import CaseFake
+
+
+
+[docs] +class TestCaseFake(unittest.TestCase): +
+[docs] + def setUp(self): + self.tempdir = tempfile.mkdtemp()
+ + +
+[docs] + def tearDown(self): + shutil.rmtree(self.tempdir, ignore_errors=True)
+ + +
+[docs] + def test_create_clone(self): + # Setup + old_caseroot = os.path.join(self.tempdir, "oldcase") + oldcase = CaseFake(old_caseroot) + oldcase.set_value("foo", "bar") + + # Exercise + new_caseroot = os.path.join(self.tempdir, "newcase") + clone = oldcase.create_clone(new_caseroot) + + # Verify + self.assertEqual("bar", clone.get_value("foo")) + self.assertEqual("newcase", clone.get_value("CASE")) + self.assertEqual("newcase", clone.get_value("CASEBASEID")) + self.assertEqual(new_caseroot, clone.get_value("CASEROOT")) + self.assertEqual(os.path.join(new_caseroot, "run"), clone.get_value("RUNDIR"))
+
+ + + +if __name__ == "__main__": + unittest.main() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_case_setup.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_case_setup.html new file mode 100644 index 00000000000..467b631b3a9 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_case_setup.html @@ -0,0 +1,356 @@ + + + + + + CIME.tests.test_unit_case_setup — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_case_setup

+#!/usr/bin/env python3
+
+import os
+import unittest
+import tempfile
+import contextlib
+from pathlib import Path
+from unittest import mock
+
+from CIME.case import case_setup
+
+
+
+[docs] +@contextlib.contextmanager +def create_machines_dir(): + """Creates temp machines directory with fake content""" + with tempfile.TemporaryDirectory() as temp_path: + machines_path = os.path.join(temp_path, "machines") + cmake_path = os.path.join(machines_path, "cmake_macros") + Path(cmake_path).mkdir(parents=True) + Path(os.path.join(cmake_path, "Macros.cmake")).touch() + Path(os.path.join(cmake_path, "test.cmake")).touch() + + yield temp_path
+ + + +
+[docs] +@contextlib.contextmanager +def chdir(path): + old_path = os.getcwd() + os.chdir(path) + + try: + yield + finally: + os.chdir(old_path)
+ + + +# pylint: disable=protected-access +
+[docs] +class TestCaseSetup(unittest.TestCase): +
+[docs] + @mock.patch("CIME.case.case_setup.copy_depends_files") + def test_create_macros_cmake(self, copy_depends_files): + machine_mock = mock.MagicMock() + machine_mock.get_machine_name.return_value = "test" + + # create context stack to cleanup after test + with contextlib.ExitStack() as stack: + root_path = stack.enter_context(create_machines_dir()) + case_path = stack.enter_context(tempfile.TemporaryDirectory()) + + machines_path = os.path.join(root_path, "machines") + type(machine_mock).machines_dir = mock.PropertyMock( + return_value=machines_path + ) + + # make sure we're calling everything from within the case root + stack.enter_context(chdir(case_path)) + + case_setup._create_macros_cmake( + case_path, + os.path.join(machines_path, "cmake_macros"), + machine_mock, + "gnu-test", + os.path.join(case_path, "cmake_macros"), + ) + + assert os.path.exists(os.path.join(case_path, "Macros.cmake")) + assert os.path.exists(os.path.join(case_path, "cmake_macros", "test.cmake")) + + copy_depends_files.assert_called_with( + "test", machines_path, case_path, "gnu-test" + )
+ + +
+[docs] + @mock.patch("CIME.case.case_setup._create_macros_cmake") + def test_create_macros(self, _create_macros_cmake): + case_mock = mock.MagicMock() + + machine_mock = mock.MagicMock() + machine_mock.get_machine_name.return_value = "test" + + # create context stack to cleanup after test + with contextlib.ExitStack() as stack: + root_path = stack.enter_context(create_machines_dir()) + case_path = stack.enter_context(tempfile.TemporaryDirectory()) + + cmake_macros_path = os.path.join(root_path, "machines", "cmake_macros") + case_mock.get_value.return_value = cmake_macros_path + + machines_path = os.path.join(root_path, "machines") + type(machine_mock).machines_dir = mock.PropertyMock( + return_value=machines_path + ) + + # do not generate env_mach_specific.xml + Path(os.path.join(case_path, "env_mach_specific.xml")).touch() + + case_setup._create_macros( + case_mock, + machine_mock, + case_path, + "gnu-test", + "openmpi", + False, + "mct", + "LINUX", + ) + + case_mock.get_value.assert_any_call("CMAKE_MACROS_DIR") + + # make sure we're calling everything from within the case root + stack.enter_context(chdir(case_path)) + + _create_macros_cmake.assert_called_with( + case_path, + cmake_macros_path, + machine_mock, + "gnu-test", + os.path.join(case_path, "cmake_macros"), + )
+ + +
+[docs] + def test_create_macros_copy_user(self): + case_mock = mock.MagicMock() + + machine_mock = mock.MagicMock() + machine_mock.get_machine_name.return_value = "test" + + # create context stack to cleanup after test + with contextlib.ExitStack() as stack: + root_path = stack.enter_context(create_machines_dir()) + case_path = stack.enter_context(tempfile.TemporaryDirectory()) + user_path = stack.enter_context(tempfile.TemporaryDirectory()) + + user_cime_path = Path(os.path.join(user_path, ".cime")) + user_cime_path.mkdir() + user_cmake = user_cime_path / "user.cmake" + user_cmake.touch() + + cmake_macros_path = os.path.join(root_path, "machines", "cmake_macros") + case_mock.get_value.return_value = cmake_macros_path + + machines_path = os.path.join(root_path, "machines") + type(machine_mock).machines_dir = mock.PropertyMock( + return_value=machines_path + ) + + # do not generate env_mach_specific.xml + Path(os.path.join(case_path, "env_mach_specific.xml")).touch() + + stack.enter_context(mock.patch.dict(os.environ, {"HOME": user_path})) + + # make sure we're calling everything from within the case root + stack.enter_context(chdir(case_path)) + + case_setup._create_macros( + case_mock, + machine_mock, + case_path, + "gnu-test", + "openmpi", + False, + "mct", + "LINUX", + ) + + case_mock.get_value.assert_any_call("CMAKE_MACROS_DIR") + + assert os.path.exists(os.path.join(case_path, "cmake_macros", "user.cmake"))
+ + +
+[docs] + def test_create_macros_copy_extra(self): + case_mock = mock.MagicMock() + + machine_mock = mock.MagicMock() + machine_mock.get_machine_name.return_value = "test" + + # create context stack to cleanup after test + with contextlib.ExitStack() as stack: + root_path = stack.enter_context(create_machines_dir()) + case_path = stack.enter_context(tempfile.TemporaryDirectory()) + extra_path = stack.enter_context(tempfile.TemporaryDirectory()) + + extra_cmake_path = Path(extra_path, "cmake_macros") + extra_cmake_path.mkdir() + + extra_macros_path = extra_cmake_path / "extra.cmake" + extra_macros_path.touch() + + cmake_macros_path = os.path.join(root_path, "machines", "cmake_macros") + case_mock.get_value.side_effect = [cmake_macros_path, extra_path] + + machines_path = os.path.join(root_path, "machines") + type(machine_mock).machines_dir = mock.PropertyMock( + return_value=machines_path + ) + + # do not generate env_mach_specific.xml + Path(os.path.join(case_path, "env_mach_specific.xml")).touch() + + # make sure we're calling everything from within the case root + stack.enter_context(chdir(case_path)) + + case_setup._create_macros( + case_mock, + machine_mock, + case_path, + "gnu-test", + "openmpi", + False, + "mct", + "LINUX", + ) + + case_mock.get_value.assert_any_call("EXTRA_MACHDIR") + + assert os.path.exists( + os.path.join(case_path, "cmake_macros", "extra.cmake") + )
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_compare_test_results.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_compare_test_results.html new file mode 100644 index 00000000000..316619e70ad --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_compare_test_results.html @@ -0,0 +1,250 @@ + + + + + + CIME.tests.test_unit_compare_test_results — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_compare_test_results

+#!/usr/bin/env python3
+
+"""
+This module contains unit tests for compare_test_results
+"""
+
+import unittest
+import tempfile
+import os
+import shutil
+
+from CIME import utils
+from CIME import compare_test_results
+from CIME.test_status import *
+from CIME.tests.case_fake import CaseFake
+
+
+
+[docs] +class TestCaseFake(unittest.TestCase): +
+[docs] + def setUp(self): + self.tempdir = tempfile.mkdtemp() + self.test_root = os.path.join(self.tempdir, "tests") + self.baseline_root = os.path.join(self.test_root, "baselines") + + # TODO switch to unittest.mock + self._old_strftime = utils.time.strftime + utils.time.strftime = lambda *args: "2021-02-20" + + self._old_init = CaseFake.__init__ + CaseFake.__init__ = lambda x, y, *args: self._old_init( + x, y, create_case_root=False + ) + + self._old_case = compare_test_results.Case + compare_test_results.Case = CaseFake
+ + +
+[docs] + def tearDown(self): + utils.time.strftime = self._old_strftime + CaseFake.__init__ = self._old_init + compare_test_results.Case = self._old_case + + shutil.rmtree(self.tempdir, ignore_errors=True)
+ + + def _compare_test_results(self, baseline, test_id, phases, **kwargs): + test_status_root = os.path.join(self.test_root, "gnu." + test_id) + os.makedirs(test_status_root) + + with TestStatus(test_status_root, "test") as status: + for x in phases: + status.set_status(x[0], x[1]) + + compare_test_results.compare_test_results( + baseline, self.baseline_root, self.test_root, "gnu", test_id, **kwargs + ) + + compare_log = os.path.join( + test_status_root, "compare.log.{}.2021-02-20".format(baseline) + ) + + self.assertTrue(os.path.exists(compare_log)) + +
+[docs] + def test_namelists_only(self): + compare_test_results.compare_namelists = lambda *args: True + compare_test_results.compare_history = lambda *args: (True, "Detail comments") + + phases = [ + (SETUP_PHASE, "PASS"), + (RUN_PHASE, "PASS"), + ] + + self._compare_test_results( + "test1", "test-baseline", phases, namelists_only=True + )
+ + +
+[docs] + def test_hist_only(self): + compare_test_results.compare_namelists = lambda *args: True + compare_test_results.compare_history = lambda *args: (True, "Detail comments") + + phases = [ + (SETUP_PHASE, "PASS"), + (RUN_PHASE, "PASS"), + ] + + self._compare_test_results("test1", "test-baseline", phases, hist_only=True)
+ + +
+[docs] + def test_failed_early(self): + compare_test_results.compare_namelists = lambda *args: True + compare_test_results.compare_history = lambda *args: (True, "Detail comments") + + phases = [ + (CREATE_NEWCASE_PHASE, "PASS"), + ] + + self._compare_test_results("test1", "test-baseline", phases)
+ + +
+[docs] + def test_baseline(self): + compare_test_results.compare_namelists = lambda *args: True + compare_test_results.compare_history = lambda *args: (True, "Detail comments") + + phases = [ + (SETUP_PHASE, "PASS"), + (RUN_PHASE, "PASS"), + ] + + self._compare_test_results("test1", "test-baseline", phases)
+
+ + + +if __name__ == "__main__": + unittest.main() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_compare_two.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_compare_two.html new file mode 100644 index 00000000000..adda505d0a8 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_compare_two.html @@ -0,0 +1,851 @@ + + + + + + CIME.tests.test_unit_compare_two — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_compare_two

+#!/usr/bin/env python3
+
+"""
+This module contains unit tests of the core logic in SystemTestsCompareTwo.
+"""
+
+# Ignore privacy concerns for unit tests, so that unit tests can access
+# protected members of the system under test
+#
+# pylint:disable=protected-access
+
+import unittest
+from collections import namedtuple
+import functools
+import os
+import shutil
+import tempfile
+from unittest import mock
+
+from CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo
+import CIME.test_status as test_status
+from CIME.tests.case_fake import CaseFake
+
+# ========================================================================
+# Structure for storing information about calls made to methods
+# ========================================================================
+
+# You can create a Call object to record a single call made to a method:
+#
+# Call(method, arguments)
+#     method (str): name of method
+#     arguments (dict): dictionary mapping argument names to values
+#
+# Example:
+#     If you want to record a call to foo(bar = 1, baz = 2):
+#         somecall = Call(method = 'foo', arguments = {'bar': 1, 'baz': 2})
+#     Or simply:
+#         somecall = Call('foo', {'bar': 1, 'baz': 2})
+Call = namedtuple("Call", ["method", "arguments"])
+
+# ========================================================================
+# Names of methods for which we want to record calls
+# ========================================================================
+
+# We use constants for these method names because, in some cases, a typo in a
+# hard-coded string could cause a test to always pass, which would be a Bad
+# Thing.
+#
+# For now the names of the constants match the strings they equate to, which
+# match the actual method names. But it's fine if this doesn't remain the case
+# moving forward (which is another reason to use constants rather than
+# hard-coded strings in the tests).
+
+METHOD_case_one_custom_prerun_action = "_case_one_custom_prerun_action"
+METHOD_case_one_custom_postrun_action = "_case_one_custom_postrun_action"
+METHOD_case_two_custom_prerun_action = "_case_two_custom_prerun_action"
+METHOD_case_two_custom_postrun_action = "_case_two_custom_postrun_action"
+METHOD_link_to_case2_output = "_link_to_case2_output"
+METHOD_run_indv = "_run_indv"
+
+# ========================================================================
+# Fake version of SystemTestsCompareTwo that overrides some functionality for
+# the sake of unit testing
+# ========================================================================
+
+# A SystemTestsCompareTwoFake object can be controlled to fail at a given
+# point. See the documentation in its __init__ method for details.
+#
+# It logs what stubbed-out methods have been called in its log attribute; this
+# is a list of Call objects (see above for their definition).
+
+
+
+
+[docs]
+class SystemTestsCompareTwoFake(SystemTestsCompareTwo):
+    def __init__(
+        self,
+        case1,
+        run_one_suffix="base",
+        run_two_suffix="test",
+        separate_builds=False,
+        multisubmit=False,
+        case2setup_raises_exception=False,
+        run_one_should_pass=True,
+        run_two_should_pass=True,
+        compare_should_pass=True,
+    ):
+        """
+        Initialize a SystemTestsCompareTwoFake object
+
+        The core test phases prior to RUN_PHASE are set to TEST_PASS_STATUS;
+        RUN_PHASE is left unset (as is any later phase)
+
+        Args:
+            case1 (CaseFake): existing case
+            run_one_suffix (str, optional): Suffix used for first run. Defaults
+                to 'base'. Currently MUST be 'base'.
+            run_two_suffix (str, optional): Suffix used for the second run. Defaults to 'test'.
+            separate_builds (bool, optional): Passed to SystemTestsCompareTwo.__init__
+            multisubmit (bool, optional): Passed to SystemTestsCompareTwo.__init__
+            case2setup_raises_exception (bool, optional): If True, then the call
+                to _case_two_setup will raise an exception. Default is False.
+            run_one_should_pass (bool, optional): Whether the run_indv method should
+                pass for the first run. Default is True, meaning it will pass.
+            run_two_should_pass (bool, optional): Whether the run_indv method should
+                pass for the second run. Default is True, meaning it will pass.
+            compare_should_pass (bool, optional): Whether the comparison between the two
+                cases should pass. Default is True, meaning it will pass.
+        """
+
+        self._case2setup_raises_exception = case2setup_raises_exception
+
+        # NOTE(wjs, 2016-08-03) Currently, due to limitations in the test
+        # infrastructure, run_one_suffix MUST be 'base'. However, I'm keeping it
+        # as an explicit argument to the constructor so that it's easy to relax
+        # this requirement later: To relax this assumption, remove the following
+        # assertion and add run_one_suffix as an argument to
+        # SystemTestsCompareTwo.__init__
+        assert run_one_suffix == "base"
+
+        SystemTestsCompareTwo.__init__(
+            self,
+            case1,
+            separate_builds=separate_builds,
+            run_two_suffix=run_two_suffix,
+            multisubmit=multisubmit,
+        )
+
+        # Need to tell test status that all phases prior to the run phase have
+        # passed, since this is checked in the run call (at least for the build
+        # phase status)
+        with self._test_status:
+            for phase in test_status.CORE_PHASES:
+                if phase == test_status.RUN_PHASE:
+                    break
+                self._test_status.set_status(phase, test_status.TEST_PASS_STATUS)
+
+        self.run_pass_caseroot = []
+        if run_one_should_pass:
+            self.run_pass_caseroot.append(self._case1.get_value("CASEROOT"))
+        if run_two_should_pass:
+            self.run_pass_caseroot.append(self._case2.get_value("CASEROOT"))
+
+        self.compare_should_pass = compare_should_pass
+
+        self.log = []
+
+    # ------------------------------------------------------------------------
+    # Stubs of methods called by SystemTestsCommon.__init__ that interact with
+    # the system or case object in ways we want to avoid here
+    # ------------------------------------------------------------------------
+
+    def _init_environment(self, caseroot):
+        pass
+
+    def _init_locked_files(self, caseroot, expected):
+        pass
+
+    def _init_case_setup(self):
+        pass
+
+    # ------------------------------------------------------------------------
+    # Fake implementations of methods that are typically provided by
+    # SystemTestsCommon
+    # ------------------------------------------------------------------------
+
+[docs] + def run_indv( + self, + suffix="base", + st_archive=False, + submit_resubmits=None, + keep_init_generated_files=False, + ): + """ + This fake implementation appends to the log and raises an exception if + it's supposed to + + Note that the Call object appended to the log has the current CASE name + in addition to the method arguments. (This is mainly to ensure that the + proper suffix is used for the proper case, but this extra check can be + removed if it's a maintenance problem.) + """ + caseroot = self._case.get_value("CASEROOT") + self.log.append(Call(METHOD_run_indv, {"suffix": suffix, "CASEROOT": caseroot})) + + # Determine whether we should raise an exception + # + # It's important that this check be based on some attribute of the + # self._case object, to ensure that the right case has been activated + # for this call to run_indv (e.g., to catch if we forgot to activate + # case2 before the second call to run_indv). + if caseroot not in self.run_pass_caseroot: + raise RuntimeError("caseroot not in run_pass_caseroot")
+
+
+    def _do_compare_test(self, suffix1, suffix2, ignore_fieldlist_diffs=False):
+        """
+        This fake implementation allows controlling whether compare_test
+        passes or fails
+        """
+        return (self.compare_should_pass, "no comment", None)
+
+    def _check_for_memleak(self):
+        pass
+
+    def _st_archive_case_test(self):
+        pass
+
+    # ------------------------------------------------------------------------
+    # Fake implementations of methods that are typically provided by
+    # SystemTestsCompareTwo
+    #
+    # Since we're overriding these, their functionality is untested here!
+    # (Though note that _link_to_case2_output is tested elsewhere.)
+    # ------------------------------------------------------------------------
+
+    def _case_from_existing_caseroot(self, caseroot):
+        """
+        Returns a CaseFake object instead of a Case object
+        """
+        return CaseFake(caseroot, create_case_root=False)
+
+    def _link_to_case2_output(self):
+        self.log.append(Call(METHOD_link_to_case2_output, {}))
+
+    # ------------------------------------------------------------------------
+    # Fake implementations of methods that are typically provided by the
+    # individual test
+    #
+    # The values set here are asserted against in some unit tests
+    # ------------------------------------------------------------------------
+
+    def _common_setup(self):
+        self._case.set_value("var_set_in_common_setup", "common_val")
+
+    def _case_one_setup(self):
+        self._case.set_value("var_set_in_setup", "case1val")
+
+    def _case_two_setup(self):
+        self._case.set_value("var_set_in_setup", "case2val")
+        if self._case2setup_raises_exception:
+            raise RuntimeError
+
+    def _case_one_custom_prerun_action(self):
+        self.log.append(Call(METHOD_case_one_custom_prerun_action, {}))
+
+    def _case_one_custom_postrun_action(self):
+        self.log.append(Call(METHOD_case_one_custom_postrun_action, {}))
+
+    def _case_two_custom_prerun_action(self):
+        self.log.append(Call(METHOD_case_two_custom_prerun_action, {}))
+
+    def _case_two_custom_postrun_action(self):
+        self.log.append(Call(METHOD_case_two_custom_postrun_action, {}))
+ + + +# ======================================================================== +# Test class itself +# ======================================================================== + + +
+[docs] +class TestSystemTestsCompareTwo(unittest.TestCase): +
+[docs] + def setUp(self): + self.original_wd = os.getcwd() + # create a sandbox in which case directories can be created + self.tempdir = tempfile.mkdtemp()
+ + +
+[docs] + def tearDown(self): + # Some tests trigger a chdir call in the SUT; make sure we return to the + # original directory at the end of the test + os.chdir(self.original_wd) + + shutil.rmtree(self.tempdir, ignore_errors=True)
+ + +
+[docs] + def get_caseroots(self, casename="mytest"): + """ + Returns a tuple (case1root, case2root) + """ + case1root = os.path.join(self.tempdir, casename) + case2root = os.path.join(case1root, "case2", casename) + return case1root, case2root
+ + +
+[docs] + def get_compare_phase_name(self, mytest): + """ + Returns a string giving the compare phase name for this test + """ + run_one_suffix = mytest._run_one_suffix + run_two_suffix = mytest._run_two_suffix + compare_phase_name = "{}_{}_{}".format( + test_status.COMPARE_PHASE, run_one_suffix, run_two_suffix + ) + return compare_phase_name
+ + +
+[docs] + def test_resetup_case_single_exe(self): + # Setup + case1root = os.path.join(self.tempdir, "case1") + case1 = CaseFake(case1root) + case1._read_only_mode = False + + mytest = SystemTestsCompareTwoFake(case1) + + case1.set_value = mock.MagicMock() + case1.get_value = mock.MagicMock() + case1.get_value.side_effect = ["/tmp", "/tmp/bld", False] + + mytest._resetup_case(test_status.RUN_PHASE, reset=True) + + case1.set_value.assert_not_called() + + case1.get_value.side_effect = ["/tmp", "/tmp/bld", True] + + mytest._resetup_case(test_status.RUN_PHASE, reset=True) + + case1.set_value.assert_not_called() + + case1.get_value.side_effect = ["/tmp", "/other/bld", False] + + mytest._resetup_case(test_status.RUN_PHASE, reset=True) + + case1.set_value.assert_not_called() + + case1.get_value.side_effect = ["/tmp", "/other/bld", True] + + mytest._resetup_case(test_status.RUN_PHASE, reset=True) + + case1.set_value.assert_called_with("BUILD_COMPLETE", True)
+ + +
+[docs] + def test_setup(self): + # Ensure that test setup properly sets up case 1 and case 2 + + # Setup + case1root = os.path.join(self.tempdir, "case1") + case1 = CaseFake(case1root) + case1.set_value("var_preset", "preset_value") + + # Exercise + mytest = SystemTestsCompareTwoFake(case1) + + # Verify + # Make sure that pre-existing values in case1 are copied to case2 (via + # clone) + self.assertEqual("preset_value", mytest._case2.get_value("var_preset")) + + # Make sure that _common_setup is called for both + self.assertEqual( + "common_val", mytest._case1.get_value("var_set_in_common_setup") + ) + self.assertEqual( + "common_val", mytest._case2.get_value("var_set_in_common_setup") + ) + + # Make sure that _case_one_setup and _case_two_setup are called + # appropriately + self.assertEqual("case1val", mytest._case1.get_value("var_set_in_setup")) + self.assertEqual("case2val", mytest._case2.get_value("var_set_in_setup"))
+ + +
+[docs] + def test_setup_separate_builds_sharedlibroot(self): + # If we're using separate_builds, the two cases should still use + # the same sharedlibroot + + # Setup + case1root, _ = self.get_caseroots() + case1 = CaseFake(case1root) + case1.set_value("SHAREDLIBROOT", os.path.join(case1root, "sharedlibroot")) + + # Exercise + mytest = SystemTestsCompareTwoFake(case1, separate_builds=True) + + # Verify + self.assertEqual( + case1.get_value("SHAREDLIBROOT"), mytest._case2.get_value("SHAREDLIBROOT") + )
+ + +
+[docs] + def test_setup_case2_exists(self): + # If case2 already exists, then setup code should not be called + + # Setup + case1root = os.path.join(self.tempdir, "case1") + case1 = CaseFake(case1root) + os.makedirs(os.path.join(case1root, "case2", "case1")) + + # Exercise + mytest = SystemTestsCompareTwoFake(case1, run_two_suffix="test") + + # Verify: + + # Make sure that case2 object is set (i.e., that it doesn't remain None) + self.assertEqual("case1", mytest._case2.get_value("CASE")) + + # Variables set in various setup methods should not be set + # (In the real world - i.e., outside of this unit testing fakery - these + # values would be set when the Case objects are created.) + self.assertIsNone(mytest._case1.get_value("var_set_in_common_setup")) + self.assertIsNone(mytest._case2.get_value("var_set_in_common_setup")) + self.assertIsNone(mytest._case1.get_value("var_set_in_setup")) + self.assertIsNone(mytest._case2.get_value("var_set_in_setup"))
+ + +
+[docs] + def test_setup_error(self): + # If there is an error in setup, an exception should be raised and the + # case2 directory should be removed + + # Setup + case1root = os.path.join(self.tempdir, "case1") + case1 = CaseFake(case1root) + + # Exercise + with self.assertRaises(Exception): + SystemTestsCompareTwoFake( + case1, run_two_suffix="test", case2setup_raises_exception=True + ) + + # Verify + self.assertFalse(os.path.exists(os.path.join(case1root, "case1.test")))
+ + +
+[docs] + def test_run_phase_passes(self): + # Make sure the run phase behaves properly when all runs succeed. + + # Setup + case1root = os.path.join(self.tempdir, "case1") + case1 = CaseFake(case1root) + mytest = SystemTestsCompareTwoFake(case1) + + # Exercise + mytest.run() + + # Verify + self.assertEqual( + test_status.TEST_PASS_STATUS, + mytest._test_status.get_status(test_status.RUN_PHASE), + )
+ + +
+[docs] + def test_run_phase_internal_calls(self): + # Make sure that the correct calls are made to methods stubbed out by + # SystemTestsCompareTwoFake (when runs succeed) + # + # The point of this is: A number of methods called from the run_phase + # method are stubbed out in the Fake test implementation, because their + # actions are awkward in these unit tests. But we still want to make + # sure that those methods actually got called correctly. + + # Setup + run_one_suffix = "base" + run_two_suffix = "run2" + case1root, case2root = self.get_caseroots() + case1 = CaseFake(case1root) + mytest = SystemTestsCompareTwoFake( + case1, run_one_suffix=run_one_suffix, run_two_suffix=run_two_suffix + ) + + # Exercise + mytest.run() + + # Verify + expected_calls = [ + Call(METHOD_case_one_custom_prerun_action, {}), + Call(METHOD_run_indv, {"suffix": run_one_suffix, "CASEROOT": case1root}), + Call(METHOD_case_one_custom_postrun_action, {}), + Call(METHOD_case_two_custom_prerun_action, {}), + Call(METHOD_run_indv, {"suffix": run_two_suffix, "CASEROOT": case2root}), + Call(METHOD_case_two_custom_postrun_action, {}), + Call(METHOD_link_to_case2_output, {}), + ] + self.assertEqual(expected_calls, mytest.log)
+ + +
+[docs] + def test_run_phase_internal_calls_multisubmit_phase1(self): + # Make sure that the correct calls are made to methods stubbed out by + # SystemTestsCompareTwoFake (when runs succeed), when we have a + # multi-submit test, in the first phase + + # Setup + run_one_suffix = "base" + run_two_suffix = "run2" + case1root, _ = self.get_caseroots() + case1 = CaseFake(case1root) + mytest = SystemTestsCompareTwoFake( + case1=case1, + run_one_suffix=run_one_suffix, + run_two_suffix=run_two_suffix, + multisubmit=True, + ) + # RESUBMIT=1 signals first phase + case1.set_value("RESUBMIT", 1) + + # Exercise + mytest.run() + + # Verify + expected_calls = [ + Call(METHOD_case_one_custom_prerun_action, {}), + Call(METHOD_run_indv, {"suffix": run_one_suffix, "CASEROOT": case1root}), + Call(METHOD_case_one_custom_postrun_action, {}), + ] + self.assertEqual(expected_calls, mytest.log) + + # Also verify that comparison is NOT called: + compare_phase_name = self.get_compare_phase_name(mytest) + self.assertEqual( + test_status.TEST_PEND_STATUS, + mytest._test_status.get_status(compare_phase_name), + )
+ + +
+[docs] + def test_run_phase_internal_calls_multisubmit_phase2(self): + # Make sure that the correct calls are made to methods stubbed out by + # SystemTestsCompareTwoFake (when runs succeed), when we have a + # multi-submit test, in the second phase + + # Setup + run_one_suffix = "base" + run_two_suffix = "run2" + case1root, case2root = self.get_caseroots() + case1 = CaseFake(case1root) + mytest = SystemTestsCompareTwoFake( + case1=case1, + run_one_suffix=run_one_suffix, + run_two_suffix=run_two_suffix, + multisubmit=True, + compare_should_pass=True, + ) + # RESUBMIT=0 signals second phase + case1.set_value("RESUBMIT", 0) + + # Exercise + mytest.run() + + # Verify + expected_calls = [ + Call(METHOD_case_two_custom_prerun_action, {}), + Call(METHOD_run_indv, {"suffix": run_two_suffix, "CASEROOT": case2root}), + Call(METHOD_case_two_custom_postrun_action, {}), + Call(METHOD_link_to_case2_output, {}), + ] + self.assertEqual(expected_calls, mytest.log) + + # Also verify that comparison is called: + compare_phase_name = self.get_compare_phase_name(mytest) + self.assertEqual( + test_status.TEST_PASS_STATUS, + mytest._test_status.get_status(compare_phase_name), + )
+ + +
+[docs] + def test_internal_calls_multisubmit_failed_state(self): + run_one_suffix = "base" + run_two_suffix = "run2" + case1root, _ = self.get_caseroots() + case1 = CaseFake(case1root) + + def _set_initial_test_values(x): + x.set_value("RESUBMIT", 1) + + case1.set_initial_test_values = functools.partial( + _set_initial_test_values, case1 + ) + + # Standard first phase + case1.set_value("IS_FIRST_RUN", True) + case1.set_value("RESUBMIT", 1) + + mytest = SystemTestsCompareTwoFake( + case1=case1, + run_one_suffix=run_one_suffix, + run_two_suffix=run_two_suffix, + multisubmit=True, + ) + + mytest.run() + + expected_calls = [ + Call(METHOD_case_one_custom_prerun_action, {}), + Call(METHOD_run_indv, {"CASEROOT": case1root, "suffix": "base"}), + Call(METHOD_case_one_custom_postrun_action, {}), + ] + + self.assertEqual(expected_calls, mytest.log) + + # Emulate a rerun ensure phase 1 still runs + case1.set_value("IS_FIRST_RUN", True) + case1.set_value("RESUBMIT", 0) + + # Reset the log + mytest.log = [] + + mytest.run() + + expected_calls = [ + Call(METHOD_case_one_custom_prerun_action, {}), + Call(METHOD_run_indv, {"CASEROOT": case1root, "suffix": "base"}), + Call(METHOD_case_one_custom_postrun_action, {}), + ] + + self.assertEqual(expected_calls, mytest.log)
+ + +
+[docs] + def test_run1_fails(self): + # Make sure that a failure in run1 is reported correctly + + # Setup + case1root = os.path.join(self.tempdir, "case1") + case1 = CaseFake(case1root) + mytest = SystemTestsCompareTwoFake(case1, run_one_should_pass=False) + + # Exercise + try: + mytest.run() + except Exception: + pass + + # Verify + self.assertEqual( + test_status.TEST_FAIL_STATUS, + mytest._test_status.get_status(test_status.RUN_PHASE), + )
+ + +
+[docs] + def test_run2_fails(self): + # Make sure that a failure in run2 is reported correctly + + # Setup + case1root = os.path.join(self.tempdir, "case1") + case1 = CaseFake(case1root) + mytest = SystemTestsCompareTwoFake(case1, run_two_should_pass=False) + + # Exercise + try: + mytest.run() + except Exception: + pass + + # Verify + self.assertEqual( + test_status.TEST_FAIL_STATUS, + mytest._test_status.get_status(test_status.RUN_PHASE), + )
+ + +
+[docs] + def test_compare_passes(self): + # Make sure that a pass in the comparison is reported correctly + + # Setup + case1root = os.path.join(self.tempdir, "case1") + case1 = CaseFake(case1root) + mytest = SystemTestsCompareTwoFake(case1, compare_should_pass=True) + + # Exercise + mytest.run() + + # Verify + compare_phase_name = self.get_compare_phase_name(mytest) + self.assertEqual( + test_status.TEST_PASS_STATUS, + mytest._test_status.get_status(compare_phase_name), + )
+ + +
+[docs] + def test_compare_fails(self): + # Make sure that a failure in the comparison is reported correctly + + # Setup + case1root = os.path.join(self.tempdir, "case1") + case1 = CaseFake(case1root) + mytest = SystemTestsCompareTwoFake(case1, compare_should_pass=False) + + # Exercise + mytest.run() + + # Verify + compare_phase_name = self.get_compare_phase_name(mytest) + self.assertEqual( + test_status.TEST_FAIL_STATUS, + mytest._test_status.get_status(compare_phase_name), + )
+
+ + + +if __name__ == "__main__": + unittest.main(verbosity=2, catchbreak=True) +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_config.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_config.html new file mode 100644 index 00000000000..f37ca48ada3 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_config.html @@ -0,0 +1,272 @@ + + + + + + CIME.tests.test_unit_config — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.tests.test_unit_config

+import os
+import unittest
+import tempfile
+
+from CIME.config import Config
+
+
+
+[docs] +class TestConfig(unittest.TestCase): +
+[docs] + def test_class_external(self): + with tempfile.TemporaryDirectory() as tempdir: + complex_file = os.path.join(tempdir, "01_complex.py") + + with open(complex_file, "w") as fd: + fd.write( + """ +class TestComplex: + def do_something(self): + print("Something complex") + """ + ) + + test_file = os.path.join(tempdir, "02_test.py") + + with open(test_file, "w") as fd: + fd.write( + """ +from CIME.customize import TestComplex + +use_feature1 = True +use_feature2 = False + +def prerun_provenance(case, **kwargs): + print("prerun_provenance") + + external = TestComplex() + + external.do_something() + + return True + """ + ) + + config = Config.load(tempdir) + + assert config.use_feature1 + assert not config.use_feature2 + assert config.prerun_provenance + assert config.prerun_provenance("test") + + with self.assertRaises(AttributeError): + config.postrun_provenance("test")
+ + +
+[docs] + def test_class(self): + with tempfile.TemporaryDirectory() as tempdir: + test_file = os.path.join(tempdir, "test.py") + + with open(test_file, "w") as fd: + fd.write( + """ +use_feature1 = True +use_feature2 = False + +class TestComplex: + def do_something(self): + print("Something complex") + +def prerun_provenance(case, **kwargs): + print("prerun_provenance") + + external = TestComplex() + + external.do_something() + + return True + """ + ) + + config = Config.load(tempdir) + + assert config.use_feature1 + assert not config.use_feature2 + assert config.prerun_provenance + assert config.prerun_provenance("test") + + with self.assertRaises(AttributeError): + config.postrun_provenance("test")
+ + +
+[docs] + def test_load(self): + with tempfile.TemporaryDirectory() as tempdir: + test_file = os.path.join(tempdir, "test.py") + + with open(test_file, "w") as fd: + fd.write( + """ +use_feature1 = True +use_feature2 = False + +def prerun_provenance(case, **kwargs): + print("prerun_provenance") + + return True + """ + ) + + config = Config.load(tempdir) + + assert config.use_feature1 + assert not config.use_feature2 + assert config.prerun_provenance + assert config.prerun_provenance("test") + + with self.assertRaises(AttributeError): + config.postrun_provenance("test")
+ + +
+[docs] + def test_overwrite(self): + with tempfile.TemporaryDirectory() as tempdir: + test_file = os.path.join(tempdir, "test.py") + + with open(test_file, "w") as fd: + fd.write( + """ +use_feature1 = True +use_feature2 = False + +def prerun_provenance(case, **kwargs): + print("prerun_provenance") + + return True + """ + ) + + Config.use_feature1 = False + + config = Config.load(tempdir) + + assert config.use_feature1
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_cs_status.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_cs_status.html new file mode 100644 index 00000000000..efc006f9534 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_cs_status.html @@ -0,0 +1,462 @@ + + + + + + CIME.tests.test_unit_cs_status — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_cs_status

+#!/usr/bin/env python3
+
+import io
+import unittest
+import shutil
+import os
+import tempfile
+import re
+from CIME.cs_status import cs_status
+from CIME import test_status
+from CIME.tests.custom_assertions_test_status import CustomAssertionsTestStatus
+
+
+
+[docs] +class TestCsStatus(CustomAssertionsTestStatus): + + # ------------------------------------------------------------------------ + # Test helper functions + # ------------------------------------------------------------------------ + + # An arbitrary phase we can use when we want to work with a non-core phase + _NON_CORE_PHASE = test_status.MEMLEAK_PHASE + + # Another arbitrary phase if we need two different non-core phases + _NON_CORE_PHASE2 = test_status.BASELINE_PHASE + +
+[docs] + def setUp(self): + self._testroot = tempfile.mkdtemp() + self._output = io.StringIO()
+ + +
+[docs] + def tearDown(self): + self._output.close() + shutil.rmtree(self._testroot, ignore_errors=True)
+ + +
+[docs] + def create_test_dir(self, test_dir): + """Creates the given test directory under testroot. + + Returns the full path to the created test directory. + """ + fullpath = os.path.join(self._testroot, test_dir) + os.makedirs(fullpath) + return fullpath
+ + +
+[docs] + @staticmethod + def create_test_status_core_passes(test_dir_path, test_name): + """Creates a TestStatus file in the given path, with PASS status + for all core phases""" + with test_status.TestStatus(test_dir=test_dir_path, test_name=test_name) as ts: + for phase in test_status.CORE_PHASES: + ts.set_status(phase, test_status.TEST_PASS_STATUS)
+ + +
+[docs] + def set_last_core_phase_to_fail(self, test_dir_path, test_name): + """Sets the last core phase to FAIL + + Returns the name of this phase""" + fail_phase = test_status.CORE_PHASES[-1] + self.set_phase_to_status( + test_dir_path=test_dir_path, + test_name=test_name, + phase=fail_phase, + status=test_status.TEST_FAIL_STATUS, + ) + return fail_phase
+ + +
+[docs] + @staticmethod + def set_phase_to_status(test_dir_path, test_name, phase, status): + """Sets the given phase to the given status for this test""" + with test_status.TestStatus(test_dir=test_dir_path, test_name=test_name) as ts: + ts.set_status(phase, status)
+ + + # ------------------------------------------------------------------------ + # Begin actual tests + # ------------------------------------------------------------------------ + +
+[docs] + def test_force_rebuild(self): + test_name = "my.test.name" + test_dir = "my.test.name.testid" + test_dir_path = self.create_test_dir(test_dir) + self.create_test_status_core_passes(test_dir_path, test_name) + cs_status( + [os.path.join(test_dir_path, "TestStatus")], + force_rebuild=True, + out=self._output, + ) + self.assert_status_of_phase( + self._output.getvalue(), + test_status.TEST_PEND_STATUS, + test_status.SHAREDLIB_BUILD_PHASE, + test_name, + )
+ + +
+[docs] + def test_single_test(self): + """cs_status for a single test should include some minimal expected output""" + test_name = "my.test.name" + test_dir = "my.test.name.testid" + test_dir_path = self.create_test_dir(test_dir) + self.create_test_status_core_passes(test_dir_path, test_name) + cs_status([os.path.join(test_dir_path, "TestStatus")], out=self._output) + self.assert_core_phases(self._output.getvalue(), test_name, fails=[])
+ + +
+[docs] + def test_two_tests(self): + """cs_status for two tests (one with a FAIL) should include some minimal expected output""" + test_name1 = "my.test.name1" + test_name2 = "my.test.name2" + test_dir1 = test_name1 + ".testid" + test_dir2 = test_name2 + ".testid" + test_dir_path1 = self.create_test_dir(test_dir1) + test_dir_path2 = self.create_test_dir(test_dir2) + self.create_test_status_core_passes(test_dir_path1, test_name1) + self.create_test_status_core_passes(test_dir_path2, test_name2) + test2_fail_phase = self.set_last_core_phase_to_fail(test_dir_path2, test_name2) + cs_status( + [ + os.path.join(test_dir_path1, "TestStatus"), + os.path.join(test_dir_path2, "TestStatus"), + ], + out=self._output, + ) + self.assert_core_phases(self._output.getvalue(), test_name1, fails=[]) + self.assert_core_phases( + self._output.getvalue(), test_name2, fails=[test2_fail_phase] + )
+ + +
+[docs] + def test_fails_only(self): + """With fails_only flag, only fails and pends should appear in the output""" + test_name = "my.test.name" + test_dir = "my.test.name.testid" + test_dir_path = self.create_test_dir(test_dir) + self.create_test_status_core_passes(test_dir_path, test_name) + fail_phase = self.set_last_core_phase_to_fail(test_dir_path, test_name) + pend_phase = self._NON_CORE_PHASE + self.set_phase_to_status( + test_dir_path, + test_name, + phase=pend_phase, + status=test_status.TEST_PEND_STATUS, + ) + cs_status( + [os.path.join(test_dir_path, "TestStatus")], + fails_only=True, + out=self._output, + ) + self.assert_status_of_phase( + output=self._output.getvalue(), + status=test_status.TEST_FAIL_STATUS, + phase=fail_phase, + test_name=test_name, + ) + self.assert_status_of_phase( + output=self._output.getvalue(), + status=test_status.TEST_PEND_STATUS, + phase=pend_phase, + test_name=test_name, + ) + for phase in test_status.CORE_PHASES: + if phase != fail_phase: + self.assert_phase_absent( + output=self._output.getvalue(), phase=phase, test_name=test_name + ) + self.assertNotRegex(self._output.getvalue(), r"Overall:")
+ + +
+[docs] + def test_count_fails(self): + """Test the count of fails with three tests + + For first phase of interest: First test FAILs, second PASSes, + third FAILs; count should be 2, and this phase should not appear + individually for each test. + + For second phase of interest: First test PASSes, second PASSes, + third FAILs; count should be 1, and this phase should not appear + individually for each test. + """ + # Note that this test does NOT cover: + # - combining count_fails_phase_list with fails_only: currently, + # this wouldn't cover any additional code/logic + # - ensuring that PENDs are also counted: currently, this + # wouldn't cover any additional code/logic + phase_of_interest1 = self._NON_CORE_PHASE + phase_of_interest2 = self._NON_CORE_PHASE2 + statuses1 = [ + test_status.TEST_FAIL_STATUS, + test_status.TEST_PASS_STATUS, + test_status.TEST_FAIL_STATUS, + ] + statuses2 = [ + test_status.TEST_PASS_STATUS, + test_status.TEST_PASS_STATUS, + test_status.TEST_FAIL_STATUS, + ] + test_paths = [] + test_names = [] + for testnum in range(3): + test_name = "my.test.name" + str(testnum) + test_names.append(test_name) + test_dir = test_name + ".testid" + test_dir_path = self.create_test_dir(test_dir) + self.create_test_status_core_passes(test_dir_path, test_name) + self.set_phase_to_status( + test_dir_path, + test_name, + phase=phase_of_interest1, + status=statuses1[testnum], + ) + self.set_phase_to_status( + test_dir_path, + test_name, + phase=phase_of_interest2, + status=statuses2[testnum], + ) + test_paths.append(os.path.join(test_dir_path, "TestStatus")) + + cs_status( + test_paths, + count_fails_phase_list=[phase_of_interest1, phase_of_interest2], + out=self._output, + ) + + for testnum in range(3): + self.assert_phase_absent( + output=self._output.getvalue(), + phase=phase_of_interest1, + test_name=test_names[testnum], + ) + self.assert_phase_absent( + output=self._output.getvalue(), + phase=phase_of_interest2, + test_name=test_names[testnum], + ) + 
count_regex1 = r"{} +non-passes: +2".format(re.escape(phase_of_interest1)) + self.assertRegex(self._output.getvalue(), count_regex1) + count_regex2 = r"{} +non-passes: +1".format(re.escape(phase_of_interest2)) + self.assertRegex(self._output.getvalue(), count_regex2)
+ + +
+[docs] + def test_expected_fails(self): + """With the expected_fails_file flag, expected failures should be flagged as such""" + test_name1 = "my.test.name1" + test_name2 = "my.test.name2" + test_dir1 = test_name1 + ".testid" + test_dir2 = test_name2 + ".testid" + test_dir_path1 = self.create_test_dir(test_dir1) + test_dir_path2 = self.create_test_dir(test_dir2) + self.create_test_status_core_passes(test_dir_path1, test_name1) + self.create_test_status_core_passes(test_dir_path2, test_name2) + test1_fail_phase = self.set_last_core_phase_to_fail(test_dir_path1, test_name1) + test2_fail_phase = self.set_last_core_phase_to_fail(test_dir_path2, test_name2) + + # One phase is labeled as an expected failure for test1, nothing for test2: + expected_fails_contents = """<?xml version= "1.0"?> +<expectedFails version="1.1"> + <test name="{test_name1}"> + <phase name="{test1_fail_phase}"> + <status>{fail_status}</status> + </phase> + </test> +</expectedFails> +""".format( + test_name1=test_name1, + test1_fail_phase=test1_fail_phase, + fail_status=test_status.TEST_FAIL_STATUS, + ) + expected_fails_filepath = os.path.join(self._testroot, "ExpectedFails.xml") + with open(expected_fails_filepath, "w") as expected_fails_file: + expected_fails_file.write(expected_fails_contents) + + cs_status( + [ + os.path.join(test_dir_path1, "TestStatus"), + os.path.join(test_dir_path2, "TestStatus"), + ], + expected_fails_filepath=expected_fails_filepath, + out=self._output, + ) + + # Both test1 and test2 should have a failure for one phase, but this should be + # marked as expected only for test1. 
+ self.assert_core_phases( + self._output.getvalue(), test_name1, fails=[test1_fail_phase] + ) + self.assert_status_of_phase( + self._output.getvalue(), + test_status.TEST_FAIL_STATUS, + test1_fail_phase, + test_name1, + xfail="expected", + ) + self.assert_core_phases( + self._output.getvalue(), test_name2, fails=[test2_fail_phase] + ) + self.assert_status_of_phase( + self._output.getvalue(), + test_status.TEST_FAIL_STATUS, + test2_fail_phase, + test_name2, + xfail="no", + ) + # Make sure that no other phases are mistakenly labeled as expected failures: + self.assert_num_expected_unexpected_fails( + self._output.getvalue(), num_expected=1, num_unexpected=0 + )
+
+ + + +if __name__ == "__main__": + unittest.main() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_custom_assertions_test_status.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_custom_assertions_test_status.html new file mode 100644 index 00000000000..9bcca8c1a60 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_custom_assertions_test_status.html @@ -0,0 +1,426 @@ + + + + + + CIME.tests.test_unit_custom_assertions_test_status — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_custom_assertions_test_status

+#!/usr/bin/env python3
+
+"""
+This module contains unit tests of CustomAssertionsTestStatus
+"""
+
+import unittest
+from CIME import test_status
+from CIME.tests.custom_assertions_test_status import CustomAssertionsTestStatus
+
+
+
+[docs] +class TestCustomAssertions(CustomAssertionsTestStatus): + + _UNEXPECTED_COMMENT = test_status.TEST_UNEXPECTED_FAILURE_COMMENT_START + " blah)" + +
+[docs] + @staticmethod + def output_line(status, test_name, phase, extra=""): + output = status + " " + test_name + " " + phase + if extra: + output += " " + extra + output += "\n" + return output
+ + +
+[docs] + def test_assertPhaseAbsent_passes(self): + """assert_phase_absent should pass when the phase is absent for + the given test_name""" + test_name1 = "my.test.name1" + test_name2 = "my.test.name2" + output = self.output_line("PASS", test_name1, "PHASE1") + output += self.output_line("PASS", test_name2, "PHASE2") + + self.assert_phase_absent(output, "PHASE2", test_name1) + self.assert_phase_absent(output, "PHASE1", test_name2)
+ + +
+[docs] + def test_assertPhaseAbsent_fails(self): + """assert_phase_absent should fail when the phase is present for + the given test_name""" + test_name = "my.test.name" + output = self.output_line("PASS", test_name, "PHASE1") + + with self.assertRaises(AssertionError): + self.assert_phase_absent(output, "PHASE1", test_name)
+ + +
+[docs] + def test_assertCorePhases_passes(self): + """assert_core_phases passes when it should""" + output = "" + fails = [test_status.CORE_PHASES[1]] + test_name = "my.test.name" + for phase in test_status.CORE_PHASES: + if phase in fails: + status = test_status.TEST_FAIL_STATUS + else: + status = test_status.TEST_PASS_STATUS + output = output + self.output_line(status, test_name, phase) + + self.assert_core_phases(output, test_name, fails)
+ + +
+[docs] + def test_assertCorePhases_missingPhase_fails(self): + """assert_core_phases fails if there is a missing phase""" + output = "" + test_name = "my.test.name" + for phase in test_status.CORE_PHASES: + if phase != test_status.CORE_PHASES[1]: + output = output + self.output_line( + test_status.TEST_PASS_STATUS, test_name, phase + ) + + with self.assertRaises(AssertionError): + self.assert_core_phases(output, test_name, fails=[])
+ + +
+[docs] + def test_assertCorePhases_wrongStatus_fails(self): + """assert_core_phases fails if a phase has the wrong status""" + output = "" + test_name = "my.test.name" + for phase in test_status.CORE_PHASES: + output = output + self.output_line( + test_status.TEST_PASS_STATUS, test_name, phase + ) + + with self.assertRaises(AssertionError): + self.assert_core_phases( + output, test_name, fails=[test_status.CORE_PHASES[1]] + )
+ + +
+[docs] + def test_assertCorePhases_wrongName_fails(self): + """assert_core_phases fails if the test name is wrong""" + output = "" + test_name = "my.test.name" + for phase in test_status.CORE_PHASES: + output = output + self.output_line( + test_status.TEST_PASS_STATUS, test_name, phase + ) + + with self.assertRaises(AssertionError): + self.assert_core_phases(output, "my.test", fails=[])
+ + + # Note: Basic functionality of assert_status_of_phase is covered sufficiently via + # tests of assert_core_phases. Below we just cover some other aspects that aren't + # already covered. + +
+[docs] + def test_assertStatusOfPhase_withExtra_passes(self): + """Make sure assert_status_of_phase passes when there is some extra text at the + end of the line""" + test_name = "my.test.name" + output = self.output_line( + test_status.TEST_FAIL_STATUS, + test_name, + test_status.CORE_PHASES[0], + extra=test_status.TEST_EXPECTED_FAILURE_COMMENT, + ) + self.assert_status_of_phase( + output, test_status.TEST_FAIL_STATUS, test_status.CORE_PHASES[0], test_name + )
+ + +
+[docs] + def test_assertStatusOfPhase_xfailNo_passes(self): + """assert_status_of_phase should pass when xfail='no' and there is no + EXPECTED/UNEXPECTED on the line""" + test_name = "my.test.name" + output = self.output_line( + test_status.TEST_FAIL_STATUS, test_name, test_status.CORE_PHASES[0] + ) + self.assert_status_of_phase( + output, + test_status.TEST_FAIL_STATUS, + test_status.CORE_PHASES[0], + test_name, + xfail="no", + ) + # While we're at it, also test assert_num_expected_unexpected_fails + self.assert_num_expected_unexpected_fails( + output, num_expected=0, num_unexpected=0 + )
+ + +
+[docs] + def test_assertStatusOfPhase_xfailNo_fails(self): + """assert_status_of_phase should fail when xfail='no' but the line contains the + EXPECTED comment""" + test_name = "my.test.name" + output = self.output_line( + test_status.TEST_FAIL_STATUS, + test_name, + test_status.CORE_PHASES[0], + extra=test_status.TEST_EXPECTED_FAILURE_COMMENT, + ) + + with self.assertRaises(AssertionError): + self.assert_status_of_phase( + output, + test_status.TEST_FAIL_STATUS, + test_status.CORE_PHASES[0], + test_name, + xfail="no", + ) + # While we're at it, also test assert_num_expected_unexpected_fails + self.assert_num_expected_unexpected_fails( + output, num_expected=1, num_unexpected=0 + )
+ + +
+[docs] + def test_assertStatusOfPhase_xfailExpected_passes(self): + """assert_status_of_phase should pass when xfail='expected' and the line contains + the EXPECTED comment""" + test_name = "my.test.name" + output = self.output_line( + test_status.TEST_FAIL_STATUS, + test_name, + test_status.CORE_PHASES[0], + extra=test_status.TEST_EXPECTED_FAILURE_COMMENT, + ) + self.assert_status_of_phase( + output, + test_status.TEST_FAIL_STATUS, + test_status.CORE_PHASES[0], + test_name, + xfail="expected", + ) + # While we're at it, also test assert_num_expected_unexpected_fails + self.assert_num_expected_unexpected_fails( + output, num_expected=1, num_unexpected=0 + )
+ + +
+[docs] + def test_assertStatusOfPhase_xfailExpected_fails(self): + """assert_status_of_phase should fail when xfail='expected' but the line does NOT contain + the EXPECTED comment""" + test_name = "my.test.name" + # Note that the line contains the UNEXPECTED comment, but not the EXPECTED comment + # (we assume that if the assertion correctly fails in this case, then it will also + # correctly handle the case where neither the EXPECTED nor UNEXPECTED comment is + # present). + output = self.output_line( + test_status.TEST_FAIL_STATUS, + test_name, + test_status.CORE_PHASES[0], + extra=self._UNEXPECTED_COMMENT, + ) + + with self.assertRaises(AssertionError): + self.assert_status_of_phase( + output, + test_status.TEST_FAIL_STATUS, + test_status.CORE_PHASES[0], + test_name, + xfail="expected", + ) + # While we're at it, also test assert_num_expected_unexpected_fails + self.assert_num_expected_unexpected_fails( + output, num_expected=0, num_unexpected=1 + )
+ + +
+[docs] + def test_assertStatusOfPhase_xfailUnexpected_passes(self): + """assert_status_of_phase should pass when xfail='unexpected' and the line contains + the UNEXPECTED comment""" + test_name = "my.test.name" + output = self.output_line( + test_status.TEST_FAIL_STATUS, + test_name, + test_status.CORE_PHASES[0], + extra=self._UNEXPECTED_COMMENT, + ) + self.assert_status_of_phase( + output, + test_status.TEST_FAIL_STATUS, + test_status.CORE_PHASES[0], + test_name, + xfail="unexpected", + ) + # While we're at it, also test assert_num_expected_unexpected_fails + self.assert_num_expected_unexpected_fails( + output, num_expected=0, num_unexpected=1 + )
+ + +
+[docs] + def test_assertStatusOfPhase_xfailUnexpected_fails(self): + """assert_status_of_phase should fail when xfail='unexpected' but the line does NOT + contain the UNEXPECTED comment""" + test_name = "my.test.name" + # Note that the line contains the EXPECTED comment, but not the UNEXPECTED comment + # (we assume that if the assertion correctly fails in this case, then it will also + # correctly handle the case where neither the EXPECTED nor UNEXPECTED comment is + # present). + output = self.output_line( + test_status.TEST_FAIL_STATUS, + test_name, + test_status.CORE_PHASES[0], + extra=test_status.TEST_EXPECTED_FAILURE_COMMENT, + ) + + with self.assertRaises(AssertionError): + self.assert_status_of_phase( + output, + test_status.TEST_FAIL_STATUS, + test_status.CORE_PHASES[0], + test_name, + xfail="unexpected", + ) + # While we're at it, also test assert_num_expected_unexpected_fails + self.assert_num_expected_unexpected_fails( + output, num_expected=1, num_unexpected=0 + )
+
+ + + +if __name__ == "__main__": + unittest.main() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_doctest.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_doctest.html new file mode 100644 index 00000000000..380c35b1bd7 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_doctest.html @@ -0,0 +1,167 @@ + + + + + + CIME.tests.test_unit_doctest — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.tests.test_unit_doctest

+#!/usr/bin/env python3
+
+import glob
+import re
+import os
+import stat
+import doctest
+import sys
+import pkgutil
+import unittest
+import functools
+
+import CIME
+from CIME import utils
+from CIME.tests import base
+
+
+
+[docs] +class TestDocs(base.BaseTestCase): +
+[docs] + def test_lib_docs(self): + cime_root = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..")) + + ignore_patterns = [ + "/tests/", + "mvk.py", + "pgn.py", + "tsc.py", + ] + + for dirpath, _, filenames in os.walk(os.path.join(cime_root, "CIME")): + for filepath in map(lambda x: os.path.join(dirpath, x), filenames): + if not filepath.endswith(".py") or any( + [x in filepath for x in ignore_patterns] + ): + continue + + # Couldn't use doctest.DocFileSuite due to sys.path issue + self.run_cmd_assert_result( + f"PYTHONPATH={cime_root}:$PYTHONPATH python3 -m doctest {filepath} 2>&1", + from_dir=cime_root, + )
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_expected_fails_file.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_expected_fails_file.html new file mode 100644 index 00000000000..f42b5ed1411 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_expected_fails_file.html @@ -0,0 +1,259 @@ + + + + + + CIME.tests.test_unit_expected_fails_file — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_expected_fails_file

+#!/usr/bin/env python3
+
+import unittest
+import os
+import shutil
+import tempfile
+from CIME.XML.expected_fails_file import ExpectedFailsFile
+from CIME.utils import CIMEError
+from CIME.expected_fails import ExpectedFails
+
+
+
+[docs] +class TestExpectedFailsFile(unittest.TestCase): +
+[docs] + def setUp(self): + self._workdir = tempfile.mkdtemp() + self._xml_filepath = os.path.join(self._workdir, "expected_fails.xml")
+ + +
+[docs] + def tearDown(self): + shutil.rmtree(self._workdir)
+ + +
+[docs] + def test_basic(self): + """Basic test of the parsing of an expected fails file""" + contents = """<?xml version= "1.0"?> +<expectedFails version="1.1"> + <test name="my.test.1"> + <phase name="RUN"> + <status>FAIL</status> + <issue>#404</issue> + </phase> + <phase name="COMPARE_base_rest"> + <status>PEND</status> + <issue>#404</issue> + <comment>Because of the RUN failure, this phase is listed as PEND</comment> + </phase> + </test> + <test name="my.test.2"> + <phase name="GENERATE"> + <status>FAIL</status> + <issue>ESMCI/cime#2917</issue> + </phase> + <phase name="BASELINE"> + <status>FAIL</status> + <issue>ESMCI/cime#2917</issue> + </phase> + </test> +</expectedFails> +""" + with open(self._xml_filepath, "w") as xml_file: + xml_file.write(contents) + expected_fails_file = ExpectedFailsFile(self._xml_filepath) + xfails = expected_fails_file.get_expected_fails() + + expected_test1 = ExpectedFails() + expected_test1.add_failure("RUN", "FAIL") + expected_test1.add_failure("COMPARE_base_rest", "PEND") + expected_test2 = ExpectedFails() + expected_test2.add_failure("GENERATE", "FAIL") + expected_test2.add_failure("BASELINE", "FAIL") + expected = {"my.test.1": expected_test1, "my.test.2": expected_test2} + + self.assertEqual(xfails, expected)
+ + +
+[docs] + def test_same_test_appears_twice(self): + """If the same test appears twice, its information should be appended. + + This is not the typical, expected layout of the file, but it should be handled + correctly in case the file is written this way. + """ + contents = """<?xml version= "1.0"?> +<expectedFails version="1.1"> + <test name="my.test.1"> + <phase name="RUN"> + <status>FAIL</status> + <issue>#404</issue> + </phase> + </test> + <test name="my.test.1"> + <phase name="COMPARE_base_rest"> + <status>PEND</status> + <issue>#404</issue> + <comment>Because of the RUN failure, this phase is listed as PEND</comment> + </phase> + </test> +</expectedFails> +""" + with open(self._xml_filepath, "w") as xml_file: + xml_file.write(contents) + expected_fails_file = ExpectedFailsFile(self._xml_filepath) + xfails = expected_fails_file.get_expected_fails() + + expected_test1 = ExpectedFails() + expected_test1.add_failure("RUN", "FAIL") + expected_test1.add_failure("COMPARE_base_rest", "PEND") + expected = {"my.test.1": expected_test1} + + self.assertEqual(xfails, expected)
+ + +
+[docs] + def test_invalid_file(self): + """Given an invalid file, an exception should be raised in schema validation""" + + # This file is missing a <status> element in the <phase> block. + # + # It's important to have the expectedFails version number be greater than 1, + # because schema validation isn't done in cime for files with a version of 1. + contents = """<?xml version= "1.0"?> +<expectedFails version="1.1"> + <test name="my.test.1"> + <phase name="RUN"> + </phase> + </test> +</expectedFails> +""" + with open(self._xml_filepath, "w") as xml_file: + xml_file.write(contents) + + with self.assertRaisesRegex(CIMEError, "Schemas validity error"): + _ = ExpectedFailsFile(self._xml_filepath)
+
+ + + +if __name__ == "__main__": + unittest.main() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_grids.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_grids.html new file mode 100644 index 00000000000..3c58d87f214 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_grids.html @@ -0,0 +1,664 @@ + + + + + + CIME.tests.test_unit_grids — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.tests.test_unit_grids

+#!/usr/bin/env python3
+
+"""
+This module tests *some* functionality of CIME.XML.grids
+"""
+
+# Ignore privacy concerns for unit tests, so that unit tests can access
+# protected members of the system under test
+#
+# pylint:disable=protected-access
+
+# Also ignore too-long lines, since these are common in unit tests
+#
+# pylint:disable=line-too-long
+
+import unittest
+import os
+import shutil
+import string
+import tempfile
+from CIME.XML.grids import Grids, _ComponentGrids, _add_grid_info, _strip_grid_from_name
+from CIME.utils import CIMEError
+
+
+
+[docs] +class TestGrids(unittest.TestCase): + """Tests some functionality of CIME.XML.grids + + Note that much of the functionality of CIME.XML.grids is NOT covered here + """ + + _CONFIG_GRIDS_TEMPLATE = string.Template( + """<?xml version="1.0"?> + +<grid_data version="2.1" xmlns:xi="http://www.w3.org/2001/XInclude"> + <help> + </help> + + <grids> + <model_grid_defaults> + <grid name="atm" compset="." >atm_default_grid</grid> + <grid name="lnd" compset="." >lnd_default_grid</grid> + <grid name="ocnice" compset="." >ocnice_default_grid</grid> + <grid name="rof" compset="." >rof_default_grid</grid> + <grid name="glc" compset="." >glc_default_grid</grid> + <grid name="wav" compset="." >wav_default_grid</grid> + <grid name="iac" compset="." >null</grid> + </model_grid_defaults> + +$MODEL_GRID_ENTRIES + </grids> + + <domains> + <domain name="null"> + <!-- null grid --> + <nx>0</nx> <ny>0</ny> + <file>unset</file> + <desc>null is no grid: </desc> + </domain> + +$DOMAIN_ENTRIES + </domains> + + <required_gridmaps> + <required_gridmap grid1="atm_grid" grid2="ocn_grid">ATM2OCN_FMAPNAME</required_gridmap> + <required_gridmap grid1="atm_grid" grid2="ocn_grid">OCN2ATM_FMAPNAME</required_gridmap> +$EXTRA_REQUIRED_GRIDMAPS + </required_gridmaps> + + <gridmaps> +$GRIDMAP_ENTRIES + </gridmaps> +</grid_data> +""" + ) + + _MODEL_GRID_F09_G17 = """ + <model_grid alias="f09_g17"> + <grid name="atm">0.9x1.25</grid> + <grid name="lnd">0.9x1.25</grid> + <grid name="ocnice">gx1v7</grid> + <mask>gx1v7</mask> + </model_grid> +""" + + # For testing multiple GLC grids + _MODEL_GRID_F09_G17_3GLC = """ + <model_grid alias="f09_g17_3glc"> + <grid name="atm">0.9x1.25</grid> + <grid name="lnd">0.9x1.25</grid> + <grid name="ocnice">gx1v7</grid> + <grid name="glc">ais8:gris4:lis12</grid> + <mask>gx1v7</mask> + </model_grid> +""" + + _DOMAIN_F09 = """ + <domain name="0.9x1.25"> + <nx>288</nx> <ny>192</ny> + <mesh>fv0.9x1.25_ESMFmesh.nc</mesh> + <desc>0.9x1.25 is FV 1-deg grid:</desc> + </domain> 
+""" + + _DOMAIN_G17 = """ + <domain name="gx1v7"> + <nx>320</nx> <ny>384</ny> + <mesh>gx1v7_ESMFmesh.nc</mesh> + <desc>gx1v7 is displaced Greenland pole 1-deg grid with Caspian as a land feature:</desc> + </domain> +""" + + _DOMAIN_GRIS4 = """ + <domain name="gris4"> + <nx>416</nx> <ny>704</ny> + <mesh>greenland_4km_ESMFmesh.nc</mesh> + <desc>4-km Greenland grid</desc> + </domain> +""" + + _DOMAIN_AIS8 = """ + <domain name="ais8"> + <nx>704</nx> <ny>576</ny> + <mesh>antarctica_8km_ESMFmesh.nc</mesh> + <desc>8-km Antarctica grid</desc> + </domain> +""" + + _DOMAIN_LIS12 = """ + <domain name="lis12"> + <nx>123</nx> <ny>456</ny> + <mesh>laurentide_12km_ESMFmesh.nc</mesh> + <desc>12-km Laurentide grid</desc> + </domain> +""" + + _GRIDMAP_F09_G17 = """ + <!-- The following entries are here to make sure that the code skips gridmap entries with the wrong grids. + These use the wrong atm grid but the correct ocn grid. --> + <gridmap atm_grid="foo" ocn_grid="gx1v7"> + <map name="ATM2OCN_FMAPNAME">map_foo_TO_gx1v7_aave.nc</map> + <map name="OCN2ATM_FMAPNAME">map_gx1v7_TO_foo_aave.nc</map> + <map name="OCN2ATM_SHOULDBEABSENT">map_gx1v7_TO_foo_xxx.nc</map> + </gridmap> + + <!-- Here are the gridmaps that should actually be used. --> + <gridmap atm_grid="0.9x1.25" ocn_grid="gx1v7"> + <map name="ATM2OCN_FMAPNAME">map_fv0.9x1.25_TO_gx1v7_aave.nc</map> + <map name="OCN2ATM_FMAPNAME">map_gx1v7_TO_fv0.9x1.25_aave.nc</map> + </gridmap> + + <!-- The following entries are here to make sure that the code skips gridmap entries with the wrong grids. + These use the wrong ocn grid but the correct atm grid. 
--> + <gridmap atm_grid="0.9x1.25" ocn_grid="foo"> + <map name="ATM2OCN_FMAPNAME">map_fv0.9x1.25_TO_foo_aave.nc</map> + <map name="OCN2ATM_FMAPNAME">map_foo_TO_fv0.9x1.25_aave.nc</map> + <map name="OCN2ATM_SHOULDBEABSENT">map_foo_TO_fv0.9x1.25_xxx.nc</map> + </gridmap> +""" + + _GRIDMAP_GRIS4_G17 = """ + <gridmap ocn_grid="gx1v7" glc_grid="gris4" > + <map name="GLC2OCN_LIQ_RMAPNAME">map_gris4_to_gx1v7_liq.nc</map> + <map name="GLC2OCN_ICE_RMAPNAME">map_gris4_to_gx1v7_ice.nc</map> + </gridmap> +""" + + _GRIDMAP_AIS8_G17 = """ + <gridmap ocn_grid="gx1v7" glc_grid="ais8" > + <map name="GLC2OCN_LIQ_RMAPNAME">map_ais8_to_gx1v7_liq.nc</map> + <map name="GLC2OCN_ICE_RMAPNAME">map_ais8_to_gx1v7_ice.nc</map> + </gridmap> +""" + + _GRIDMAP_LIS12_G17 = """ + <gridmap ocn_grid="gx1v7" glc_grid="lis12" > + <map name="GLC2OCN_LIQ_RMAPNAME">map_lis12_to_gx1v7_liq.nc</map> + <map name="GLC2OCN_ICE_RMAPNAME">map_lis12_to_gx1v7_ice.nc</map> + </gridmap> +""" + +
+[docs] + def setUp(self): + self._workdir = tempfile.mkdtemp() + self._xml_filepath = os.path.join(self._workdir, "config_grids.xml")
+ + +
+[docs] + def tearDown(self): + shutil.rmtree(self._workdir)
+ + + def _create_grids_xml( + self, + model_grid_entries, + domain_entries, + gridmap_entries, + extra_required_gridmaps="", + ): + grids_xml = self._CONFIG_GRIDS_TEMPLATE.substitute( + { + "MODEL_GRID_ENTRIES": model_grid_entries, + "DOMAIN_ENTRIES": domain_entries, + "EXTRA_REQUIRED_GRIDMAPS": extra_required_gridmaps, + "GRIDMAP_ENTRIES": gridmap_entries, + } + ) + with open(self._xml_filepath, "w", encoding="UTF-8") as xml_file: + xml_file.write(grids_xml) + +
+[docs] + def assert_grid_info_f09_g17(self, grid_info): + """Asserts that expected grid info is present and correct when using _MODEL_GRID_F09_G17""" + self.assertEqual(grid_info["ATM_NX"], 288) + self.assertEqual(grid_info["ATM_NY"], 192) + self.assertEqual(grid_info["ATM_GRID"], "0.9x1.25") + self.assertEqual(grid_info["ATM_DOMAIN_MESH"], "fv0.9x1.25_ESMFmesh.nc") + + self.assertEqual(grid_info["LND_NX"], 288) + self.assertEqual(grid_info["LND_NY"], 192) + self.assertEqual(grid_info["LND_GRID"], "0.9x1.25") + self.assertEqual(grid_info["LND_DOMAIN_MESH"], "fv0.9x1.25_ESMFmesh.nc") + + self.assertEqual(grid_info["OCN_NX"], 320) + self.assertEqual(grid_info["OCN_NY"], 384) + self.assertEqual(grid_info["OCN_GRID"], "gx1v7") + self.assertEqual(grid_info["OCN_DOMAIN_MESH"], "gx1v7_ESMFmesh.nc") + + self.assertEqual(grid_info["ICE_NX"], 320) + self.assertEqual(grid_info["ICE_NY"], 384) + self.assertEqual(grid_info["ICE_GRID"], "gx1v7") + self.assertEqual(grid_info["ICE_DOMAIN_MESH"], "gx1v7_ESMFmesh.nc") + + self.assertEqual( + grid_info["ATM2OCN_FMAPNAME"], "map_fv0.9x1.25_TO_gx1v7_aave.nc" + ) + self.assertEqual( + grid_info["OCN2ATM_FMAPNAME"], "map_gx1v7_TO_fv0.9x1.25_aave.nc" + ) + self.assertFalse("OCN2ATM_SHOULDBEABSENT" in grid_info)
+ + +
+[docs] + def assert_grid_info_f09_g17_3glc(self, grid_info): + """Asserts that all domain info is present & correct for _MODEL_GRID_F09_G17_3GLC""" + self.assert_grid_info_f09_g17(grid_info) + + # Note that we don't assert GLC_NX and GLC_NY here: these are unused for this + # multi-grid case, so we don't care what arbitrary values they have. + self.assertEqual(grid_info["GLC_GRID"], "ais8:gris4:lis12") + self.assertEqual( + grid_info["GLC_DOMAIN_MESH"], + "antarctica_8km_ESMFmesh.nc:greenland_4km_ESMFmesh.nc:laurentide_12km_ESMFmesh.nc", + ) + self.assertEqual( + grid_info["GLC2OCN_LIQ_RMAPNAME"], + "map_ais8_to_gx1v7_liq.nc:map_gris4_to_gx1v7_liq.nc:map_lis12_to_gx1v7_liq.nc", + ) + self.assertEqual( + grid_info["GLC2OCN_ICE_RMAPNAME"], + "map_ais8_to_gx1v7_ice.nc:map_gris4_to_gx1v7_ice.nc:map_lis12_to_gx1v7_ice.nc", + )
+ + +
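The multi-grid assertions above follow a simple convention: when a component has several grids, per-grid values are colon-joined in the same order as the grids appear in GLC_GRID. A small self-contained illustration of that joining (an assumption inferred from the assertions, not CIME code; the mesh names come from the test fixtures):

```python
# Per-grid meshes, keyed by grid name (values taken from the domain fixtures above).
meshes = {
    "ais8": "antarctica_8km_ESMFmesh.nc",
    "gris4": "greenland_4km_ESMFmesh.nc",
    "lis12": "laurentide_12km_ESMFmesh.nc",
}

# Join the per-grid values in the order the grids appear in GLC_GRID.
glc_grid = "ais8:gris4:lis12"
glc_domain_mesh = ":".join(meshes[g] for g in glc_grid.split(":"))
```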
+[docs] + def test_get_grid_info_basic(self): + """Basic test of get_grid_info""" + model_grid_entries = self._MODEL_GRID_F09_G17 + domain_entries = self._DOMAIN_F09 + self._DOMAIN_G17 + gridmap_entries = self._GRIDMAP_F09_G17 + self._create_grids_xml( + model_grid_entries=model_grid_entries, + domain_entries=domain_entries, + gridmap_entries=gridmap_entries, + ) + + grids = Grids(self._xml_filepath) + grid_info = grids.get_grid_info( + name="f09_g17", + compset="NOT_IMPORTANT", + driver="nuopc", + ) + + self.assert_grid_info_f09_g17(grid_info)
+ + +
+[docs] + def test_get_grid_info_extra_required_gridmaps(self): + """Test of get_grid_info with some extra required gridmaps""" + model_grid_entries = self._MODEL_GRID_F09_G17 + domain_entries = self._DOMAIN_F09 + self._DOMAIN_G17 + gridmap_entries = self._GRIDMAP_F09_G17 + # These are some extra required gridmaps that aren't explicitly specified + extra_required_gridmaps = """ + <required_gridmap grid1="atm_grid" grid2="ocn_grid">ATM2OCN_EXTRA</required_gridmap> + <required_gridmap grid1="ocn_grid" grid2="atm_grid">OCN2ATM_EXTRA</required_gridmap> +""" + self._create_grids_xml( + model_grid_entries=model_grid_entries, + domain_entries=domain_entries, + gridmap_entries=gridmap_entries, + extra_required_gridmaps=extra_required_gridmaps, + ) + + grids = Grids(self._xml_filepath) + grid_info = grids.get_grid_info( + name="f09_g17", + compset="NOT_IMPORTANT", + driver="nuopc", + ) + + self.assert_grid_info_f09_g17(grid_info) + self.assertEqual(grid_info["ATM2OCN_EXTRA"], "unset") + self.assertEqual(grid_info["OCN2ATM_EXTRA"], "unset")
+ + +
+[docs] + def test_get_grid_info_extra_gridmaps(self): + """Test of get_grid_info with some extra gridmaps""" + model_grid_entries = self._MODEL_GRID_F09_G17 + domain_entries = self._DOMAIN_F09 + self._DOMAIN_G17 + gridmap_entries = self._GRIDMAP_F09_G17 + # These are some extra gridmaps that aren't in the required list + gridmap_entries += """ + <gridmap atm_grid="0.9x1.25" ocn_grid="gx1v7"> + <map name="ATM2OCN_EXTRA">map_fv0.9x1.25_TO_gx1v7_extra.nc</map> + <map name="OCN2ATM_EXTRA">map_gx1v7_TO_fv0.9x1.25_extra.nc</map> + </gridmap> +""" + self._create_grids_xml( + model_grid_entries=model_grid_entries, + domain_entries=domain_entries, + gridmap_entries=gridmap_entries, + ) + + grids = Grids(self._xml_filepath) + grid_info = grids.get_grid_info( + name="f09_g17", + compset="NOT_IMPORTANT", + driver="nuopc", + ) + + self.assert_grid_info_f09_g17(grid_info) + self.assertEqual(grid_info["ATM2OCN_EXTRA"], "map_fv0.9x1.25_TO_gx1v7_extra.nc") + self.assertEqual(grid_info["OCN2ATM_EXTRA"], "map_gx1v7_TO_fv0.9x1.25_extra.nc")
+ + +
+[docs] + def test_get_grid_info_3glc(self): + """Test of get_grid_info with 3 glc grids""" + model_grid_entries = self._MODEL_GRID_F09_G17_3GLC + domain_entries = ( + self._DOMAIN_F09 + + self._DOMAIN_G17 + + self._DOMAIN_GRIS4 + + self._DOMAIN_AIS8 + + self._DOMAIN_LIS12 + ) + gridmap_entries = ( + self._GRIDMAP_F09_G17 + + self._GRIDMAP_GRIS4_G17 + + self._GRIDMAP_AIS8_G17 + + self._GRIDMAP_LIS12_G17 + ) + # Claim that a glc2atm gridmap is required in order to test the logic that handles + # an unset required gridmap for a component with multiple grids. + extra_required_gridmaps = """ + <required_gridmap grid1="glc_grid" grid2="atm_grid">GLC2ATM_EXTRA</required_gridmap> +""" + self._create_grids_xml( + model_grid_entries=model_grid_entries, + domain_entries=domain_entries, + gridmap_entries=gridmap_entries, + extra_required_gridmaps=extra_required_gridmaps, + ) + + grids = Grids(self._xml_filepath) + grid_info = grids.get_grid_info( + name="f09_g17_3glc", + compset="NOT_IMPORTANT", + driver="nuopc", + ) + + self.assert_grid_info_f09_g17_3glc(grid_info) + self.assertEqual(grid_info["GLC2ATM_EXTRA"], "unset")
+
+ + + +
+[docs] +class TestComponentGrids(unittest.TestCase): + """Tests the _ComponentGrids helper class defined in CIME.XML.grids""" + + # A valid grid long name used in a lot of these tests; there are two rof grids and + # three glc grids, and one grid for each other component + _GRID_LONGNAME = "a%0.9x1.25_l%0.9x1.25_oi%gx1v7_r%r05:r01_g%ais8:gris4:lis12_w%ww3a_z%null_m%gx1v7" + + # ------------------------------------------------------------------------ + # Tests of check_num_elements + # + # These tests cover a lot of the code in _ComponentGrids + # + # We don't cover all of the branches in check_num_elements because many of the + # branches that lead to a successful pass are already covered by unit tests in the + # TestGrids class. + # ------------------------------------------------------------------------ + +
+[docs] + def test_check_num_elements_right_ndomains(self): + """With the right number of domains for a component, check_num_elements should pass""" + component_grids = _ComponentGrids(self._GRID_LONGNAME) + gridinfo = {"GLC_DOMAIN_MESH": "foo:bar:baz"} + + # The test passes as long as the following call doesn't generate any errors + component_grids.check_num_elements(gridinfo)
+ + +
+[docs] + def test_check_num_elements_wrong_ndomains(self): + """With the wrong number of domains for a component, check_num_elements should fail""" + component_grids = _ComponentGrids(self._GRID_LONGNAME) + # In the following, there should be 3 elements, but we only specify 2 + gridinfo = {"GLC_DOMAIN_MESH": "foo:bar"} + + self.assertRaisesRegex( + CIMEError, + "Unexpected number of colon-delimited elements", + component_grids.check_num_elements, + gridinfo, + )
+ + +
+[docs] + def test_check_num_elements_right_nmaps(self): + """With the right number of maps between two components, check_num_elements should pass""" + component_grids = _ComponentGrids(self._GRID_LONGNAME) + gridinfo = {"GLC2ROF_RMAPNAME": "map1:map2:map3:map4:map5:map6"} + + # The test passes as long as the following call doesn't generate any errors + component_grids.check_num_elements(gridinfo)
+ + +
+[docs] + def test_check_num_elements_wrong_nmaps(self): + """With the wrong number of maps between two components, check_num_elements should fail""" + component_grids = _ComponentGrids(self._GRID_LONGNAME) + # In the following, there should be 6 elements, but we only specify 5 + gridinfo = {"GLC2ROF_RMAPNAME": "map1:map2:map3:map4:map5"} + + self.assertRaisesRegex( + CIMEError, + "Unexpected number of colon-delimited elements", + component_grids.check_num_elements, + gridinfo, + )
+
+ + + +
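The expected element counts in these tests come straight from the grid long name: it lists two rof grids (r05:r01) and three glc grids (ais8:gris4:lis12), so a glc domain field needs 3 colon-delimited elements and a glc-to-rof map needs one entry per (glc, rof) pair, i.e. 6. A quick check of that arithmetic:

```python
longname_glc = "ais8:gris4:lis12"
longname_rof = "r05:r01"

n_glc = len(longname_glc.split(":"))  # 3 glc grids
n_rof = len(longname_rof.split(":"))  # 2 rof grids

n_domain_elements = n_glc          # one mesh per glc grid
n_map_elements = n_glc * n_rof     # one map per (glc, rof) pair
```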
+[docs] +class TestGridsFunctions(unittest.TestCase): + """Tests helper functions defined in CIME.XML.grids + + These tests are in a separate class to avoid the unnecessary setUp and tearDown + functions of the main test class. + + """ + + # ------------------------------------------------------------------------ + # Tests of _add_grid_info + # ------------------------------------------------------------------------ + +
+[docs] + def test_add_grid_info_initial(self): + """Test of _add_grid_info for the initial add of a given key""" + grid_info = {"foo": "a"} + _add_grid_info(grid_info, "bar", "b") + self.assertEqual(grid_info, {"foo": "a", "bar": "b"})
+ + +
+[docs] + def test_add_grid_info_existing(self): + """Test of _add_grid_info when the given key already exists""" + grid_info = {"foo": "bar"} + _add_grid_info(grid_info, "foo", "baz") + self.assertEqual(grid_info, {"foo": "bar:baz"})
+ + +
+[docs] + def test_add_grid_info_existing_with_value_for_multiple(self): + """Test of _add_grid_info when the given key already exists and value_for_multiple is provided""" + grid_info = {"foo": 1} + _add_grid_info(grid_info, "foo", 2, value_for_multiple=0) + self.assertEqual(grid_info, {"foo": 0})
+ + + # ------------------------------------------------------------------------ + # Tests of _strip_grid_from_name + # ------------------------------------------------------------------------ + +
+[docs] + def test_strip_grid_from_name_basic(self): + """Basic test of _strip_grid_from_name""" + result = _strip_grid_from_name("atm_grid") + self.assertEqual(result, "atm")
+ + +
+[docs] + def test_strip_grid_from_name_badname(self): + """_strip_grid_from_name should raise an exception for a name not ending with _grid""" + self.assertRaisesRegex( + CIMEError, "does not end with _grid", _strip_grid_from_name, name="atm" + )
+
+ + + # ------------------------------------------------------------------------ + # Tests of _check_grid_info_component_counts + # ------------------------------------------------------------------------ + + +if __name__ == "__main__": + unittest.main() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_hist_utils.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_hist_utils.html new file mode 100644 index 00000000000..d9c64b985a6 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_hist_utils.html @@ -0,0 +1,196 @@ + + + + + + CIME.tests.test_unit_hist_utils — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_hist_utils

+import io
+import unittest
+from unittest import mock
+
+from CIME.hist_utils import copy_histfiles
+from CIME.XML.archive import Archive
+
+
+
+[docs] +class TestHistUtils(unittest.TestCase): +
+[docs] + @mock.patch("CIME.hist_utils.safe_copy") + def test_copy_histfiles_exclude(self, safe_copy): + case = mock.MagicMock() + + case.get_env.return_value.get_latest_hist_files.side_effect = [ + ["/tmp/testing.cpl.hi.nc"], + ["/tmp/testing.atm.hi.nc"], + ] + + case.get_env.return_value.exclude_testing.side_effect = [True, False] + + case.get_value.side_effect = [ + "/tmp", # RUNDIR + None, # RUN_REFCASE + "testing", # CASE + True, # TEST + True, # TEST + ] + + case.get_compset_components.return_value = ["atm"] + + test_files = [ + "testing.cpl.hi.nc", + ] + + with mock.patch("os.listdir", return_value=test_files): + comments, num_copied = copy_histfiles(case, "base") + + assert num_copied == 1
+ + +
+[docs] + @mock.patch("CIME.hist_utils.safe_copy") + def test_copy_histfiles(self, safe_copy): + case = mock.MagicMock() + + case.get_env.return_value.get_latest_hist_files.return_value = [ + "/tmp/testing.cpl.hi.nc", + ] + + case.get_env.return_value.exclude_testing.return_value = False + + case.get_value.side_effect = [ + "/tmp", # RUNDIR + None, # RUN_REFCASE + "testing", # CASE + True, # TEST + ] + + case.get_compset_components.return_value = [] + + test_files = [ + "testing.cpl.hi.nc", + ] + + with mock.patch("os.listdir", return_value=test_files): + comments, num_copied = copy_histfiles(case, "base") + + assert num_copied == 1
+
+ +
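The tests above lean on two unittest.mock features: a list assigned to side_effect makes successive calls return successive elements (how get_value is fed its RUNDIR, RUN_REFCASE and CASE answers in order), and attributes hung off return_value configure the object that case.get_env() returns. A self-contained illustration of both (the argument strings are illustrative only):

```python
from unittest import mock

case = mock.MagicMock()

# Successive calls consume the side_effect list one element at a time.
case.get_value.side_effect = ["/tmp", None, "testing"]

rundir = case.get_value("RUNDIR")
refcase = case.get_value("RUN_REFCASE")
casename = case.get_value("CASE")

# Attributes on return_value configure the object returned by get_env().
case.get_env.return_value.exclude_testing.return_value = False
excluded = case.get_env("mach_pes").exclude_testing("some.test.name")
```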
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_nmlgen.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_nmlgen.html new file mode 100644 index 00000000000..4503223c098 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_nmlgen.html @@ -0,0 +1,186 @@ + + + + + + CIME.tests.test_unit_nmlgen — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.tests.test_unit_nmlgen

+from collections import OrderedDict
+import tempfile
+import unittest
+from unittest import mock
+
+from CIME.nmlgen import NamelistGenerator
+
+# pylint: disable=protected-access
+
+[docs] +class TestNamelistGenerator(unittest.TestCase): +
+[docs] + def test_init_defaults(self): + test_nml_infile = b"""&test +test1 = 'test1_updated' +/""" + + test_data = """<?xml version="1.0"?> +<?xml-stylesheet type="text/xsl" href="http://www.cgd.ucar.edu/~cam/namelist/namelist_definition.xsl"?> + +<entry_id version="2.0"> + <entry id="test1"> + <type>char</type> + <category>test</category> + <group>test_nml</group> + <valid_values>test1_value,test1_updated</valid_values> + <values> + <value>test1_value</value> + </values> + </entry> + <entry id="test2"> + <type>char</type> + <category>test</category> + <group>test_nml</group> + <values> + <value>test2_value</value> + </values> + </entry> +</entry_id>""" + + with tempfile.NamedTemporaryFile() as temp, tempfile.NamedTemporaryFile() as temp2: + temp.write(test_data.encode()) + temp.flush() + + temp2.write(test_nml_infile) + temp2.flush() + + case = mock.MagicMock() + + nmlgen = NamelistGenerator(case, [temp.name]) + + nmlgen.init_defaults([temp2.name], None) + + expected_groups = OrderedDict( + {"test_nml": {"test1": ["'test1_updated'"], "test2": ['"test2_value"']}} + ) + + assert nmlgen._namelist._groups == expected_groups
+
+ + + +if __name__ == "__main__": + unittest.main() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_paramgen.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_paramgen.html new file mode 100644 index 00000000000..524ba0d7c92 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_paramgen.html @@ -0,0 +1,652 @@ + + + + + + CIME.tests.test_unit_paramgen — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_paramgen

+#!/usr/bin/env python3
+
+"""
+This module tests *some* functionality of CIME.ParamGen.paramgen's ParamGen class
+"""
+
+# Ignore privacy concerns for unit tests, so that unit tests can access
+# protected members of the system under test
+#
+# pylint:disable=protected-access
+
+# Also ignore too-long lines, since these are common in unit tests
+#
+# pylint:disable=line-too-long
+
+import unittest
+import tempfile
+from CIME.ParamGen.paramgen import ParamGen
+
+###############
+# Example inputs
+###############
+
+_MOM_INPUT_YAML = """
+Global:
+    INPUTDIR:
+        value: ${DIN_LOC_ROOT}/ocn/mom/${OCN_GRID}
+    RESTORE_SALINITY:
+        value:
+            $OCN_GRID == "tx0.66v1" and $COMP_ATM == "datm": True  # for C and G compsets on tx0.66v1
+            else: False
+    INIT_LAYERS_FROM_Z_FILE:
+        value:
+            $OCN_GRID == "gx1v6": True
+            $OCN_GRID == "tx0.66v1": True
+            $OCN_GRID == "tx0.25v1": True
+    TEMP_SALT_Z_INIT_FILE:
+        value:
+            $OCN_GRID == "gx1v6": "WOA05_pottemp_salt.nc"
+            $OCN_GRID == "tx0.66v1": "woa18_04_initial_conditions.nc"
+            $OCN_GRID == "tx0.25v1": "MOM6_IC_TS.nc"
+"""
+
+_MOM_INPUT_DATA_LIST_YAML = """
+mom.input_data_list:
+    ocean_hgrid:
+        $OCN_GRID == "gx1v6":    "${INPUTDIR}/ocean_hgrid.nc"
+        $OCN_GRID == "tx0.66v1": "${INPUTDIR}/ocean_hgrid_180829.nc"
+        $OCN_GRID == "tx0.25v1": "${INPUTDIR}/ocean_hgrid.nc"
+    tempsalt:
+        $OCN_GRID in ["gx1v6", "tx0.66v1", "tx0.25v1"]:
+            $INIT_LAYERS_FROM_Z_FILE == "True":
+                "${INPUTDIR}/${TEMP_SALT_Z_INIT_FILE}"
+"""
+
+_MY_TEMPLATE_XML = """<?xml version="1.0"?>
+
+<entry_id_pg version="0.1">
+
+  <entry id="foo">
+    <type>string</type>
+    <group>test_nml</group>
+    <desc>a dummy parameter for testing single key=value guards</desc>
+    <values>
+      <value>alpha</value>
+      <value cice_mode="thermo_only">beta</value>
+      <value cice_mode="prescribed">gamma</value>
+    </values>
+  </entry>
+
+  <entry id="bar">
+    <type>string</type>
+    <group>test_nml</group>
+    <desc>another dummy parameter for multiple key=value guards mixed with explicit (flexible) guards</desc>
+    <values>
+      <value some_int="2" some_bool="True" some_float="3.1415">delta</value>
+      <value guard='$ICE_GRID .startswith("gx1v")'>epsilon</value>
+    </values>
+  </entry>
+
+  <entry id="baz">
+    <type>string</type>
+    <group>test_nml</group>
+    <desc>parameter to test the case where there is no match</desc>
+    <values>
+      <value some_int="-9999">zeta</value>
+      <value guard='not $ICE_GRID .startswith("gx1v")'>eta</value>
+    </values>
+  </entry>
+
+  </entry_id_pg>
+"""
+
+_DUPLICATE_IDS_XML = """<?xml version="1.0"?>
+
+<entry_id_pg version="0.1">
+
+  <entry id="foo">
+    <type>string</type>
+    <group>test_nml</group>
+    <desc>a dummy parameter for testing single key=value guards</desc>
+    <values>
+      <value>alpha</value>
+      <value cice_mode="thermo_only">beta</value>
+      <value cice_mode="prescribed">gamma</value>
+    </values>
+  </entry>
+
+  <entry id="foo">
+    <type>string</type>
+    <group>test_nml</group>
+    <desc>another dummy parameter for multiple key=value guards mixed with explicit (flexible) guards</desc>
+    <values>
+      <value some_int="2" some_bool="True" some_float="3.1415">delta</value>
+      <value guard='$ICE_GRID .startswith("gx1v")'>epsilon</value>
+    </values>
+  </entry>
+
+  </entry_id_pg>
+"""
+
+############################
+# Dummy functions and classes
+############################
+
+
+
+[docs] +class DummyCase: + """A dummy Case class that mimics CIME class objects' get_value method.""" + +
+[docs] + def get_value(self, varname): + d = { + "DIN_LOC_ROOT": "/foo/inputdata", + "OCN_GRID": "tx0.66v1", + "COMP_ATM": "datm", + } + return d[varname] if varname in d else None
+
+ + + +case = DummyCase() + +##### + + +def _expand_func_demo(varname): + return { + "ICE_GRID": "gx1v6", + "DIN_LOC_ROOT": "/glade/p/cesmdata/cseg/inputdata", + "cice_mode": "thermo_only", + "some_bool": "True", + "some_int": 2, + "some_float": "3.1415", + }[varname] + + +################ +# Unit test classes +################ + + +
+[docs] +class TestParamGen(unittest.TestCase): + """ + Tests some basic functionality of the + CIME.ParamGen.paramgen's ParamGen class + """ + +
+[docs] + def test_init_data(self): + """Tests the ParamGen initializer with and without an initial data.""" + # empty + _ = ParamGen({}) + # with data + data_dict = {"a": 1, "b": 2} + _ = ParamGen(data_dict)
+ + +
+[docs] + def test_reduce(self): + """Tests the reduce method of ParamGen on data with explicit guards (True or False).""" + data_dict = {"False": 1, "True": 2} + obj = ParamGen(data_dict) + obj.reduce() + self.assertEqual(obj.data, 2)
+ + +
+[docs] + def test_nested_reduce(self): + """Tests the reduce method of ParamGen on data with nested guards.""" + data_dict = {"False": 1, "True": {"2>3": 0, "2<3": 2}} + obj = ParamGen(data_dict) + obj.reduce() + self.assertEqual(obj.data, 2)
+ + +
+[docs] + def test_outer_guards(self): + """Tests the reduce method on data with outer guards enclosing parameter definitions.""" + data_dict = { + "False": {"param": "foo"}, + "True": {"param": "bar"}, + } + obj = ParamGen(data_dict) + obj.reduce() + self.assertEqual(obj.data, {"param": "bar"})
+ + +
+[docs] + def test_match(self): + """Tests the default behavior of returning the last match and the optional behavior of returning the + first match.""" + + data_dict = { + "1<2": "foo", + "2<3": "bar", + "3<4": "baz", + } + + obj = ParamGen(data_dict) # by default, match='last' + obj.reduce() + self.assertEqual(obj.data, "baz") + + obj = ParamGen(data_dict, match="first") + obj.reduce() + self.assertEqual(obj.data, "foo")
+ + +
+[docs] + def test_undefined_var(self): + """Tests the reduce method of ParamGen on nested guards where an undefined expandable var is specified + below a guard that evaluates to False. The undefined var should not lead to an error since the enclosing + guard evaluates to false.""" + + # define an expansion function, i.e., a mapping for expandable var names to their values + test_map = {"alpha": 1, "beta": False} + expand_func = lambda var: test_map[var] + + # define a data dict + data_dict = {"param": {"$alpha >= 1": "foo", "${beta}": {"${zeta}": "bar"}}} + + # Instantiate a ParamGen object and reduce its data to obtain the final parameter set + obj = ParamGen(data_dict) + obj.reduce(expand_func) + self.assertEqual(obj.data, {"param": "foo"})
+ + +
+[docs] + def test_expandable_vars(self): + """Tests the reduce method of ParamGen with expandable vars in guards.""" + + # define an expansion function, i.e., a mapping for expandable var names to their values + test_map = {"alpha": 1, "beta": False, "gamma": "xyz"} + expand_func = lambda var: test_map[var] + + # define a data dict + data_dict = { + "param": {"$alpha > 1": "foo", "${beta}": "bar", '"x" in $gamma': "baz"} + } + + # Instantiate a ParamGen object and reduce its data to obtain the final parameter set + obj = ParamGen(data_dict) + obj.reduce(expand_func) + self.assertEqual(obj.data, {"param": "baz"})
+ + +
+[docs] + def test_formula_expansion(self): + """Tests the formula expansion feature of ParamGen.""" + + # define an expansion function, i.e., a mapping for expandable var names to their values + test_map = {"alpha": 3} + expand_func = lambda var: test_map[var] + + # define a data dict + data_dict = {"x": "= $alpha **2", "y": "= [i for i in range(3)]"} + + # Instantiate a ParamGen object and reduce its data to obtain the final parameter set + obj = ParamGen(data_dict) + obj.reduce(expand_func) + self.assertEqual(obj.data["x"], 9) + self.assertEqual(obj.data["y"], [0, 1, 2])
+
+ + + +##### + + +
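The guard semantics exercised above can be illustrated without ParamGen itself. In this self-contained sketch (an illustration of the tested behavior, not ParamGen's implementation), each dictionary key is a Python expression, and the value under the last (or, with match="first", the first) key that evaluates to True is selected:

```python
def reduce_guards(data, match="last"):
    """Pick the value whose guard expression evaluates to True."""
    result = None
    for guard, value in data.items():
        # Guards here are plain Python expressions; ParamGen additionally
        # expands $var / ${var} references before evaluating.
        if eval(guard, {"__builtins__": {}}, {}):
            result = value
            if match == "first":
                break
    return result
```

With the data from test_match above, reduce_guards({"1<2": "foo", "2<3": "bar", "3<4": "baz"}) yields "baz" under the default last-match rule and "foo" with match="first".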
+[docs] +class TestParamGenYamlConstructor(unittest.TestCase): + """A unit test class for testing ParamGen's yaml constructor.""" + +
+[docs] + def test_mom_input(self): + """Test MOM_input file generation via a subset of original MOM_input.yaml""" + + # Create temporary YAML file: + with tempfile.NamedTemporaryFile() as temp: + temp.write(_MOM_INPUT_YAML.encode()) + temp.flush() + + # Open YAML file using ParamGen: + mom_input = ParamGen.from_yaml(temp.name) + + # Define a local ParamGen reducing function: + def input_data_list_expand_func(varname): + val = case.get_value(varname) + if val == None: + val = str(mom_input.data["Global"][varname]["value"]).strip() + if val == None: + raise RuntimeError("Cannot determine the value of variable: " + varname) + return val + + # Reduce ParamGen entries: + mom_input.reduce(input_data_list_expand_func) + + # Check output: + self.assertEqual( + mom_input.data, + { + "Global": { + "INPUTDIR": {"value": "/foo/inputdata/ocn/mom/tx0.66v1"}, + "RESTORE_SALINITY": {"value": True}, + "INIT_LAYERS_FROM_Z_FILE": {"value": True}, + "TEMP_SALT_Z_INIT_FILE": { + "value": "woa18_04_initial_conditions.nc" + }, + } + }, + )
+ + +
+[docs] + def test_input_data_list(self): + """Test mom.input_data_list file generation via a subset of original input_data_list.yaml""" + + # Create temporary YAML file: + with tempfile.NamedTemporaryFile() as temp: + temp.write(_MOM_INPUT_YAML.encode()) + temp.flush() + + # Open YAML file using ParamGen: + mom_input = ParamGen.from_yaml(temp.name) + + # Define a local ParamGen reducing function: + def input_data_list_expand_func(varname): + val = case.get_value(varname) + if val == None: + val = str(mom_input.data["Global"][varname]["value"]).strip() + if val == None: + raise RuntimeError("Cannot determine the value of variable: " + varname) + return val + + # Reduce ParamGen entries: + mom_input.reduce(input_data_list_expand_func) + + # Create a second temporary YAML file: + with tempfile.NamedTemporaryFile() as temp2: + temp2.write(_MOM_INPUT_DATA_LIST_YAML.encode()) + temp2.flush() + + # Open second YAML file using ParamGen: + input_data_list = ParamGen.from_yaml(temp2.name) + + # Reduce ParamGen entries: + input_data_list.reduce(input_data_list_expand_func) + + # Check output: + self.assertEqual( + input_data_list.data, + { + "mom.input_data_list": { + "ocean_hgrid": "/foo/inputdata/ocn/mom/tx0.66v1/ocean_hgrid_180829.nc", + "tempsalt": "/foo/inputdata/ocn/mom/tx0.66v1/woa18_04_initial_conditions.nc", + } + }, + )
+
+ + + +##### + + +
+[docs] +class TestParamGenXmlConstructor(unittest.TestCase): + """A unit test class for testing ParamGen's xml constructor.""" + +
+[docs] + def test_single_key_val_guard(self): + """Test xml entry values with single key=value guards""" + + # Create temporary XML file: + with tempfile.NamedTemporaryFile() as temp: + temp.write(_MY_TEMPLATE_XML.encode()) + temp.flush() + + # Open XML file using ParamGen: + pg = ParamGen.from_xml_nml(temp.name) + + # Reduce ParamGen entries: + pg.reduce(_expand_func_demo) + + # Check output: + self.assertEqual(pg.data["test_nml"]["foo"]["values"], "beta")
+ + +
+[docs] + def test_mixed_guard(self): + """Tests multiple key=value guards mixed with explicit (flexible) guards.""" + + # Create temporary XML file: + with tempfile.NamedTemporaryFile() as temp: + temp.write(_MY_TEMPLATE_XML.encode()) + temp.flush() + + # Open XML file using ParamGen: + pg = ParamGen.from_xml_nml(temp.name) + + # Reduce ParamGen entries: + pg.reduce(_expand_func_demo) + + # Check output: + self.assertEqual(pg.data["test_nml"]["bar"]["values"], "epsilon")
+ + +
+[docs] + def test_mixed_guard_first(self): + """Tests multiple key=value guards mixed with explicit (flexible) guards + with match=first option.""" + + # Create temporary YAML file: + with tempfile.NamedTemporaryFile() as temp: + temp.write(_MY_TEMPLATE_XML.encode()) + temp.flush() + + # Open XML file using ParamGen: + pg = ParamGen.from_xml_nml(temp.name, match="first") + + # Reduce ParamGen entries: + pg.reduce(_expand_func_demo) + + # Check output: + self.assertEqual(pg.data["test_nml"]["bar"]["values"], "delta")
+ + +
+[docs] + def test_no_match(self): + """Tests an xml entry with no match, i.e., no guards evaluating to True.""" + + # Create temporary YAML file: + with tempfile.NamedTemporaryFile() as temp: + temp.write(_MY_TEMPLATE_XML.encode()) + temp.flush() + + # Open XML file using ParamGen: + pg = ParamGen.from_xml_nml(temp.name) + + # Reduce ParamGen entries: + pg.reduce(_expand_func_demo) + + # Check output: + self.assertEqual(pg.data["test_nml"]["baz"]["values"], None)
+ + +
+[docs] + def test_default_var(self): + """Test to check if default val is assigned when all guards eval to False""" + + # Create temporary YAML file: + with tempfile.NamedTemporaryFile() as temp: + temp.write(_MY_TEMPLATE_XML.encode()) + temp.flush() + + # Open XML file using ParamGen: + pg = ParamGen.from_xml_nml(temp.name) + + # Reduce ParamGen entries: + pg.reduce(lambda varname: "_") + + # Check output: + self.assertEqual(pg.data["test_nml"]["foo"]["values"], "alpha")
+ + +
+[docs] + def test_duplicate_entry_error(self): + """ + Test to make sure duplicate ids raise the correct error + when the "no_duplicates" flag is True. + """ + with self.assertRaises(ValueError) as verr: + + # Create temporary YAML file: + with tempfile.NamedTemporaryFile() as temp: + temp.write(_DUPLICATE_IDS_XML.encode()) + temp.flush() + + _ = ParamGen.from_xml_nml(temp.name, no_duplicates=True) + + emsg = "Entry id 'foo' listed twice in file:\n'./xml_test_files/duplicate_ids.xml'" + self.assertEqual(emsg, str(verr.exception))
+
+ + + +if __name__ == "__main__": + unittest.main() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_system_tests.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_system_tests.html new file mode 100644 index 00000000000..707d88fc50e --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_system_tests.html @@ -0,0 +1,806 @@ + + + + + + CIME.tests.test_unit_system_tests — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_system_tests

+#!/usr/bin/env python3
+
+import os
+import tempfile
+import gzip
+import re
+
+import unittest
+from unittest import mock
+from pathlib import Path
+
+from CIME.config import Config
+from CIME.SystemTests.system_tests_common import SystemTestsCommon
+from CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo
+from CIME.SystemTests.system_tests_compare_n import SystemTestsCompareN
+
+CPLLOG = """
+ tStamp_write: model date =   00010102       0 wall clock = 2023-09-19 19:39:42 avg dt =     0.33 dt =     0.33
+ memory_write: model date =   00010102       0 memory =    1673.89 MB (highwater)        387.77 MB (usage)  (pe=    0 comps= cpl ATM LND ICE OCN GLC ROF WAV IAC ESP)
+ tStamp_write: model date =   00010103       0 wall clock = 2023-09-19 19:39:42 avg dt =     0.33 dt =     0.33
+ memory_write: model date =   00010103       0 memory =    1673.89 MB (highwater)        390.09 MB (usage)  (pe=    0 comps= cpl ATM LND ICE OCN GLC ROF WAV IAC ESP)
+ tStamp_write: model date =   00010104       0 wall clock = 2023-09-19 19:39:42 avg dt =     0.33 dt =     0.33
+ memory_write: model date =   00010104       0 memory =    1673.89 MB (highwater)        391.64 MB (usage)  (pe=    0 comps= cpl ATM LND ICE OCN GLC ROF WAV IAC ESP)
+ tStamp_write: model date =   00010105       0 wall clock = 2023-09-19 19:39:43 avg dt =     0.33 dt =     0.33
+ memory_write: model date =   00010105       0 memory =    1673.89 MB (highwater)        392.67 MB (usage)  (pe=    0 comps= cpl ATM LND ICE OCN GLC ROF WAV IAC ESP)
+ tStamp_write: model date =   00010106       0 wall clock = 2023-09-19 19:39:43 avg dt =     0.33 dt =     0.33
+ memory_write: model date =   00010106       0 memory =    1673.89 MB (highwater)        393.44 MB (usage)  (pe=    0 comps= cpl ATM LND ICE OCN GLC ROF WAV IAC ESP)
+
+(seq_mct_drv): ===============          SUCCESSFUL TERMINATION OF CPL7-e3sm ===============
+(seq_mct_drv): ===============        at YMD,TOD =   00010106       0       ===============
+(seq_mct_drv): ===============  # simulated days (this run) =        5.000  ===============
+(seq_mct_drv): ===============  compute time (hrs)          =        0.000  ===============
+(seq_mct_drv): ===============  # simulated years / cmp-day =      719.635  ===============
+(seq_mct_drv): ===============  pes min memory highwater  (MB)     851.957  ===============
+(seq_mct_drv): ===============  pes max memory highwater  (MB)    1673.891  ===============
+(seq_mct_drv): ===============  pes min memory last usage (MB)     182.742  ===============
+(seq_mct_drv): ===============  pes max memory last usage (MB)     393.441  ===============
+"""
+
+
+
+[docs] +def create_mock_case(tempdir, idx=None, cpllog_data=None): + if idx is None: + idx = 0 + + case = mock.MagicMock() + + caseroot = Path(tempdir, str(idx), "caseroot") + baseline_root = caseroot.parent / "baselines" + run_dir = caseroot / "run" + run_dir.mkdir(parents=True, exist_ok=False) + + if cpllog_data is not None: + cpllog = run_dir / "cpl.log.gz" + + with gzip.open(cpllog, "w") as fd: + fd.write(cpllog_data.encode("utf-8")) + + case.get_latest_cpl_log.return_value = str(cpllog) + + hist_file = run_dir / "cpl.hi.2023-01-01.nc" + hist_file.touch() + + case.get_env.return_value.get_latest_hist_files.return_value = [str(hist_file)] + + case.get_compset_components.return_value = [] + + return case, caseroot, baseline_root, run_dir
+ + + +
+[docs] +class TestUnitSystemTests(unittest.TestCase): +
+[docs] + @mock.patch("CIME.SystemTests.system_tests_common.load_coupler_customization") + @mock.patch("CIME.SystemTests.system_tests_common.append_testlog") + @mock.patch("CIME.SystemTests.system_tests_common.perf_get_memory_list") + @mock.patch("CIME.SystemTests.system_tests_common.get_latest_cpl_logs") + def test_check_for_memleak_runtime_error( + self, + get_latest_cpl_logs, + perf_get_memory_list, + append_testlog, + load_coupler_customization, + ): + load_coupler_customization.return_value.perf_check_for_memory_leak.side_effect = ( + AttributeError + ) + + perf_get_memory_list.side_effect = RuntimeError + + with tempfile.TemporaryDirectory() as tempdir: + caseroot = Path(tempdir) / "caseroot" + caseroot.mkdir(parents=True, exist_ok=False) + + rundir = caseroot / "run" + rundir.mkdir(parents=True, exist_ok=False) + + cpllog = rundir / "cpl.log.gz" + + get_latest_cpl_logs.return_value = [ + str(cpllog), + ] + + case = mock.MagicMock() + case.get_value.side_effect = ( + str(caseroot), + "ERIO.ne30_g16_rx1.A.docker_gnu", + "mct", + 0.01, + ) + + common = SystemTestsCommon(case) + + common._test_status = mock.MagicMock() + + common._check_for_memleak() + + common._test_status.set_status.assert_any_call( + "MEMLEAK", "PASS", comments="insufficient data for memleak test" + ) + + append_testlog.assert_not_called()
+ + +
+[docs] + @mock.patch("CIME.SystemTests.system_tests_common.load_coupler_customization") + @mock.patch("CIME.SystemTests.system_tests_common.append_testlog") + @mock.patch("CIME.SystemTests.system_tests_common.perf_get_memory_list") + @mock.patch("CIME.SystemTests.system_tests_common.get_latest_cpl_logs") + def test_check_for_memleak_not_enough_samples( + self, + get_latest_cpl_logs, + perf_get_memory_list, + append_testlog, + load_coupler_customization, + ): + load_coupler_customization.return_value.perf_check_for_memory_leak.side_effect = ( + AttributeError + ) + + perf_get_memory_list.return_value = [ + (1, 1000.0), + (2, 0), + ] + + with tempfile.TemporaryDirectory() as tempdir: + caseroot = Path(tempdir) / "caseroot" + caseroot.mkdir(parents=True, exist_ok=False) + + rundir = caseroot / "run" + rundir.mkdir(parents=True, exist_ok=False) + + cpllog = rundir / "cpl.log.gz" + + get_latest_cpl_logs.return_value = [ + str(cpllog), + ] + + case = mock.MagicMock() + case.get_value.side_effect = ( + str(caseroot), + "ERIO.ne30_g16_rx1.A.docker_gnu", + "mct", + 0.01, + ) + + common = SystemTestsCommon(case) + + common._test_status = mock.MagicMock() + + common._check_for_memleak() + + common._test_status.set_status.assert_any_call( + "MEMLEAK", "PASS", comments="data for memleak test is insufficient" + ) + + append_testlog.assert_not_called()
+ + +
+[docs] + @mock.patch("CIME.SystemTests.system_tests_common.load_coupler_customization") + @mock.patch("CIME.SystemTests.system_tests_common.append_testlog") + @mock.patch("CIME.SystemTests.system_tests_common.perf_get_memory_list") + @mock.patch("CIME.SystemTests.system_tests_common.get_latest_cpl_logs") + def test_check_for_memleak_found( + self, + get_latest_cpl_logs, + perf_get_memory_list, + append_testlog, + load_coupler_customization, + ): + load_coupler_customization.return_value.perf_check_for_memory_leak.side_effect = ( + AttributeError + ) + + perf_get_memory_list.return_value = [ + (1, 1000.0), + (2, 2000.0), + (3, 3000.0), + (4, 3000.0), + ] + + with tempfile.TemporaryDirectory() as tempdir: + caseroot = Path(tempdir) / "caseroot" + caseroot.mkdir(parents=True, exist_ok=False) + + rundir = caseroot / "run" + rundir.mkdir(parents=True, exist_ok=False) + + cpllog = rundir / "cpl.log.gz" + + get_latest_cpl_logs.return_value = [ + str(cpllog), + ] + + case = mock.MagicMock() + case.get_value.side_effect = ( + str(caseroot), + "ERIO.ne30_g16_rx1.A.docker_gnu", + "mct", + 0.01, + ) + + common = SystemTestsCommon(case) + + common._test_status = mock.MagicMock() + + common._check_for_memleak() + + expected_comment = "memleak detected, memory went from 2000.000000 to 3000.000000 in 2 days" + + common._test_status.set_status.assert_any_call( + "MEMLEAK", "FAIL", comments=expected_comment + ) + + append_testlog.assert_any_call(expected_comment, str(caseroot))
+ + +
+[docs] + @mock.patch("CIME.SystemTests.system_tests_common.load_coupler_customization") + @mock.patch("CIME.SystemTests.system_tests_common.append_testlog") + @mock.patch("CIME.SystemTests.system_tests_common.perf_get_memory_list") + @mock.patch("CIME.SystemTests.system_tests_common.get_latest_cpl_logs") + def test_check_for_memleak( + self, + get_latest_cpl_logs, + perf_get_memory_list, + append_testlog, + load_coupler_customization, + ): + load_coupler_customization.return_value.perf_check_for_memory_leak.side_effect = ( + AttributeError + ) + + perf_get_memory_list.return_value = [ + (1, 3040.0), + (2, 3002.0), + (3, 3030.0), + (4, 3008.0), + ] + + with tempfile.TemporaryDirectory() as tempdir: + caseroot = Path(tempdir) / "caseroot" + caseroot.mkdir(parents=True, exist_ok=False) + + rundir = caseroot / "run" + rundir.mkdir(parents=True, exist_ok=False) + + cpllog = rundir / "cpl.log.gz" + + get_latest_cpl_logs.return_value = [ + str(cpllog), + ] + + case = mock.MagicMock() + case.get_value.side_effect = ( + str(caseroot), + "ERIO.ne30_g16_rx1.A.docker_gnu", + "mct", + 0.01, + ) + + common = SystemTestsCommon(case) + + common._test_status = mock.MagicMock() + + common._check_for_memleak() + + common._test_status.set_status.assert_any_call( + "MEMLEAK", "PASS", comments="" + ) + + append_testlog.assert_not_called()
+ + +
+[docs] + @mock.patch("CIME.SystemTests.system_tests_common.perf_compare_throughput_baseline") + @mock.patch("CIME.SystemTests.system_tests_common.append_testlog") + def test_compare_throughput(self, append_testlog, perf_compare_throughput_baseline): + perf_compare_throughput_baseline.return_value = ( + True, + "TPUTCOMP: Computation time changed by 2.00% relative to baseline", + ) + + with tempfile.TemporaryDirectory() as tempdir: + caseroot = Path(tempdir) / "caseroot" + caseroot.mkdir(parents=True, exist_ok=False) + + case = mock.MagicMock() + case.get_value.side_effect = ( + str(Path(tempdir) / "caseroot"), + "ERIO.ne30_g16_rx1.A.docker_gnu", + "mct", + ) + + common = SystemTestsCommon(case) + + common._compare_throughput() + + assert common._test_status.get_overall_test_status() == ("PASS", None) + + append_testlog.assert_any_call( + "TPUTCOMP: Computation time changed by 2.00% relative to baseline", + str(caseroot), + )
+ + +
+[docs] + @mock.patch("CIME.SystemTests.system_tests_common.perf_compare_throughput_baseline") + @mock.patch("CIME.SystemTests.system_tests_common.append_testlog") + def test_compare_throughput_error_diff( + self, append_testlog, perf_compare_throughput_baseline + ): + perf_compare_throughput_baseline.return_value = (None, "Error diff value") + + with tempfile.TemporaryDirectory() as tempdir: + caseroot = Path(tempdir) / "caseroot" + caseroot.mkdir(parents=True, exist_ok=False) + + case = mock.MagicMock() + case.get_value.side_effect = ( + str(Path(tempdir) / "caseroot"), + "ERIO.ne30_g16_rx1.A.docker_gnu", + "mct", + ) + + common = SystemTestsCommon(case) + + common._compare_throughput() + + assert common._test_status.get_overall_test_status() == ("PASS", None) + + append_testlog.assert_not_called()
+ + +
+[docs] + @mock.patch("CIME.SystemTests.system_tests_common.perf_compare_throughput_baseline") + @mock.patch("CIME.SystemTests.system_tests_common.append_testlog") + def test_compare_throughput_fail( + self, append_testlog, perf_compare_throughput_baseline + ): + perf_compare_throughput_baseline.return_value = ( + False, + "Error: TPUTCOMP: Computation time increase > 5% from baseline", + ) + + with tempfile.TemporaryDirectory() as tempdir: + caseroot = Path(tempdir) / "caseroot" + caseroot.mkdir(parents=True, exist_ok=False) + + case = mock.MagicMock() + case.get_value.side_effect = ( + str(Path(tempdir) / "caseroot"), + "ERIO.ne30_g16_rx1.A.docker_gnu", + "mct", + ) + + common = SystemTestsCommon(case) + + common._compare_throughput() + + assert common._test_status.get_overall_test_status() == ("PASS", None) + + append_testlog.assert_any_call( + "Error: TPUTCOMP: Computation time increase > 5% from baseline", + str(caseroot), + )
+ + +
+[docs] + @mock.patch("CIME.SystemTests.system_tests_common.perf_compare_memory_baseline") + @mock.patch("CIME.SystemTests.system_tests_common.append_testlog") + def test_compare_memory(self, append_testlog, perf_compare_memory_baseline): + perf_compare_memory_baseline.return_value = ( + True, + "MEMCOMP: Memory usage highwater has changed by 2.00% relative to baseline", + ) + + with tempfile.TemporaryDirectory() as tempdir: + caseroot = Path(tempdir) / "caseroot" + caseroot.mkdir(parents=True, exist_ok=False) + + case = mock.MagicMock() + case.get_value.side_effect = ( + str(caseroot), + "ERIO.ne30_g16_rx1.A.docker_gnu", + "mct", + ) + + common = SystemTestsCommon(case) + + common._compare_memory() + + assert common._test_status.get_overall_test_status() == ("PASS", None) + + append_testlog.assert_any_call( + "MEMCOMP: Memory usage highwater has changed by 2.00% relative to baseline", + str(caseroot), + )
+ + +
+[docs] + @mock.patch("CIME.SystemTests.system_tests_common.perf_compare_memory_baseline") + @mock.patch("CIME.SystemTests.system_tests_common.append_testlog") + def test_compare_memory_erorr_diff( + self, append_testlog, perf_compare_memory_baseline + ): + perf_compare_memory_baseline.return_value = (None, "Error diff value") + + with tempfile.TemporaryDirectory() as tempdir: + caseroot = Path(tempdir) / "caseroot" + caseroot.mkdir(parents=True, exist_ok=False) + + case = mock.MagicMock() + case.get_value.side_effect = ( + str(caseroot), + "ERIO.ne30_g16_rx1.A.docker_gnu", + "mct", + ) + + common = SystemTestsCommon(case) + + common._compare_memory() + + assert common._test_status.get_overall_test_status() == ("PASS", None) + + append_testlog.assert_not_called()
+ + +
+[docs] + @mock.patch("CIME.SystemTests.system_tests_common.perf_compare_memory_baseline") + @mock.patch("CIME.SystemTests.system_tests_common.append_testlog") + def test_compare_memory_erorr_fail( + self, append_testlog, perf_compare_memory_baseline + ): + perf_compare_memory_baseline.return_value = ( + False, + "Error: Memory usage increase >5% from baseline's 1000.000000 to 1002.000000", + ) + + with tempfile.TemporaryDirectory() as tempdir: + caseroot = Path(tempdir) / "caseroot" + caseroot.mkdir(parents=True, exist_ok=False) + + case = mock.MagicMock() + case.get_value.side_effect = ( + str(caseroot), + "ERIO.ne30_g16_rx1.A.docker_gnu", + "mct", + ) + + common = SystemTestsCommon(case) + + common._compare_memory() + + assert common._test_status.get_overall_test_status() == ("PASS", None) + + append_testlog.assert_any_call( + "Error: Memory usage increase >5% from baseline's 1000.000000 to 1002.000000", + str(caseroot), + )
+ + +
+[docs] + def test_generate_baseline(self): + with tempfile.TemporaryDirectory() as tempdir: + case, caseroot, baseline_root, run_dir = create_mock_case( + tempdir, cpllog_data=CPLLOG + ) + + get_value_calls = [ + str(caseroot), + "ERIO.ne30_g16_rx1.A.docker_gnu", + "mct", + str(run_dir), + "case.std", + str(baseline_root), + "master/ERIO.ne30_g16_rx1.A.docker_gnu", + "ERIO.ne30_g16_rx1.A.docker_gnu.G.20230919_193255_z9hg2w", + "mct", + str(run_dir), + "ERIO", + "ERIO.ne30_g16_rx1.A.docker_gnu", + "master/ERIO.ne30_g16_rx1.A.docker_gnu", + str(baseline_root), + "master/ERIO.ne30_g16_rx1.A.docker_gnu", + str(run_dir), + "mct", + "/tmp/components/cpl", + str(run_dir), + "mct", + str(run_dir), + "mct", + ] + + if Config.instance().create_bless_log: + get_value_calls.insert(12, os.getcwd()) + + case.get_value.side_effect = get_value_calls + + common = SystemTestsCommon(case) + + common._generate_baseline() + + baseline_dir = baseline_root / "master" / "ERIO.ne30_g16_rx1.A.docker_gnu" + + assert (baseline_dir / "cpl.log.gz").exists() + assert (baseline_dir / "cpl-tput.log").exists() + assert (baseline_dir / "cpl-mem.log").exists() + assert (baseline_dir / "cpl.hi.2023-01-01.nc").exists() + + with open(baseline_dir / "cpl-tput.log") as fd: + lines = fd.readlines() + + assert len(lines) == 1 + assert re.match("sha:.* date:.* (\d+\.\d+)", lines[0]) + + with open(baseline_dir / "cpl-mem.log") as fd: + lines = fd.readlines() + + assert len(lines) == 1 + assert re.match("sha:.* date:.* (\d+\.\d+)", lines[0])
+ + +
+[docs] + def test_kwargs(self): + case = mock.MagicMock() + + case.get_value.side_effect = ( + "/caseroot", + "SMS.f19_g16.S", + "cpl", + "/caseroot", + "SMS.f19_g16.S", + ) + + _ = SystemTestsCommon(case, something="random") + + case = mock.MagicMock() + + case.get_value.side_effect = ( + "/caseroot", + "SMS.f19_g16.S", + "cpl", + "/caseroot", + "SMS.f19_g16.S", + ) + + orig1 = SystemTestsCompareTwo._get_caseroot + orig2 = SystemTestsCompareTwo._get_caseroot2 + + SystemTestsCompareTwo._get_caseroot = mock.MagicMock() + SystemTestsCompareTwo._get_caseroot2 = mock.MagicMock() + + _ = SystemTestsCompareTwo(case, something="random") + + SystemTestsCompareTwo._get_caseroot = orig1 + SystemTestsCompareTwo._get_caseroot2 = orig2 + + case = mock.MagicMock() + + case.get_value.side_effect = ( + "/caseroot", + "SMS.f19_g16.S", + "cpl", + "/caseroot", + "SMS.f19_g16.S", + ) + + orig = SystemTestsCompareN._get_caseroots + + SystemTestsCompareN._get_caseroots = mock.MagicMock() + + _ = SystemTestsCompareN(case, something="random") + + SystemTestsCompareN._get_caseroots = orig
+ + +
+[docs] + def test_dry_run(self): + case = mock.MagicMock() + + case.get_value.side_effect = ( + "/caseroot", + "SMS.f19_g16.S", + "cpl", + "/caseroot", + "SMS.f19_g16.S", + ) + + orig = SystemTestsCompareTwo._setup_cases_if_not_yet_done + + SystemTestsCompareTwo._setup_cases_if_not_yet_done = mock.MagicMock() + + system_test = SystemTestsCompareTwo(case, dry_run=True) + + system_test._setup_cases_if_not_yet_done.assert_not_called() + + case = mock.MagicMock() + + case.get_value.side_effect = ( + "/caseroot", + "SMS.f19_g16.S", + "cpl", + "/caseroot", + "SMS.f19_g16.S", + ) + + system_test = SystemTestsCompareTwo(case) + + system_test._setup_cases_if_not_yet_done.assert_called() + + SystemTestsCompareTwo._setup_cases_if_not_yet_done = orig + + orig = SystemTestsCompareN._setup_cases_if_not_yet_done + + SystemTestsCompareN._setup_cases_if_not_yet_done = mock.MagicMock() + + case = mock.MagicMock() + + case.get_value.side_effect = ( + "/caseroot", + "SMS.f19_g16.S", + "cpl", + "/caseroot", + "SMS.f19_g16.S", + ) + + system_test = SystemTestsCompareN(case, dry_run=True) + + system_test._setup_cases_if_not_yet_done.assert_not_called() + + case = mock.MagicMock() + + case.get_value.side_effect = ( + "/caseroot", + "SMS.f19_g16.S", + "cpl", + "/caseroot", + "SMS.f19_g16.S", + ) + + system_test = SystemTestsCompareN(case) + + system_test._setup_cases_if_not_yet_done.assert_called() + + SystemTestsCompareN._setup_cases_if_not_yet_done = orig
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_test_status.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_test_status.html new file mode 100644 index 00000000000..9a3582c111c --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_test_status.html @@ -0,0 +1,316 @@ + + + + + + CIME.tests.test_unit_test_status — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_test_status

+#!/usr/bin/env python3
+
+import unittest
+import os
+from CIME import test_status
+from CIME import expected_fails
+from CIME.tests.custom_assertions_test_status import CustomAssertionsTestStatus
+
+
+
+[docs] +class TestTestStatus(CustomAssertionsTestStatus): + + _TESTNAME = "fake_test" + + # An arbitrary phase we can use when we want to work with a non-core phase + _NON_CORE_PHASE = test_status.MEMLEAK_PHASE + +
+[docs] + def setUp(self): + self._ts = test_status.TestStatus( + test_dir=os.path.join("nonexistent", "path"), + test_name=self._TESTNAME, + no_io=True, + ) + self._set_core_phases_to_pass()
+ + + def _set_core_phases_to_pass(self): + """Set all core phases of self._ts to pass status""" + with self._ts: + for phase in test_status.CORE_PHASES: + self._ts.set_status(phase, test_status.TEST_PASS_STATUS) + + def _set_last_core_phase_to_fail(self): + """Sets the last core phase to FAIL + + Returns the name of this phase""" + fail_phase = test_status.CORE_PHASES[-1] + self._set_phase_to_status(fail_phase, test_status.TEST_FAIL_STATUS) + return fail_phase + + def _set_phase_to_status(self, phase, status): + """Set given phase to given status""" + with self._ts: + self._ts.set_status(phase, status) + +
+[docs] + def test_get_latest_phase(self): + assert self._ts.get_latest_phase() == test_status.RUN_PHASE
+ + +
+[docs] + def test_current_is(self): + assert self._ts.current_is(test_status.RUN_PHASE, test_status.TEST_PASS_STATUS) + + assert not self._ts.current_is( + test_status.RUN_PHASE, test_status.TEST_PEND_STATUS + ) + + assert not self._ts.current_is( + test_status.SUBMIT_PHASE, test_status.TEST_PASS_STATUS + )
+ + + # ------------------------------------------------------------------------ + # Tests of TestStatus.phase_statuses_dump + # ------------------------------------------------------------------------ + +
+[docs] + def test_psdump_corePhasesPass(self): + output = self._ts.phase_statuses_dump() + self.assert_core_phases(output, self._TESTNAME, fails=[]) + self.assert_num_expected_unexpected_fails( + output, num_expected=0, num_unexpected=0 + )
+ + +
+[docs] + def test_psdump_oneCorePhaseFails(self): + fail_phase = self._set_last_core_phase_to_fail() + output = self._ts.phase_statuses_dump() + self.assert_core_phases(output, self._TESTNAME, fails=[fail_phase]) + self.assert_num_expected_unexpected_fails( + output, num_expected=0, num_unexpected=0 + )
+ + +
+[docs] + def test_psdump_oneCorePhaseFailsAbsentFromXFails(self): + """One phase fails. There is an expected fails list, but that phase is not in it.""" + fail_phase = self._set_last_core_phase_to_fail() + xfails = expected_fails.ExpectedFails() + xfails.add_failure( + phase=self._NON_CORE_PHASE, expected_status=test_status.TEST_FAIL_STATUS + ) + output = self._ts.phase_statuses_dump(xfails=xfails) + self.assert_status_of_phase( + output, test_status.TEST_FAIL_STATUS, fail_phase, self._TESTNAME, xfail="no" + ) + self.assert_num_expected_unexpected_fails( + output, num_expected=0, num_unexpected=0 + )
+ + +
+[docs] + def test_psdump_oneCorePhaseFailsInXFails(self): + """One phase fails. That phase is in the expected fails list.""" + fail_phase = self._set_last_core_phase_to_fail() + xfails = expected_fails.ExpectedFails() + xfails.add_failure( + phase=fail_phase, expected_status=test_status.TEST_FAIL_STATUS + ) + output = self._ts.phase_statuses_dump(xfails=xfails) + self.assert_status_of_phase( + output, + test_status.TEST_FAIL_STATUS, + fail_phase, + self._TESTNAME, + xfail="expected", + ) + self.assert_num_expected_unexpected_fails( + output, num_expected=1, num_unexpected=0 + )
+ + +
+[docs] + def test_psdump_oneCorePhasePassesInXFails(self): + """One phase passes despite being in the expected fails list.""" + xfail_phase = test_status.CORE_PHASES[-1] + xfails = expected_fails.ExpectedFails() + xfails.add_failure( + phase=xfail_phase, expected_status=test_status.TEST_FAIL_STATUS + ) + output = self._ts.phase_statuses_dump(xfails=xfails) + self.assert_status_of_phase( + output, + test_status.TEST_PASS_STATUS, + xfail_phase, + self._TESTNAME, + xfail="unexpected", + ) + self.assert_num_expected_unexpected_fails( + output, num_expected=0, num_unexpected=1 + )
+ + +
+[docs] + def test_psdump_skipPasses(self): + """With the skip_passes argument, only non-passes should appear""" + fail_phase = self._set_last_core_phase_to_fail() + output = self._ts.phase_statuses_dump(skip_passes=True) + self.assert_status_of_phase( + output, test_status.TEST_FAIL_STATUS, fail_phase, self._TESTNAME, xfail="no" + ) + for phase in test_status.CORE_PHASES: + if phase != fail_phase: + self.assert_phase_absent(output, phase, self._TESTNAME)
+ + +
+[docs] + def test_psdump_unexpectedPass_shouldBePresent(self): + """Even with the skip_passes argument, an unexpected PASS should be present""" + xfail_phase = test_status.CORE_PHASES[-1] + xfails = expected_fails.ExpectedFails() + xfails.add_failure( + phase=xfail_phase, expected_status=test_status.TEST_FAIL_STATUS + ) + output = self._ts.phase_statuses_dump(skip_passes=True, xfails=xfails) + self.assert_status_of_phase( + output, + test_status.TEST_PASS_STATUS, + xfail_phase, + self._TESTNAME, + xfail="unexpected", + ) + for phase in test_status.CORE_PHASES: + if phase != xfail_phase: + self.assert_phase_absent(output, phase, self._TESTNAME)
+
+ + + +if __name__ == "__main__": + unittest.main() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_two_link_to_case2_output.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_two_link_to_case2_output.html new file mode 100644 index 00000000000..0446ecb5232 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_two_link_to_case2_output.html @@ -0,0 +1,303 @@ + + + + + + CIME.tests.test_unit_two_link_to_case2_output — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_two_link_to_case2_output

+#!/usr/bin/env python3
+
+"""
+This module contains unit tests of the method
+SystemTestsCompareTwo._link_to_case2_output
+"""
+
+# Ignore privacy concerns for unit tests, so that unit tests can access
+# protected members of the system under test
+#
+# pylint:disable=protected-access
+
+import unittest
+import os
+import shutil
+import tempfile
+from CIME.SystemTests.system_tests_compare_two import SystemTestsCompareTwo
+from CIME.tests.case_fake import CaseFake
+
+# ========================================================================
+# Fake version of SystemTestsCompareTwo that overrides some functionality for
+# the sake of unit testing
+# ========================================================================
+
+
+
+[docs] +class SystemTestsCompareTwoFake(SystemTestsCompareTwo): + def __init__(self, case1, run_two_suffix="test"): + + SystemTestsCompareTwo.__init__( + self, case1, separate_builds=False, run_two_suffix=run_two_suffix + ) + + # ------------------------------------------------------------------------ + # Stubs of methods called by SystemTestsCommon.__init__ that interact with + # the system or case object in ways we want to avoid here + # ------------------------------------------------------------------------ + + def _init_environment(self, caseroot): + pass + + def _init_locked_files(self, caseroot, expected): + pass + + def _init_case_setup(self): + pass + + # ------------------------------------------------------------------------ + # Stubs of methods that are typically provided by the individual test + # ------------------------------------------------------------------------ + + def _case_one_setup(self): + pass + + def _case_two_setup(self): + pass
+ + + +# ======================================================================== +# Test class itself +# ======================================================================== + + +
+[docs] +class TestLinkToCase2Output(unittest.TestCase): + + # ======================================================================== + # Test helper functions + # ======================================================================== + +
+[docs] + def setUp(self): + self.original_wd = os.getcwd() + # Create a sandbox in which case directories can be created + self.tempdir = tempfile.mkdtemp()
+ + +
+[docs] + def tearDown(self): + # Some tests trigger a chdir call in the SUT; make sure we return to the + # original directory at the end of the test + os.chdir(self.original_wd) + + shutil.rmtree(self.tempdir, ignore_errors=True)
+ + +
+[docs] + def setup_test_and_directories(self, casename1, run2_suffix): + """ + Returns test object + """ + + case1root = os.path.join(self.tempdir, casename1) + case1 = CaseFake(case1root) + mytest = SystemTestsCompareTwoFake(case1, run_two_suffix=run2_suffix) + mytest._case1.make_rundir() # pylint: disable=maybe-no-member + mytest._case2.make_rundir() # pylint: disable=maybe-no-member + + return mytest
+ + +
+[docs] + def create_file_in_rundir2(self, mytest, core_filename, run2_suffix): + """ + Creates a file in rundir2 named CASE2.CORE_FILENAME.nc.RUN2_SUFFIX + (where CASE2 is the casename of case2) + + Returns full path to the file created + """ + filename = "{}.{}.nc.{}".format( + mytest._case2.get_value("CASE"), core_filename, run2_suffix + ) + filepath = os.path.join(mytest._case2.get_value("RUNDIR"), filename) + open(filepath, "w").close() + return filepath
+ + + # ======================================================================== + # Begin actual tests + # ======================================================================== + +
+[docs] + def test_basic(self): + # Setup + casename1 = "mytest" + run2_suffix = "run2" + + mytest = self.setup_test_and_directories(casename1, run2_suffix) + filepath1 = self.create_file_in_rundir2(mytest, "clm2.h0", run2_suffix) + filepath2 = self.create_file_in_rundir2(mytest, "clm2.h1", run2_suffix) + + # Exercise + mytest._link_to_case2_output() + + # Verify + expected_link_filename1 = "{}.clm2.h0.nc.{}".format(casename1, run2_suffix) + expected_link_filepath1 = os.path.join( + mytest._case1.get_value("RUNDIR"), expected_link_filename1 + ) + self.assertTrue(os.path.islink(expected_link_filepath1)) + self.assertEqual(filepath1, os.readlink(expected_link_filepath1)) + + expected_link_filename2 = "{}.clm2.h1.nc.{}".format(casename1, run2_suffix) + expected_link_filepath2 = os.path.join( + mytest._case1.get_value("RUNDIR"), expected_link_filename2 + ) + self.assertTrue(os.path.islink(expected_link_filepath2)) + self.assertEqual(filepath2, os.readlink(expected_link_filepath2))
+ + + +
+ + + # (No verification: Test passes if no exception was raised) + + +if __name__ == "__main__": + unittest.main() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_user_mod_support.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_user_mod_support.html new file mode 100644 index 00000000000..98dd046a3c0 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_user_mod_support.html @@ -0,0 +1,363 @@ + + + + + + CIME.tests.test_unit_user_mod_support — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_user_mod_support

+#!/usr/bin/env python3
+
+import unittest
+import shutil
+import tempfile
+import os
+from CIME.user_mod_support import apply_user_mods
+from CIME.utils import CIMEError
+
+# ========================================================================
+# Define some parameters
+# ========================================================================
+
+_SOURCEMODS = os.path.join("SourceMods", "src.drv")
+
+
+
+[docs] +class TestUserModSupport(unittest.TestCase): + + # ======================================================================== + # Test helper functions + # ======================================================================== + +
+[docs] + def setUp(self): + self._caseroot = tempfile.mkdtemp() + self._caseroot_sourcemods = os.path.join(self._caseroot, _SOURCEMODS) + os.makedirs(self._caseroot_sourcemods) + self._user_mods_parent_dir = tempfile.mkdtemp()
+ + +
+[docs] + def tearDown(self): + shutil.rmtree(self._caseroot, ignore_errors=True) + shutil.rmtree(self._user_mods_parent_dir, ignore_errors=True)
+ + +
+[docs] + def createUserMod(self, name, include_dirs=None): + """Create a user_mods directory with the given name. + + This directory is created within self._user_mods_parent_dir + + For name='foo', it will contain: + + - A user_nl_cpl file with contents: + foo + + - A shell_commands file with contents: + echo foo >> /PATH/TO/CASEROOT/shell_commands_result + + - A file in _SOURCEMODS named myfile.F90 with contents: + foo + + If include_dirs is given, it should be a list of strings, giving names + of other user_mods directories to include. e.g., if include_dirs is + ['foo1', 'foo2'], then this will create a file 'include_user_mods' that + contains paths to the 'foo1' and 'foo2' user_mods directories, one per + line. + """ + + mod_dir = os.path.join(self._user_mods_parent_dir, name) + os.makedirs(mod_dir) + mod_dir_sourcemods = os.path.join(mod_dir, _SOURCEMODS) + os.makedirs(mod_dir_sourcemods) + + with open(os.path.join(mod_dir, "user_nl_cpl"), "w") as user_nl_cpl: + user_nl_cpl.write(name + "\n") + with open(os.path.join(mod_dir, "shell_commands"), "w") as shell_commands: + command = "echo {} >> {}/shell_commands_result\n".format( + name, self._caseroot + ) + shell_commands.write(command) + with open(os.path.join(mod_dir_sourcemods, "myfile.F90"), "w") as f90_file: + f90_file.write(name + "\n") + + if include_dirs: + with open( + os.path.join(mod_dir, "include_user_mods"), "w" + ) as include_user_mods: + for one_include in include_dirs: + include_user_mods.write( + os.path.join(self._user_mods_parent_dir, one_include) + "\n" + )
+ + +
+[docs] + def assertResults( + self, + expected_user_nl_cpl, + expected_shell_commands_result, + expected_sourcemod, + msg="", + ): + """Asserts that the contents of the files in self._caseroot match expectations + + If msg is provided, it is printed for some failing assertions + """ + + path_to_user_nl_cpl = os.path.join(self._caseroot, "user_nl_cpl") + self.assertTrue( + os.path.isfile(path_to_user_nl_cpl), + msg=msg + ": user_nl_cpl does not exist", + ) + with open(path_to_user_nl_cpl, "r") as user_nl_cpl: + contents = user_nl_cpl.read() + self.assertEqual(expected_user_nl_cpl, contents) + + path_to_shell_commands_result = os.path.join( + self._caseroot, "shell_commands_result" + ) + self.assertTrue( + os.path.isfile(path_to_shell_commands_result), + msg=msg + ": shell_commands_result does not exist", + ) + with open(path_to_shell_commands_result, "r") as shell_commands_result: + contents = shell_commands_result.read() + self.assertEqual(expected_shell_commands_result, contents) + + path_to_sourcemod = os.path.join(self._caseroot_sourcemods, "myfile.F90") + self.assertTrue( + os.path.isfile(path_to_sourcemod), + msg=msg + ": sourcemod file does not exist", + ) + with open(path_to_sourcemod, "r") as sourcemod: + contents = sourcemod.read() + self.assertEqual(expected_sourcemod, contents)
+ + + # ======================================================================== + # Begin actual tests + # ======================================================================== + +
+[docs] + def test_basic(self): + self.createUserMod("foo") + apply_user_mods(self._caseroot, os.path.join(self._user_mods_parent_dir, "foo")) + self.assertResults( + expected_user_nl_cpl="foo\n", + expected_shell_commands_result="foo\n", + expected_sourcemod="foo\n", + msg="test_basic", + )
+ + +
+[docs] + def test_keepexe(self): + self.createUserMod("foo") + with self.assertRaisesRegex(CIMEError, "cannot have any source mods"): + apply_user_mods( + self._caseroot, + os.path.join(self._user_mods_parent_dir, "foo"), + keepexe=True, + )
+ + +
+[docs] + def test_two_applications(self): + """If apply_user_mods is called twice, the second should appear after the first so that it takes precedence.""" + + self.createUserMod("foo1") + self.createUserMod("foo2") + apply_user_mods( + self._caseroot, os.path.join(self._user_mods_parent_dir, "foo1") + ) + apply_user_mods( + self._caseroot, os.path.join(self._user_mods_parent_dir, "foo2") + ) + self.assertResults( + expected_user_nl_cpl="foo1\nfoo2\n", + expected_shell_commands_result="foo1\nfoo2\n", + expected_sourcemod="foo2\n", + msg="test_two_applications", + )
+ + +
+[docs] + def test_include(self): + """If there is an included mod, the main one should appear after the included one so that it takes precedence.""" + + self.createUserMod("base") + self.createUserMod("derived", include_dirs=["base"]) + + apply_user_mods( + self._caseroot, os.path.join(self._user_mods_parent_dir, "derived") + ) + + self.assertResults( + expected_user_nl_cpl="base\nderived\n", + expected_shell_commands_result="base\nderived\n", + expected_sourcemod="derived\n", + msg="test_include", + )
+ + +
+[docs] + def test_duplicate_includes(self): + """Test multiple includes, where both include the same base mod. + + The base mod should only be included once. + """ + + self.createUserMod("base") + self.createUserMod("derived1", include_dirs=["base"]) + self.createUserMod("derived2", include_dirs=["base"]) + self.createUserMod("derived_combo", include_dirs=["derived1", "derived2"]) + + apply_user_mods( + self._caseroot, os.path.join(self._user_mods_parent_dir, "derived_combo") + ) + + # NOTE(wjs, 2017-04-15) The ordering of derived1 vs. derived2 is not + # critical here: If this aspect of the behavior changes, the + # expected_contents can be changed to match the new behavior in this + # respect. + expected_contents = """base +derived2 +derived1 +derived_combo +""" + self.assertResults( + expected_user_nl_cpl=expected_contents, + expected_shell_commands_result=expected_contents, + expected_sourcemod="derived_combo\n", + msg="test_duplicate_includes", + )
+
+ + + +if __name__ == "__main__": + unittest.main() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_user_nl_utils.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_user_nl_utils.html new file mode 100644 index 00000000000..ea8b6b6a0ea --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_user_nl_utils.html @@ -0,0 +1,306 @@ + + + + + + CIME.tests.test_unit_user_nl_utils — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_user_nl_utils

+#!/usr/bin/env python3
+
+import unittest
+import os
+import shutil
+import tempfile
+from CIME.SystemTests.test_utils import user_nl_utils
+
+
+
+[docs] +class TestUserNLCopier(unittest.TestCase): + + # ======================================================================== + # Test helper functions + # ======================================================================== + +
+[docs] + def setUp(self): + self._caseroot = tempfile.mkdtemp()
+ + +
+[docs] + def tearDown(self): + shutil.rmtree(self._caseroot, ignore_errors=True)
+ + +
+[docs] + def write_user_nl_file(self, component, contents, suffix=""): + """Write contents to a user_nl file in the case directory. Returns the + basename (i.e., not the full path) of the file that is created. + + For a component foo, with the default suffix of '', the file name will + be user_nl_foo + + If the suffix is '_0001', the file name will be user_nl_foo_0001 + """ + + filename = "user_nl_" + component + suffix + + with open(os.path.join(self._caseroot, filename), "w") as user_nl_file: + user_nl_file.write(contents) + + return filename
+ + +
+[docs] + def assertFileContentsEqual(self, expected, filepath, msg=None): + """Asserts that the contents of the file given by 'filepath' are equal to + the string given by 'expected'. 'msg' gives an optional message to be + printed if the assertion fails.""" + + with open(filepath, "r") as myfile: + contents = myfile.read() + + self.assertEqual(expected, contents, msg=msg)
+ + + # ======================================================================== + # Begin actual tests + # ======================================================================== + +
+[docs] + def test_append(self): + # Define some variables + component = "foo" + # deliberately exclude new line from file contents, to make sure that's + # handled correctly + orig_contents = "bar = 42" + contents_to_append = "baz = 101" + + # Setup + filename = self.write_user_nl_file(component, orig_contents) + + # Exercise + user_nl_utils.append_to_user_nl_files( + caseroot=self._caseroot, component=component, contents=contents_to_append + ) + + # Verify + expected_contents = orig_contents + "\n" + contents_to_append + "\n" + self.assertFileContentsEqual( + expected_contents, os.path.join(self._caseroot, filename) + )
+ + +
+[docs] + def test_append_list(self): + # Define some variables + component = "foo" + # deliberately exclude new line from file contents, to make sure that's + # handled correctly + orig_contents = "bar = 42" + contents_to_append_1 = "baz = 101" + contents_to_append_2 = "qux = 987" + contents_to_append = [ + contents_to_append_1, + contents_to_append_2, + ] + + # Setup + filename = self.write_user_nl_file(component, orig_contents) + + # Exercise + user_nl_utils.append_to_user_nl_files( + caseroot=self._caseroot, component=component, contents=contents_to_append + ) + + # Verify + expected_contents = ( + orig_contents + + "\n" + + contents_to_append_1 + + "\n" + + contents_to_append_2 + + "\n" + ) + self.assertFileContentsEqual( + expected_contents, os.path.join(self._caseroot, filename) + )
+ + +
+[docs] + def test_append_multiple_files(self): + # Simulates a multi-instance test + component = "foo" + orig_contents1 = "bar = 42" + orig_contents2 = "bar = 17" + contents_to_append = "baz = 101" + + # Setup + filename1 = self.write_user_nl_file(component, orig_contents1, suffix="_0001") + filename2 = self.write_user_nl_file(component, orig_contents2, suffix="_0002") + + # Exercise + user_nl_utils.append_to_user_nl_files( + caseroot=self._caseroot, component=component, contents=contents_to_append + ) + + # Verify + expected_contents1 = orig_contents1 + "\n" + contents_to_append + "\n" + expected_contents2 = orig_contents2 + "\n" + contents_to_append + "\n" + self.assertFileContentsEqual( + expected_contents1, os.path.join(self._caseroot, filename1) + ) + self.assertFileContentsEqual( + expected_contents2, os.path.join(self._caseroot, filename2) + )
+ + +
+[docs] + def test_append_without_files_raises_exception(self): + # This test verifies that you get an exception if you call + # append_to_user_nl_files when there are no user_nl files of interest + + # Define some variables + component_exists = "foo" + component_for_append = "bar" + + # Setup + # Create file in caseroot for component_exists, but not for component_for_append + self.write_user_nl_file(component_exists, "irrelevant contents") + + self.assertRaisesRegex( + RuntimeError, + "No user_nl files found", + user_nl_utils.append_to_user_nl_files, + caseroot=self._caseroot, + component=component_for_append, + contents="irrelevant contents to append", + )
+
+ + + +if __name__ == "__main__": + unittest.main() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_utils.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_utils.html new file mode 100644 index 00000000000..04dcce4f01f --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_utils.html @@ -0,0 +1,685 @@ + + + + + + CIME.tests.test_unit_utils — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.tests.test_unit_utils

+#!/usr/bin/env python3
+
+import os
+import stat
+import shutil
+import sys
+import tempfile
+
+import unittest
+from unittest import mock
+from CIME.utils import (
+    indent_string,
+    run_and_log_case_status,
+    import_from_file,
+    _line_defines_python_function,
+    file_contains_python_function,
+    copy_globs,
+    import_and_run_sub_or_cmd,
+)
+
+
+
+[docs] +class TestIndentStr(unittest.TestCase): + """Test the indent_string function.""" + +
+[docs] + def test_indent_string_singleline(self): + """Test the indent_string function with a single-line string""" + mystr = "foo" + result = indent_string(mystr, 4) + expected = " foo" + self.assertEqual(expected, result)
+ + +
+[docs] + def test_indent_string_multiline(self): + """Test the indent_string function with a multi-line string""" + mystr = """hello +hi +goodbye +""" + result = indent_string(mystr, 2) + expected = """ hello + hi + goodbye +""" + self.assertEqual(expected, result)
+
+ + + +
+[docs] +class TestLineDefinesPythonFunction(unittest.TestCase): + """Tests of _line_defines_python_function""" + + # ------------------------------------------------------------------------ + # Tests of _line_defines_python_function that should return True + # ------------------------------------------------------------------------ + +
+[docs] + def test_def_foo(self): + """Test of a def of the function of interest""" + line = "def foo():" + self.assertTrue(_line_defines_python_function(line, "foo"))
+ + +
+[docs] + def test_def_foo_space(self): + """Test of a def of the function of interest, with an extra space before the parentheses""" + line = "def foo ():" + self.assertTrue(_line_defines_python_function(line, "foo"))
+ + +
+[docs] + def test_import_foo(self): + """Test of an import of the function of interest""" + line = "from bar.baz import foo" + self.assertTrue(_line_defines_python_function(line, "foo"))
+ + +
+[docs] + def test_import_foo_space(self): + """Test of an import of the function of interest, with trailing spaces""" + line = "from bar.baz import foo " + self.assertTrue(_line_defines_python_function(line, "foo"))
+ + +
+[docs] + def test_import_foo_then_others(self): + """Test of an import of the function of interest, along with others""" + line = "from bar.baz import foo, bar" + self.assertTrue(_line_defines_python_function(line, "foo"))
+ + +
+[docs] + def test_import_others_then_foo(self): + """Test of an import of the function of interest, after others""" + line = "from bar.baz import bar, foo" + self.assertTrue(_line_defines_python_function(line, "foo"))
+ + + # ------------------------------------------------------------------------ + # Tests of _line_defines_python_function that should return False + # ------------------------------------------------------------------------ + +
+[docs] + def test_def_barfoo(self): + """Test of a def of a different function""" + line = "def barfoo():" + self.assertFalse(_line_defines_python_function(line, "foo"))
+ + +
+[docs] + def test_def_foobar(self): + """Test of a def of a different function""" + line = "def foobar():" + self.assertFalse(_line_defines_python_function(line, "foo"))
+ + +
+[docs] + def test_def_foo_indented(self): + """Test of a def of the function of interest, but indented""" + line = " def foo():" + self.assertFalse(_line_defines_python_function(line, "foo"))
+ + +
+[docs] + def test_def_foo_no_parens(self): + """Test of a def of the function of interest, but without parentheses""" + line = "def foo:" + self.assertFalse(_line_defines_python_function(line, "foo"))
+ + +
+[docs] + def test_import_foo_indented(self): + """Test of an import of the function of interest, but indented""" + line = " from bar.baz import foo" + self.assertFalse(_line_defines_python_function(line, "foo"))
+ + +
+[docs] + def test_import_barfoo(self): + """Test of an import of a different function""" + line = "from bar.baz import barfoo" + self.assertFalse(_line_defines_python_function(line, "foo"))
+ + +
+[docs] + def test_import_foobar(self): + """Test of an import of a different function""" + line = "from bar.baz import foobar" + self.assertFalse(_line_defines_python_function(line, "foo"))
+
+ + + +
+[docs] +class TestFileContainsPythonFunction(unittest.TestCase): + """Tests of file_contains_python_function""" + +
+[docs] + def setUp(self): + self._workdir = tempfile.mkdtemp()
+ + +
+[docs] + def tearDown(self): + shutil.rmtree(self._workdir, ignore_errors=True)
+ + +
+[docs] + def create_test_file(self, contents): + """Creates a test file with the given contents, and returns the path to that file""" + + filepath = os.path.join(self._workdir, "testfile") + with open(filepath, "w") as fd: + fd.write(contents) + + return filepath
+ + +
+[docs] + def test_contains_correct_def_and_others(self): + """Test file_contains_python_function with a correct def mixed with other defs""" + contents = """ +def bar(): +def foo(): +def baz(): +""" + filepath = self.create_test_file(contents) + self.assertTrue(file_contains_python_function(filepath, "foo"))
+ + +
+[docs] + def test_does_not_contain_correct_def(self): + """Test file_contains_python_function without the correct def""" + contents = """ +def bar(): +def notfoo(): +def baz(): +""" + filepath = self.create_test_file(contents) + self.assertFalse(file_contains_python_function(filepath, "foo"))
+
+ + + +
+[docs] +class MockTime(object): + def __init__(self): + self._old = None + + def __enter__(self): + self._old = getattr(sys.modules["time"], "strftime") + setattr(sys.modules["time"], "strftime", lambda *args: "00:00:00 ") + + def __exit__(self, *args, **kwargs): + setattr(sys.modules["time"], "strftime", self._old)
+ + + +
+[docs] +def match_all_lines(data, lines): + for line in data: + for i, x in enumerate(lines): + if x == line: + lines.pop(i) + + continue + + if len(lines) == 0: + return True, [] + + return False, lines
+ + + +
+[docs] +class TestUtils(unittest.TestCase): +
+[docs] + def setUp(self): + self.base_func = lambda *args: None + + # pylint: disable=unused-argument + def _error_func(*args): + raise Exception("Something went wrong") + + self.error_func = _error_func
+ + +
+[docs] + def test_import_and_run_sub_or_cmd(self): + with self.assertRaisesRegex( + Exception, "ERROR: Could not find buildnml file for component test" + ): + import_and_run_sub_or_cmd( + "/tmp/buildnml", + "arg1 arg2 -vvv", + "buildnml", + (self, "arg1"), + "/tmp", + "test", + )
+ + +
+[docs] + @mock.patch("importlib.import_module") + def test_import_and_run_sub_or_cmd_cime_py(self, importmodule): + importmodule.side_effect = Exception("Module has a problem") + + with self.assertRaisesRegex(Exception, "Module has a problem") as e: + import_and_run_sub_or_cmd( + "/tmp/buildnml", + "arg1, arg2 -vvv", + "buildnml", + (self, "arg1"), + "/tmp", + "test", + ) + + # check that we avoid exception chaining + self.assertTrue(e.exception.__context__ is None)
+ + +
+[docs] + @mock.patch("importlib.import_module") + def test_import_and_run_sub_or_cmd_import(self, importmodule): + importmodule.side_effect = Exception("I am being imported") + + with self.assertRaisesRegex(Exception, "I am being imported") as e: + import_and_run_sub_or_cmd( + "/tmp/buildnml", + "arg1 arg2 -vvv", + "buildnml", + (self, "arg1"), + "/tmp", + "test", + ) + + # check that we avoid exception chaining + self.assertTrue(e.exception.__context__ is None)
+ + +
+[docs] + @mock.patch("os.path.isfile") + @mock.patch("CIME.utils.run_sub_or_cmd") + def test_import_and_run_sub_or_cmd_run(self, func, isfile): + isfile.return_value = True + + func.side_effect = Exception( + "ERROR: /tmp/buildnml arg1 arg2 -vvv FAILED, see above" + ) + + with self.assertRaisesRegex( + Exception, "ERROR: /tmp/buildnml arg1 arg2 -vvv FAILED, see above" + ): + import_and_run_sub_or_cmd( + "/tmp/buildnml", + "arg1 arg2 -vvv", + "buildnml", + (self, "arg1"), + "/tmp", + "test", + )
+ + +
+[docs] + @mock.patch("glob.glob") + @mock.patch("CIME.utils.safe_copy") + def test_copy_globs(self, safe_copy, glob): + glob.side_effect = [ + [], + ["/src/run/test.sh", "/src/run/.hidden.sh"], + [ + "/src/bld/test.nc", + ], + ] + + copy_globs(["CaseDocs/*", "run/*.sh", "bld/*.nc"], "/storage/output", "uid") + + safe_copy.assert_any_call( + "/src/run/test.sh", "/storage/output/test.sh.uid", preserve_meta=False + ) + safe_copy.assert_any_call( + "/src/run/.hidden.sh", "/storage/output/hidden.sh.uid", preserve_meta=False + ) + safe_copy.assert_any_call( + "/src/bld/test.nc", "/storage/output/test.nc.uid", preserve_meta=False + )
+ + +
+[docs] + def assertMatchAllLines(self, tempdir, test_lines): + with open(os.path.join(tempdir, "CaseStatus")) as fd: + data = fd.readlines() + + result, missing = match_all_lines(data, test_lines) + + error = [] + + if len(missing) != 0: + error.extend(["Missing Lines", ""]) + error.extend([x.rstrip("\n") for x in missing]) + error.extend(["", "Tempfile contents", ""]) + error.extend([x.rstrip("\n") for x in data]) + + self.assertTrue(result, msg="\n".join(error))
+ + +
+[docs] + def test_import_from_file(self): + with tempfile.NamedTemporaryFile() as fd: + fd.writelines( + [ + b"def test():\n", + b" return 'value'", + ] + ) + + fd.flush() + + module = import_from_file("test.py", fd.name) + + assert module.test() == "value"
+ + +
+[docs] + def test_run_and_log_case_status(self): + test_lines = [ + "00:00:00 default starting \n", + "00:00:00 default success \n", + ] + + with tempfile.TemporaryDirectory() as tempdir, MockTime(): + run_and_log_case_status(self.base_func, "default", caseroot=tempdir) + + self.assertMatchAllLines(tempdir, test_lines)
+ + +
+[docs] + def test_run_and_log_case_status_case_submit_on_batch(self): + test_lines = [ + "00:00:00 case.submit starting \n", + "00:00:00 case.submit success \n", + ] + + with tempfile.TemporaryDirectory() as tempdir, MockTime(): + run_and_log_case_status( + self.base_func, "case.submit", caseroot=tempdir, is_batch=True + ) + + self.assertMatchAllLines(tempdir, test_lines)
+ + +
+[docs] + def test_run_and_log_case_status_case_submit_no_batch(self): + test_lines = [ + "00:00:00 case.submit starting \n", + "00:00:00 case.submit success \n", + ] + + with tempfile.TemporaryDirectory() as tempdir, MockTime(): + run_and_log_case_status( + self.base_func, "case.submit", caseroot=tempdir, is_batch=False + ) + + self.assertMatchAllLines(tempdir, test_lines)
+ + +
+[docs] + def test_run_and_log_case_status_case_submit_error_on_batch(self): + test_lines = [ + "00:00:00 case.submit starting \n", + "00:00:00 case.submit error \n", + "Something went wrong\n", + ] + + with tempfile.TemporaryDirectory() as tempdir, MockTime(): + with self.assertRaises(Exception): + run_and_log_case_status( + self.error_func, "case.submit", caseroot=tempdir, is_batch=True + ) + + self.assertMatchAllLines(tempdir, test_lines)
+ + +
+[docs] + def test_run_and_log_case_status_custom_msg(self): + test_lines = [ + "00:00:00 default starting starting extra\n", + "00:00:00 default success success extra\n", + ] + + starting_func = mock.MagicMock(return_value="starting extra") + success_func = mock.MagicMock(return_value="success extra") + + def normal_func(): + return "data" + + with tempfile.TemporaryDirectory() as tempdir, MockTime(): + run_and_log_case_status( + normal_func, + "default", + custom_starting_msg_functor=starting_func, + custom_success_msg_functor=success_func, + caseroot=tempdir, + ) + + self.assertMatchAllLines(tempdir, test_lines) + + starting_func.assert_called_with() + success_func.assert_called_with("data")
+ + +
+[docs] + def test_run_and_log_case_status_custom_msg_error_on_batch(self): + test_lines = [ + "00:00:00 default starting starting extra\n", + "00:00:00 default success success extra\n", + ] + + starting_func = mock.MagicMock(return_value="starting extra") + success_func = mock.MagicMock(return_value="success extra") + + def error_func(): + raise Exception("Error") + + with tempfile.TemporaryDirectory() as tempdir, MockTime(), self.assertRaises( + Exception + ): + run_and_log_case_status( + error_func, + "default", + custom_starting_msg_functor=starting_func, + custom_success_msg_functor=success_func, + caseroot=tempdir, + ) + + self.assertMatchAllLines(tempdir, test_lines) + + starting_func.assert_called_with() + success_func.assert_not_called()
+ + +
+[docs] + def test_run_and_log_case_status_error(self): + test_lines = [ + "00:00:00 default starting \n", + "00:00:00 default error \n", + "Something went wrong\n", + ] + + with tempfile.TemporaryDirectory() as tempdir, MockTime(): + with self.assertRaises(Exception): + run_and_log_case_status(self.error_func, "default", caseroot=tempdir) + + self.assertMatchAllLines(tempdir, test_lines)
+
+ + + +if __name__ == "__main__": + unittest.main() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_xml_archive_base.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_xml_archive_base.html new file mode 100644 index 00000000000..e0bfdbcc236 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_xml_archive_base.html @@ -0,0 +1,320 @@ + + + + + + CIME.tests.test_unit_xml_archive_base — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_xml_archive_base

+#!/usr/bin/env python3
+
+import os
+import io
+import unittest
+import tempfile
+from contextlib import contextmanager
+from pathlib import Path
+from unittest import mock
+
+from CIME.XML.archive_base import ArchiveBase
+
+TEST_CONFIG = """<components version="2.0">
+  <comp_archive_spec compname="eam" compclass="atm">
+    <hist_file_extension>unique\.name\.unique.*</hist_file_extension>
+  </comp_archive_spec>
+</components>"""
+
+EXACT_TEST_CONFIG = """<components version="2.0">
+  <comp_archive_spec compname="eam" compclass="atm">
+    <hist_file_extension>unique\.name\.unique\.nc</hist_file_extension>
+  </comp_archive_spec>
+</components>"""
+
+EXCLUDE_TEST_CONFIG = """<components version="2.0">
+  <comp_archive_spec compname="eam" compclass="atm">
+    <hist_file_extension>unique\.name\.unique\.nc</hist_file_extension>
+  </comp_archive_spec>
+  <comp_archive_spec compname="cpl" compclass="drv" exclude_testing="True">
+    <hist_file_extension>unique\.name\.unique\.nc</hist_file_extension>
+  </comp_archive_spec>
+  <comp_archive_spec compname="mpasso" compclass="drv" exclude_testing="False">
+    <hist_file_extension>unique\.name\.unique\.nc</hist_file_extension>
+  </comp_archive_spec>
+</components>"""
+
+
+
+[docs] +class TestXMLArchiveBase(unittest.TestCase): + @contextmanager + def _setup_environment(self, test_files): + with tempfile.TemporaryDirectory() as temp_dir: + for x in test_files: + Path(temp_dir, x).touch() + + yield temp_dir + +
+[docs] + def test_exclude_testing(self): + archiver = ArchiveBase() + + archiver.read_fd(io.StringIO(EXCLUDE_TEST_CONFIG)) + + # no attribute + assert not archiver.exclude_testing("eam") + + # not in config + assert not archiver.exclude_testing("mpassi") + + # set false + assert not archiver.exclude_testing("mpasso") + + # set true + assert archiver.exclude_testing("cpl")
+ + +
+[docs] + def test_match_files(self): + archiver = ArchiveBase() + + archiver.read_fd(io.StringIO(TEST_CONFIG)) + + fail_files = [ + "othername.eam.unique.name.unique.0001-01-01-0000.nc", # casename mismatch + "casename.satm.unique.name.unique.0001-01-01-0000.nc", # model (component?) mismatch + "casename.eam.0001-01-01-0000.nc", # missing hist_file_extension + "casename.eam.unique.name.unique.0001-01-01-0000.nc", + "casename.eam.unique.name.unique.some.extra.0001-01-01-0000.nc", + ] + + test_files = [ + "casename.eam1.unique.name.unique.0001-01-01-0000.nc", + "casename.eam1_.unique.name.unique.0001-01-01-0000.nc", + "casename.eam_.unique.name.unique.0001-01-01-0000.nc", + "casename.eam1990.unique.name.unique.0001-01-01-0000.nc", + "casename.eam_1990.unique.name.unique.0001-01-01-0000.nc", + "casename.eam1_1990.unique.name.unique.0001-01-01-0000.nc", + "casename.eam11990.unique.name.unique.0001-01-01-0000.nc", + "casename.eam.unique.name.unique.0001-01-01-0000.nc", + "casename.eam.unique.name.unique.some.extra.0001-01-01-0000.nc", + "casename.eam.unique.name.unique.0001-01-01-0000.nc.base", + "casename.eam.unique.name.unique.some.extra.0001-01-01-0000.nc.base", + ] + + with self._setup_environment(fail_files + test_files) as temp_dir: + hist_files = archiver.get_all_hist_files( + "casename", "eam", from_dir=temp_dir + ) + + test_files.sort() + hist_files.sort() + + assert len(hist_files) == len(test_files) + + # assert all match except first + for x, y in zip(test_files, hist_files): + assert x == y, f"{x} != {y}"
+ + +
+[docs] + def test_extension_included(self): + archiver = ArchiveBase() + + archiver.read_fd(io.StringIO(EXACT_TEST_CONFIG)) + + fail_files = [ + "othername.eam.unique.name.unique.0001-01-01-0000.nc", # casename mismatch + "casename.satm.unique.name.unique.0001-01-01-0000.nc", # model (component?) mismatch + "casename.eam.0001-01-01-0000.nc", # missing hist_file_extension + "casename.eam.unique.name.unique.0001-01-01-0000.nc", + "casename.eam.unique.name.unique.some.extra.0001-01-01-0000.nc", + "casename.eam.unique.name.unique.0001-01-01-0000.nc.base", + "casename.eam.unique.name.unique.some.extra.0001-01-01-0000.nc.base", + ] + + test_files = [ + "casename.eam1.unique.name.unique.nc", + "casename.eam1_.unique.name.unique.nc", + "casename.eam_.unique.name.unique.nc", + "casename.eam1990.unique.name.unique.nc", + "casename.eam_1990.unique.name.unique.nc", + "casename.eam1_1990.unique.name.unique.nc", + "casename.eam11990.unique.name.unique.nc", + "casename.eam.unique.name.unique.nc", + ] + + with self._setup_environment(fail_files + test_files) as temp_dir: + hist_files = archiver.get_all_hist_files( + "casename", "eam", suffix="nc", from_dir=temp_dir + ) + + test_files.sort() + hist_files.sort() + + assert len(hist_files) == len(test_files) + + # assert all match except first + for x, y in zip(test_files, hist_files): + assert x == y, f"{x} != {y}"
+ + +
+[docs] + def test_suffix(self): + archiver = ArchiveBase() + + archiver.read_fd(io.StringIO(TEST_CONFIG)) + + fail_files = [ + "othername.eam.unique.name.unique.0001-01-01-0000.nc", # casename mismatch + "casename.satm.unique.name.unique.0001-01-01-0000.nc", # model (component?) mismatch + "casename.eam.0001-01-01-0000.nc", # missing hist_file_extension + "casename.eam.unique.name.unique.0001-01-01-0000.nc", + "casename.eam.unique.name.unique.some.extra.0001-01-01-0000.nc", + # ensure these do not match when suffix is provided + "casename.eam1.unique.name.unique.0001-01-01-0000.nc", + "casename.eam1_.unique.name.unique.0001-01-01-0000.nc", + "casename.eam_.unique.name.unique.0001-01-01-0000.nc", + "casename.eam1990.unique.name.unique.0001-01-01-0000.nc", + "casename.eam_1990.unique.name.unique.0001-01-01-0000.nc", + "casename.eam1_1990.unique.name.unique.0001-01-01-0000.nc", + "casename.eam11990.unique.name.unique.0001-01-01-0000.nc", + "casename.eam.unique.name.unique.0001-01-01-0000.nc", + "casename.eam.unique.name.unique.some.extra.0001-01-01-0000.nc", + ] + + test_files = [ + "casename.eam.unique.name.unique.0001-01-01-0000.nc.base", + "casename.eam.unique.name.unique.some.extra.0001-01-01-0000.nc.base", + ] + + with self._setup_environment(fail_files + test_files) as temp_dir: + hist_files = archiver.get_all_hist_files( + "casename", "eam", suffix="base", from_dir=temp_dir + ) + + assert len(hist_files) == len(test_files) + + hist_files.sort() + test_files.sort() + + for x, y in zip(hist_files, test_files): + assert x == y, f"{x} != {y}"
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_xml_env_batch.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_xml_env_batch.html new file mode 100644 index 00000000000..8d185da9ad8 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_xml_env_batch.html @@ -0,0 +1,939 @@ + + + + + + CIME.tests.test_unit_xml_env_batch — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_xml_env_batch

+#!/usr/bin/env python3
+
+import os
+import unittest
+import tempfile
+from unittest import mock
+
+from CIME.utils import CIMEError
+from CIME.XML.env_batch import EnvBatch, get_job_deps
+
+# pylint: disable=unused-argument
+
+
+
+[docs] +class TestXMLEnvBatch(unittest.TestCase): +
+[docs] + @mock.patch("CIME.XML.env_batch.EnvBatch._submit_single_job") + def test_submit_jobs(self, _submit_single_job): + case = mock.MagicMock() + + case.get_value.side_effect = [ + False, + ] + + env_batch = EnvBatch() + + with self.assertRaises(CIMEError): + env_batch.submit_jobs(case)
+ + +
+[docs] + @mock.patch("CIME.XML.env_batch.os.path.isfile") + @mock.patch("CIME.XML.env_batch.get_batch_script_for_job") + @mock.patch("CIME.XML.env_batch.EnvBatch._submit_single_job") + def test_submit_jobs_dependency( + self, _submit_single_job, get_batch_script_for_job, isfile + ): + case = mock.MagicMock() + + case.get_env.return_value.get_jobs.return_value = [ + "case.build", + "case.run", + ] + + case.get_env.return_value.get_value.side_effect = [ + None, + "", + None, + "case.build", + ] + + case.get_value.side_effect = [ + False, + ] + + _submit_single_job.side_effect = ["0", "1"] + + isfile.return_value = True + + get_batch_script_for_job.side_effect = [".case.build", ".case.run"] + + env_batch = EnvBatch() + + depid = env_batch.submit_jobs(case) + + _submit_single_job.assert_any_call( + case, + "case.build", + skip_pnl=False, + resubmit_immediate=False, + dep_jobs=[], + allow_fail=False, + no_batch=False, + mail_user=None, + mail_type=None, + batch_args=None, + dry_run=False, + workflow=True, + ) + _submit_single_job.assert_any_call( + case, + "case.run", + skip_pnl=False, + resubmit_immediate=False, + dep_jobs=[ + "0", + ], + allow_fail=False, + no_batch=False, + mail_user=None, + mail_type=None, + batch_args=None, + dry_run=False, + workflow=True, + ) + assert depid == {"case.build": "0", "case.run": "1"}
+ + +
+[docs] + @mock.patch("CIME.XML.env_batch.os.path.isfile") + @mock.patch("CIME.XML.env_batch.get_batch_script_for_job") + @mock.patch("CIME.XML.env_batch.EnvBatch._submit_single_job") + def test_submit_jobs_single( + self, _submit_single_job, get_batch_script_for_job, isfile + ): + case = mock.MagicMock() + + case.get_env.return_value.get_jobs.return_value = [ + "case.run", + ] + + case.get_env.return_value.get_value.return_value = None + + case.get_value.side_effect = [ + False, + ] + + _submit_single_job.return_value = "0" + + isfile.return_value = True + + get_batch_script_for_job.side_effect = [ + ".case.run", + ] + + env_batch = EnvBatch() + + depid = env_batch.submit_jobs(case) + + _submit_single_job.assert_any_call( + case, + "case.run", + skip_pnl=False, + resubmit_immediate=False, + dep_jobs=[], + allow_fail=False, + no_batch=False, + mail_user=None, + mail_type=None, + batch_args=None, + dry_run=False, + workflow=True, + ) + assert depid == {"case.run": "0"}
+ + +
+[docs] + def test_get_job_deps(self): + # no jobs + job_deps = get_job_deps("", {}) + + assert job_deps == [] + + # dependency doesn't exist + job_deps = get_job_deps("case.run", {}) + + assert job_deps == [] + + job_deps = get_job_deps("case.run", {"case.run": 0}) + + assert job_deps == [ + "0", + ] + + job_deps = get_job_deps( + "case.run case.post_run_io", {"case.run": 0, "case.post_run_io": 1} + ) + + assert job_deps == ["0", "1"] + + # old syntax + job_deps = get_job_deps("case.run and case.post_run_io", {"case.run": 0}) + + assert job_deps == [ + "0", + ] + + # old syntax + job_deps = get_job_deps( + "(case.run and case.post_run_io) or case.test", {"case.run": 0} + ) + + assert job_deps == [ + "0", + ] + + job_deps = get_job_deps("", {}, user_prereq="2") + + assert job_deps == [ + "2", + ] + + job_deps = get_job_deps("", {}, prev_job="1") + + assert job_deps == [ + "1", + ]
+ + +
+[docs] + def test_get_submit_args_job_queue(self): + with tempfile.NamedTemporaryFile() as tfile: + tfile.write( + b"""<?xml version="1.0"?> +<file id="env_batch.xml" version="2.0"> + <header> + These variables may be changed anytime during a run, they + control arguments to the batch submit command. + </header> + <group id="config_batch"> + <entry id="BATCH_SYSTEM" value="slurm"> + <type>char</type> + <valid_values>miller_slurm,nersc_slurm,lc_slurm,moab,pbs,lsf,slurm,cobalt,cobalt_theta,none</valid_values> + <desc>The batch system type to use for this machine.</desc> + </entry> + </group> + <group id="job_submission"> + <entry id="PROJECT_REQUIRED" value="FALSE"> + <type>logical</type> + <valid_values>TRUE,FALSE</valid_values> + <desc>whether the PROJECT value is required on this machine</desc> + </entry> + </group> + <batch_system MACH="docker" type="slurm"> + <submit_args> + <argument>-w default</argument> + <argument job_queue="short">-w short</argument> + <argument job_queue="long">-w long</argument> + <argument>-A $VARIABLE_THAT_DOES_NOT_EXIST</argument> + </submit_args> + <queues> + <queue walltimemax="01:00:00" nodemax="1">long</queue> + <queue walltimemax="00:30:00" nodemax="1" default="true">short</queue> + </queues> + </batch_system> +</file> +""" + ) + + tfile.seek(0) + + batch = EnvBatch(infile=tfile.name) + + case = mock.MagicMock() + + case.get_value.side_effect = ("long", "long", None) + + case.get_resolved_value.return_value = None + + case.filename = mock.PropertyMock(return_value=tfile.name) + + submit_args = batch.get_submit_args(case, ".case.run") + + expected_args = " -w default -w long" + assert submit_args == expected_args
+ + +
+[docs] + @mock.patch.dict(os.environ, {"TEST": "GOOD"}) + def test_get_submit_args(self): + with tempfile.NamedTemporaryFile() as tfile: + tfile.write( + b"""<?xml version="1.0"?> +<file id="env_batch.xml" version="2.0"> + <header> + These variables may be changed anytime during a run, they + control arguments to the batch submit command. + </header> + <group id="config_batch"> + <entry id="BATCH_SYSTEM" value="slurm"> + <type>char</type> + <valid_values>miller_slurm,nersc_slurm,lc_slurm,moab,pbs,lsf,slurm,cobalt,cobalt_theta,none</valid_values> + <desc>The batch system type to use for this machine.</desc> + </entry> + </group> + <group id="job_submission"> + <entry id="PROJECT_REQUIRED" value="FALSE"> + <type>logical</type> + <valid_values>TRUE,FALSE</valid_values> + <desc>whether the PROJECT value is required on this machine</desc> + </entry> + </group> + <batch_system type="slurm"> + <batch_query per_job_arg="-j">squeue</batch_query> + <batch_submit>sbatch</batch_submit> + <batch_cancel>scancel</batch_cancel> + <batch_directive>#SBATCH</batch_directive> + <jobid_pattern>(\d+)$</jobid_pattern> + <depend_string>--dependency=afterok:jobid</depend_string> + <depend_allow_string>--dependency=afterany:jobid</depend_allow_string> + <depend_separator>:</depend_separator> + <walltime_format>%H:%M:%S</walltime_format> + <batch_mail_flag>--mail-user</batch_mail_flag> + <batch_mail_type_flag>--mail-type</batch_mail_type_flag> + <batch_mail_type>none, all, begin, end, fail</batch_mail_type> + <submit_args> + <arg flag="--time" name="$JOB_WALLCLOCK_TIME"/> + <arg flag="-p" name="$JOB_QUEUE"/> + <arg flag="--account" name="$PROJECT"/> + <arg flag="--no-arg" /> + <arg flag="--path" name="$$ENV{TEST}" /> + </submit_args> + <directives> + <directive> --job-name={{ job_id }}</directive> + <directive> --nodes={{ num_nodes }}</directive> + <directive> --output={{ job_id }}.%j </directive> + <directive> --exclusive </directive> + </directives> + </batch_system> + <batch_system 
MACH="docker" type="slurm"> + <submit_args> + <argument>-w docker</argument> + </submit_args> + <queues> + <queue walltimemax="01:00:00" nodemax="1">long</queue> + <queue walltimemax="00:30:00" nodemax="1" default="true">short</queue> + </queues> + </batch_system> +</file> +""" + ) + + tfile.seek(0) + + batch = EnvBatch(infile=tfile.name) + + case = mock.MagicMock() + + case.get_value.side_effect = [ + os.path.dirname(tfile.name), + "00:30:00", + "long", + "CIME", + "/test", + ] + + def my_get_resolved_value(val): + return val + + # value for --path + case.get_resolved_value.side_effect = my_get_resolved_value + + case.filename = mock.PropertyMock(return_value=tfile.name) + + submit_args = batch.get_submit_args(case, ".case.run") + + expected_args = " --time 00:30:00 -p long --account CIME --no-arg --path /test -w docker" + + assert submit_args == expected_args
+ + +
+[docs] + @mock.patch("CIME.XML.env_batch.EnvBatch.get") + def test_get_queue_specs(self, get): + node = mock.MagicMock() + + batch = EnvBatch() + + get.side_effect = [ + "1", + "1", + None, + None, + "case.run", + "08:00:00", + "05:00:00", + "12:00:00", + "false", + ] + + ( + nodemin, + nodemax, + jobname, + walltimedef, + walltimemin, + walltimemax, + jobmin, + jobmax, + strict, + ) = batch.get_queue_specs(node) + + self.assertTrue(nodemin == 1) + self.assertTrue(nodemax == 1) + self.assertTrue(jobname == "case.run") + self.assertTrue(walltimedef == "08:00:00") + self.assertTrue(walltimemin == "05:00:00") + self.assertTrue(walltimemax == "12:00:00") + self.assertTrue(jobmin == None) + self.assertTrue(jobmax == None) + self.assertFalse(strict)
+ + +
+[docs] + @mock.patch("CIME.XML.env_batch.EnvBatch.text", return_value="default") + # nodemin, nodemax, jobname, walltimedef, walltimemin, walltimemax, jobmin, jobmax, strict + @mock.patch( + "CIME.XML.env_batch.EnvBatch.get_queue_specs", + return_value=[ + 1, + 1, + "case.run", + "10:00:00", + "08:00:00", + "12:00:00", + 1, + 1, + False, + ], + ) + @mock.patch("CIME.XML.env_batch.EnvBatch.select_best_queue") + @mock.patch("CIME.XML.env_batch.EnvBatch.get_default_queue") + def test_set_job_defaults_honor_walltimemax( + self, get_default_queue, select_best_queue, get_queue_specs, text + ): + case = mock.MagicMock() + + batch_jobs = [ + ( + "case.run", + { + "template": "template.case.run", + "prereq": "$BUILD_COMPLETE and not $TEST", + }, + ) + ] + + def get_value(*args, **kwargs): + if args[0] == "USER_REQUESTED_WALLTIME": + return "20:00:00" + + return mock.MagicMock() + + case.get_value = get_value + + case.get_env.return_value.get_jobs.return_value = ["case.run"] + + batch = EnvBatch() + + batch.set_job_defaults(batch_jobs, case) + + env_workflow = case.get_env.return_value + + env_workflow.set_value.assert_any_call( + "JOB_QUEUE", "default", subgroup="case.run", ignore_type=False + ) + env_workflow.set_value.assert_any_call( + "JOB_WALLCLOCK_TIME", "20:00:00", subgroup="case.run" + )
+ + +
+[docs] + @mock.patch("CIME.XML.env_batch.EnvBatch.text", return_value="default") + # nodemin, nodemax, jobname, walltimedef, walltimemin, walltimemax, jobmin, jobmax, strict + @mock.patch( + "CIME.XML.env_batch.EnvBatch.get_queue_specs", + return_value=[ + 1, + 1, + "case.run", + "10:00:00", + "08:00:00", + "12:00:00", + 1, + 1, + False, + ], + ) + @mock.patch("CIME.XML.env_batch.EnvBatch.select_best_queue") + @mock.patch("CIME.XML.env_batch.EnvBatch.get_default_queue") + def test_set_job_defaults_honor_walltimemin( + self, get_default_queue, select_best_queue, get_queue_specs, text + ): + case = mock.MagicMock() + + batch_jobs = [ + ( + "case.run", + { + "template": "template.case.run", + "prereq": "$BUILD_COMPLETE and not $TEST", + }, + ) + ] + + def get_value(*args, **kwargs): + if args[0] == "USER_REQUESTED_WALLTIME": + return "05:00:00" + + return mock.MagicMock() + + case.get_value = get_value + + case.get_env.return_value.get_jobs.return_value = ["case.run"] + + batch = EnvBatch() + + batch.set_job_defaults(batch_jobs, case) + + env_workflow = case.get_env.return_value + + env_workflow.set_value.assert_any_call( + "JOB_QUEUE", "default", subgroup="case.run", ignore_type=False + ) + env_workflow.set_value.assert_any_call( + "JOB_WALLCLOCK_TIME", "05:00:00", subgroup="case.run" + )
+ + +
+[docs] + @mock.patch("CIME.XML.env_batch.EnvBatch.text", return_value="default") + # nodemin, nodemax, jobname, walltimedef, walltimemin, walltimemax, jobmin, jobmax, strict + @mock.patch( + "CIME.XML.env_batch.EnvBatch.get_queue_specs", + return_value=[ + 1, + 1, + "case.run", + "10:00:00", + "08:00:00", + "12:00:00", + 1, + 1, + False, + ], + ) + @mock.patch("CIME.XML.env_batch.EnvBatch.select_best_queue") + @mock.patch("CIME.XML.env_batch.EnvBatch.get_default_queue") + def test_set_job_defaults_user_walltime( + self, get_default_queue, select_best_queue, get_queue_specs, text + ): + case = mock.MagicMock() + + batch_jobs = [ + ( + "case.run", + { + "template": "template.case.run", + "prereq": "$BUILD_COMPLETE and not $TEST", + }, + ) + ] + + def get_value(*args, **kwargs): + if args[0] == "USER_REQUESTED_WALLTIME": + return "10:00:00" + + return mock.MagicMock() + + case.get_value = get_value + + case.get_env.return_value.get_jobs.return_value = ["case.run"] + + batch = EnvBatch() + + batch.set_job_defaults(batch_jobs, case) + + env_workflow = case.get_env.return_value + + env_workflow.set_value.assert_any_call( + "JOB_QUEUE", "default", subgroup="case.run", ignore_type=False + ) + env_workflow.set_value.assert_any_call( + "JOB_WALLCLOCK_TIME", "10:00:00", subgroup="case.run" + )
+ + +
+[docs] + @mock.patch("CIME.XML.env_batch.EnvBatch.text", return_value="default") + # nodemin, nodemax, jobname, walltimedef, walltimemin, walltimemax, jobmin, jobmax, strict + @mock.patch( + "CIME.XML.env_batch.EnvBatch.get_queue_specs", + return_value=[ + 1, + 1, + "case.run", + "10:00:00", + "05:00:00", + None, + 1, + 1, + False, + ], + ) + @mock.patch("CIME.XML.env_batch.EnvBatch.select_best_queue") + @mock.patch("CIME.XML.env_batch.EnvBatch.get_default_queue") + def test_set_job_defaults_walltimemax_none( + self, get_default_queue, select_best_queue, get_queue_specs, text + ): + case = mock.MagicMock() + + batch_jobs = [ + ( + "case.run", + { + "template": "template.case.run", + "prereq": "$BUILD_COMPLETE and not $TEST", + }, + ) + ] + + def get_value(*args, **kwargs): + if args[0] == "USER_REQUESTED_WALLTIME": + return "08:00:00" + + return mock.MagicMock() + + case.get_value = get_value + + case.get_env.return_value.get_jobs.return_value = ["case.run"] + + batch = EnvBatch() + + batch.set_job_defaults(batch_jobs, case) + + env_workflow = case.get_env.return_value + + env_workflow.set_value.assert_any_call( + "JOB_QUEUE", "default", subgroup="case.run", ignore_type=False + ) + env_workflow.set_value.assert_any_call( + "JOB_WALLCLOCK_TIME", "08:00:00", subgroup="case.run" + )
+ + +
+[docs] + @mock.patch("CIME.XML.env_batch.EnvBatch.text", return_value="default") + # nodemin, nodemax, jobname, walltimedef, walltimemin, walltimemax, jobmin, jobmax, strict + @mock.patch( + "CIME.XML.env_batch.EnvBatch.get_queue_specs", + return_value=[ + 1, + 1, + "case.run", + "10:00:00", + None, + "12:00:00", + 1, + 1, + False, + ], + ) + @mock.patch("CIME.XML.env_batch.EnvBatch.select_best_queue") + @mock.patch("CIME.XML.env_batch.EnvBatch.get_default_queue") + def test_set_job_defaults_walltimemin_none( + self, get_default_queue, select_best_queue, get_queue_specs, text + ): + case = mock.MagicMock() + + batch_jobs = [ + ( + "case.run", + { + "template": "template.case.run", + "prereq": "$BUILD_COMPLETE and not $TEST", + }, + ) + ] + + def get_value(*args, **kwargs): + if args[0] == "USER_REQUESTED_WALLTIME": + return "08:00:00" + + return mock.MagicMock() + + case.get_value = get_value + + case.get_env.return_value.get_jobs.return_value = ["case.run"] + + batch = EnvBatch() + + batch.set_job_defaults(batch_jobs, case) + + env_workflow = case.get_env.return_value + + env_workflow.set_value.assert_any_call( + "JOB_QUEUE", "default", subgroup="case.run", ignore_type=False + ) + env_workflow.set_value.assert_any_call( + "JOB_WALLCLOCK_TIME", "08:00:00", subgroup="case.run" + )
+ + +
+[docs] + @mock.patch("CIME.XML.env_batch.EnvBatch.text", return_value="default") + # nodemin, nodemax, jobname, walltimedef, walltimemin, walltimemax, jobmin, jobmax, strict + @mock.patch( + "CIME.XML.env_batch.EnvBatch.get_queue_specs", + return_value=[ + 1, + 1, + "case.run", + "10:00:00", + "08:00:00", + "12:00:00", + 1, + 1, + False, + ], + ) + @mock.patch("CIME.XML.env_batch.EnvBatch.select_best_queue") + @mock.patch("CIME.XML.env_batch.EnvBatch.get_default_queue") + def test_set_job_defaults_walltimedef( + self, get_default_queue, select_best_queue, get_queue_specs, text + ): + case = mock.MagicMock() + + batch_jobs = [ + ( + "case.run", + { + "template": "template.case.run", + "prereq": "$BUILD_COMPLETE and not $TEST", + }, + ) + ] + + def get_value(*args, **kwargs): + if args[0] == "USER_REQUESTED_WALLTIME": + return None + + return mock.MagicMock() + + case.get_value = get_value + + case.get_env.return_value.get_jobs.return_value = ["case.run"] + + batch = EnvBatch() + + batch.set_job_defaults(batch_jobs, case) + + env_workflow = case.get_env.return_value + + env_workflow.set_value.assert_any_call( + "JOB_QUEUE", "default", subgroup="case.run", ignore_type=False + ) + env_workflow.set_value.assert_any_call( + "JOB_WALLCLOCK_TIME", "10:00:00", subgroup="case.run" + )
+ + +
+[docs] + @mock.patch("CIME.XML.env_batch.EnvBatch.text", return_value="default") + # nodemin, nodemax, jobname, walltimedef, walltimemin, walltimemax, jobmin, jobmax, strict + @mock.patch( + "CIME.XML.env_batch.EnvBatch.get_queue_specs", + return_value=[ + 1, + 1, + "case.run", + None, + "08:00:00", + "12:00:00", + 1, + 1, + False, + ], + ) + @mock.patch("CIME.XML.env_batch.EnvBatch.select_best_queue") + @mock.patch("CIME.XML.env_batch.EnvBatch.get_default_queue") + def test_set_job_defaults( + self, get_default_queue, select_best_queue, get_queue_specs, text + ): + case = mock.MagicMock() + + batch_jobs = [ + ( + "case.run", + { + "template": "template.case.run", + "prereq": "$BUILD_COMPLETE and not $TEST", + }, + ) + ] + + def get_value(*args, **kwargs): + if args[0] == "USER_REQUESTED_WALLTIME": + return None + + return mock.MagicMock() + + case.get_value = get_value + + case.get_env.return_value.get_jobs.return_value = ["case.run"] + + batch = EnvBatch() + + batch.set_job_defaults(batch_jobs, case) + + env_workflow = case.get_env.return_value + + env_workflow.set_value.assert_any_call( + "JOB_QUEUE", "default", subgroup="case.run", ignore_type=False + ) + env_workflow.set_value.assert_any_call( + "JOB_WALLCLOCK_TIME", "12:00:00", subgroup="case.run" + )
+
+ + + +if __name__ == "__main__": + unittest.main() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_xml_env_mach_specific.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_xml_env_mach_specific.html new file mode 100644 index 00000000000..bbec27527d3 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_xml_env_mach_specific.html @@ -0,0 +1,493 @@ + + + + + + CIME.tests.test_unit_xml_env_mach_specific — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_xml_env_mach_specific

+#!/usr/bin/env python3
+
+import unittest
+import tempfile
+from unittest import mock
+
+from CIME import utils
+from CIME.XML.env_mach_specific import EnvMachSpecific
+
+# pylint: disable=unused-argument
+
+
+
+[docs] +class TestXMLEnvMachSpecific(unittest.TestCase): +
+[docs] + def test_aprun_get_args(self): + with tempfile.NamedTemporaryFile() as temp: + temp.write( + b"""<?xml version="1.0"?> +<file id="env_mach_specific.xml" version="2.0"> + <header> + These variables control the machine dependent environment including + the paths to compilers and libraries external to cime such as netcdf, + environment variables for use in the running job should also be set here. + </header> + <group id="compliant_values"> + <entry id="run_exe" value="${EXEROOT}/e3sm.exe "> + <type>char</type> + <desc>executable name</desc> + </entry> + <entry id="run_misc_suffix" value=" &gt;&gt; e3sm.log.$LID 2&gt;&amp;1 "> + <type>char</type> + <desc>redirect for job output</desc> + </entry> + </group> + <module_system type="none"/> + <environment_variables> + <env name="OMPI_ALLOW_RUN_AS_ROOT">1</env> + <env name="OMPI_ALLOW_RUN_AS_ROOT_CONFIRM">1</env> + </environment_variables> + <mpirun mpilib="openmpi"> + <aprun_mode>override</aprun_mode> + <executable>aprun</executable> + <arguments> + <arg name="default_per">-j 10</arg> + <arg name="ntasks" position="global">-n {{ total_tasks }}</arg> + <arg name="oversubscribe" position="per">--oversubscribe</arg> + </arguments> + </mpirun> +</file> +""" + ) + temp.seek(0) + + mach_specific = EnvMachSpecific(infile=temp.name) + + attribs = {"compiler": "gnu", "mpilib": "openmpi", "threaded": False} + + case = mock.MagicMock() + + type(case).total_tasks = mock.PropertyMock(return_value=4) + + extra_args = mach_specific.get_aprun_args(case, attribs, "case.run") + + expected_args = { + "-j 10": {"position": "per"}, + "--oversubscribe": {"position": "per"}, + "-n 4": {"position": "global"}, + } + + assert extra_args == expected_args
+ + +
+[docs] + def test_get_aprun_mode_not_valid(self): + with tempfile.NamedTemporaryFile() as temp: + temp.write( + b"""<?xml version="1.0"?> +<file id="env_mach_specific.xml" version="2.0"> + <header> + These variables control the machine dependent environment including + the paths to compilers and libraries external to cime such as netcdf, + environment variables for use in the running job should also be set here. + </header> + <group id="compliant_values"> + <entry id="run_exe" value="${EXEROOT}/e3sm.exe "> + <type>char</type> + <desc>executable name</desc> + </entry> + <entry id="run_misc_suffix" value=" &gt;&gt; e3sm.log.$LID 2&gt;&amp;1 "> + <type>char</type> + <desc>redirect for job output</desc> + </entry> + </group> + <module_system type="none"/> + <environment_variables> + <env name="OMPI_ALLOW_RUN_AS_ROOT">1</env> + <env name="OMPI_ALLOW_RUN_AS_ROOT_CONFIRM">1</env> + </environment_variables> + <mpirun mpilib="openmpi"> + <aprun_mode>custom</aprun_mode> + <executable>aprun</executable> + <arguments> + <arg name="ntasks">-n {{ total_tasks }}</arg> + <arg name="oversubscribe">--oversubscribe</arg> + </arguments> + </mpirun> +</file> +""" + ) + temp.seek(0) + + mach_specific = EnvMachSpecific(infile=temp.name) + + attribs = {"compiler": "gnu", "mpilib": "openmpi", "threaded": False} + + with self.assertRaises(utils.CIMEError) as e: + mach_specific.get_aprun_mode(attribs) + + assert ( + str(e.exception) + == "ERROR: Value 'custom' for \"aprun_mode\" is not valid, options are 'ignore, default, override'" + )
+ + +
+[docs] + def test_get_aprun_mode_user_defined(self): + with tempfile.NamedTemporaryFile() as temp: + temp.write( + b"""<?xml version="1.0"?> +<file id="env_mach_specific.xml" version="2.0"> + <header> + These variables control the machine dependent environment including + the paths to compilers and libraries external to cime such as netcdf, + environment variables for use in the running job should also be set here. + </header> + <group id="compliant_values"> + <entry id="run_exe" value="${EXEROOT}/e3sm.exe "> + <type>char</type> + <desc>executable name</desc> + </entry> + <entry id="run_misc_suffix" value=" &gt;&gt; e3sm.log.$LID 2&gt;&amp;1 "> + <type>char</type> + <desc>redirect for job output</desc> + </entry> + </group> + <module_system type="none"/> + <environment_variables> + <env name="OMPI_ALLOW_RUN_AS_ROOT">1</env> + <env name="OMPI_ALLOW_RUN_AS_ROOT_CONFIRM">1</env> + </environment_variables> + <mpirun mpilib="openmpi"> + <aprun_mode>default</aprun_mode> + <executable>aprun</executable> + <arguments> + <arg name="ntasks">-n {{ total_tasks }}</arg> + <arg name="oversubscribe">--oversubscribe</arg> + </arguments> + </mpirun> +</file> +""" + ) + temp.seek(0) + + mach_specific = EnvMachSpecific(infile=temp.name) + + attribs = {"compiler": "gnu", "mpilib": "openmpi", "threaded": False} + + aprun_mode = mach_specific.get_aprun_mode(attribs) + + assert aprun_mode == "default"
+ + +
+[docs] + def test_get_aprun_mode_default(self): + with tempfile.NamedTemporaryFile() as temp: + temp.write( + b"""<?xml version="1.0"?> +<file id="env_mach_specific.xml" version="2.0"> + <header> + These variables control the machine dependent environment including + the paths to compilers and libraries external to cime such as netcdf, + environment variables for use in the running job should also be set here. + </header> + <group id="compliant_values"> + <entry id="run_exe" value="${EXEROOT}/e3sm.exe "> + <type>char</type> + <desc>executable name</desc> + </entry> + <entry id="run_misc_suffix" value=" &gt;&gt; e3sm.log.$LID 2&gt;&amp;1 "> + <type>char</type> + <desc>redirect for job output</desc> + </entry> + </group> + <module_system type="none"/> + <environment_variables> + <env name="OMPI_ALLOW_RUN_AS_ROOT">1</env> + <env name="OMPI_ALLOW_RUN_AS_ROOT_CONFIRM">1</env> + </environment_variables> + <mpirun mpilib="openmpi"> + <executable>aprun</executable> + <arguments> + <arg name="ntasks">-n {{ total_tasks }}</arg> + <arg name="oversubscribe">--oversubscribe</arg> + </arguments> + </mpirun> +</file> +""" + ) + temp.seek(0) + + mach_specific = EnvMachSpecific(infile=temp.name) + + attribs = {"compiler": "gnu", "mpilib": "openmpi", "threaded": False} + + aprun_mode = mach_specific.get_aprun_mode(attribs) + + assert aprun_mode == "default"
+ + +
+[docs] + def test_find_best_mpirun_match(self): + with tempfile.NamedTemporaryFile() as temp: + temp.write( + b"""<?xml version="1.0"?> +<file id="env_mach_specific.xml" version="2.0"> + <header> + These variables control the machine dependent environment including + the paths to compilers and libraries external to cime such as netcdf, + environment variables for use in the running job should also be set here. + </header> + <group id="compliant_values"> + <entry id="run_exe" value="${EXEROOT}/e3sm.exe "> + <type>char</type> + <desc>executable name</desc> + </entry> + <entry id="run_misc_suffix" value=" &gt;&gt; e3sm.log.$LID 2&gt;&amp;1 "> + <type>char</type> + <desc>redirect for job output</desc> + </entry> + </group> + <module_system type="none"/> + <environment_variables> + <env name="OMPI_ALLOW_RUN_AS_ROOT">1</env> + <env name="OMPI_ALLOW_RUN_AS_ROOT_CONFIRM">1</env> + </environment_variables> + <mpirun mpilib="openmpi"> + <executable>aprun</executable> + <arguments> + <arg name="ntasks">-n {{ total_tasks }}</arg> + <arg name="oversubscribe">--oversubscribe</arg> + </arguments> + </mpirun> + <mpirun mpilib="openmpi" compiler="gnu"> + <executable>srun</executable> + </mpirun> +</file> +""" + ) + temp.seek(0) + + mach_specific = EnvMachSpecific(infile=temp.name) + + mock_case = mock.MagicMock() + + type(mock_case).total_tasks = mock.PropertyMock(return_value=4) + + attribs = {"compiler": "gnu", "mpilib": "openmpi", "threaded": False} + + executable, args, run_exe, run_misc_suffix = mach_specific.get_mpirun( + mock_case, attribs, "case.run" + ) + + assert executable == "srun" + assert args == [] + assert run_exe is None + assert run_misc_suffix is None
+ + +
+[docs] + def test_get_mpirun(self): + with tempfile.NamedTemporaryFile() as temp: + temp.write( + b"""<?xml version="1.0"?> +<file id="env_mach_specific.xml" version="2.0"> + <header> + These variables control the machine dependent environment including + the paths to compilers and libraries external to cime such as netcdf, + environment variables for use in the running job should also be set here. + </header> + <group id="compliant_values"> + <entry id="run_exe" value="${EXEROOT}/e3sm.exe "> + <type>char</type> + <desc>executable name</desc> + </entry> + <entry id="run_misc_suffix" value=" &gt;&gt; e3sm.log.$LID 2&gt;&amp;1 "> + <type>char</type> + <desc>redirect for job output</desc> + </entry> + </group> + <module_system type="none"/> + <environment_variables> + <env name="OMPI_ALLOW_RUN_AS_ROOT">1</env> + <env name="OMPI_ALLOW_RUN_AS_ROOT_CONFIRM">1</env> + </environment_variables> + <mpirun mpilib="openmpi"> + <executable>aprun</executable> + <arguments> + <arg name="ntasks">-n {{ total_tasks }}</arg> + <arg name="oversubscribe">--oversubscribe</arg> + </arguments> + </mpirun> +</file> +""" + ) + temp.seek(0) + + mach_specific = EnvMachSpecific(infile=temp.name) + + mock_case = mock.MagicMock() + + type(mock_case).total_tasks = mock.PropertyMock(return_value=4) + + attribs = {"compiler": "gnu", "mpilib": "openmpi", "threaded": False} + + executable, args, run_exe, run_misc_suffix = mach_specific.get_mpirun( + mock_case, attribs, "case.run" + ) + + assert executable == "aprun" + assert args == ["-n 4", "--oversubscribe"] + assert run_exe is None + assert run_misc_suffix is None
+ + +
+[docs] + @mock.patch("CIME.XML.env_mach_specific.EnvMachSpecific.get_optional_child") + @mock.patch("CIME.XML.env_mach_specific.EnvMachSpecific.text") + @mock.patch.dict("os.environ", {"TEST_VALUE": "/testexec"}) + def test_init_path(self, text, get_optional_child): + text.return_value = "$ENV{TEST_VALUE}/init/python" + + mach_specific = EnvMachSpecific() + + value = mach_specific.get_module_system_init_path("python") + + assert value == "/testexec/init/python"
+ + +
+[docs] + @mock.patch("CIME.XML.env_mach_specific.EnvMachSpecific.get_optional_child") + @mock.patch("CIME.XML.env_mach_specific.EnvMachSpecific.text") + @mock.patch.dict("os.environ", {"TEST_VALUE": "/testexec"}) + def test_cmd_path(self, text, get_optional_child): + text.return_value = "$ENV{TEST_VALUE}/python" + + mach_specific = EnvMachSpecific() + + value = mach_specific.get_module_system_cmd_path("python") + + assert value == "/testexec/python"
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_xml_machines.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_xml_machines.html new file mode 100644 index 00000000000..bfb27007666 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_xml_machines.html @@ -0,0 +1,302 @@ + + + + + + CIME.tests.test_unit_xml_machines — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_xml_machines

+import unittest
+import io
+
+from CIME.XML.machines import Machines
+
+MACHINE_TEST_XML = """<config_machines version="2.0">
+  <machine MACH="default">
+    <DESC>Some default machine definition</DESC>
+    <OS>ubuntu</OS>
+    <COMPILERS>gnu,intel</COMPILERS>
+    <MPILIBS>mpi-serial</MPILIBS>
+    <PROJECT>custom</PROJECT>
+    <SAVE_TIMING_DIR>/data/timings</SAVE_TIMING_DIR>
+    <SAVE_TIMING_DIR_PROJECTS>testing</SAVE_TIMING_DIR_PROJECTS>
+    <CIME_OUTPUT_ROOT>/data/scratch</CIME_OUTPUT_ROOT>
+    <DIN_LOC_ROOT>/data/inputdata</DIN_LOC_ROOT>
+    <DIN_LOC_ROOT_CLMFORC>/data/inputdata/atm/datm7</DIN_LOC_ROOT_CLMFORC>
+    <DOUT_S_ROOT>$CIME_OUTPUT_ROOT/archive/$CASE</DOUT_S_ROOT>
+    <BASELINE_ROOT>/data/baselines/$COMPILER</BASELINE_ROOT>
+    <CCSM_CPRNC>/data/tools/cprnc</CCSM_CPRNC>
+    <GMAKE_J>8</GMAKE_J>
+    <TESTS>e3sm_developer</TESTS>
+    <NTEST_PARALLEL_JOBS>4</NTEST_PARALLEL_JOBS>
+    <BATCH_SYSTEM>slurm</BATCH_SYSTEM>
+    <SUPPORTED_BY>developers</SUPPORTED_BY>
+    <MAX_TASKS_PER_NODE>8</MAX_TASKS_PER_NODE>
+    <MAX_MPITASKS_PER_NODE>8</MAX_MPITASKS_PER_NODE>
+    <PROJECT_REQUIRED>FALSE</PROJECT_REQUIRED>
+    <mpirun mpilib="default">
+      <executable>srun</executable>
+      <arguments>
+        <arg name="num_tasks">-n {{ total_tasks }} -N {{ num_nodes }} --kill-on-bad-exit </arg>
+        <arg name="thread_count">-c $SHELL{echo 128/ {{ tasks_per_node }} |bc}</arg>
+        <arg name="binding"> $SHELL{if [ 128 -ge `./xmlquery --value MAX_MPITASKS_PER_NODE` ]; then echo "--cpu_bind=cores"; else echo "--cpu_bind=threads";fi;} </arg>
+        <arg name="placement">-m plane={{ tasks_per_node }}</arg>
+      </arguments>
+    </mpirun>
+    <module_system type="module">
+      <init_path lang="perl">/opt/ubuntu/pe/modules/default/init/perl.pm</init_path>
+      <init_path lang="python">/opt/ubuntu/pe/modules/default/init/python.py</init_path>
+      <init_path lang="sh">/opt/ubuntu/pe/modules/default/init/sh</init_path>
+      <init_path lang="csh">/opt/ubuntu/pe/modules/default/init/csh</init_path>
+      <cmd_path lang="perl">/opt/ubuntu/pe/modules/default/bin/modulecmd perl</cmd_path>
+      <cmd_path lang="python">/opt/ubuntu/pe/modules/default/bin/modulecmd python</cmd_path>
+      <cmd_path lang="sh">module</cmd_path>
+      <cmd_path lang="csh">module</cmd_path>
+      <modules>
+        <command name="unload">ubuntupe</command>
+        <command name="unload">ubuntu-mpich</command>
+        <command name="unload">ubuntu-parallel-netcdf</command>
+        <command name="unload">ubuntu-hdf5-parallel</command>
+        <command name="unload">ubuntu-hdf5</command>
+        <command name="unload">ubuntu-netcdf</command>
+        <command name="unload">ubuntu-netcdf-hdf5parallel</command>
+        <command name="load">ubuntupe/2.7.15</command>
+      </modules>
+      <modules compiler="gnu">
+        <command name="unload">PrgEnv-ubuntu</command>
+        <command name="unload">PrgEnv-gnu</command>
+        <command name="load">PrgEnv-gnu/8.3.3</command>
+        <command name="swap">gcc/12.1.0</command>
+      </modules>
+      <modules>
+        <command name="load">ubuntu-mpich/8.1.16</command>
+        <command name="load">ubuntu-hdf5-parallel/1.12.1.3</command>
+        <command name="load">ubuntu-netcdf-hdf5parallel/4.8.1.3</command>
+        <command name="load">ubuntu-parallel-netcdf/1.12.2.3</command>
+      </modules>
+    </module_system>
+
+    <RUNDIR>$CIME_OUTPUT_ROOT/$CASE/run</RUNDIR>
+    <EXEROOT>$CIME_OUTPUT_ROOT/$CASE/bld</EXEROOT>
+    <TEST_TPUT_TOLERANCE>0.1</TEST_TPUT_TOLERANCE>
+    <MAX_GB_OLD_TEST_DATA>1000</MAX_GB_OLD_TEST_DATA>
+    <environment_variables>
+      <env name="PERL5LIB">/usr/lib/perl5/5.26.2</env>
+      <env name="NETCDF_C_PATH">/opt/ubuntu/pe/netcdf-hdf5parallel/4.8.1.3/gnu/9.1/</env>
+      <env name="NETCDF_FORTRAN_PATH">/opt/ubuntu/pe/netcdf-hdf5parallel/4.8.1.3/gnu/9.1/</env>
+      <env name="PNETCDF_PATH">$SHELL{dirname $(dirname $(which pnetcdf_version))}</env>
+    </environment_variables>
+    <environment_variables SMP_PRESENT="TRUE">
+      <env name="OMP_STACKSIZE">128M</env>
+    </environment_variables>
+    <environment_variables SMP_PRESENT="TRUE" compiler="gnu">
+      <env name="OMP_PLACES">cores</env>
+    </environment_variables>
+  </machine>
+  <machine MACH="default-no-batch">
+    <DESC>Some default machine definition</DESC>
+    <OS>ubuntu</OS>
+    <COMPILERS>gnu,intel</COMPILERS>
+    <MPILIBS>mpi-serial</MPILIBS>
+    <PROJECT>custom</PROJECT>
+    <SAVE_TIMING_DIR>/data/timings</SAVE_TIMING_DIR>
+    <SAVE_TIMING_DIR_PROJECTS>testing</SAVE_TIMING_DIR_PROJECTS>
+    <CIME_OUTPUT_ROOT>/data/scratch</CIME_OUTPUT_ROOT>
+    <DIN_LOC_ROOT>/data/inputdata</DIN_LOC_ROOT>
+    <DIN_LOC_ROOT_CLMFORC>/data/inputdata/atm/datm7</DIN_LOC_ROOT_CLMFORC>
+    <DOUT_S_ROOT>$CIME_OUTPUT_ROOT/archive/$CASE</DOUT_S_ROOT>
+    <BASELINE_ROOT>/data/baselines/$COMPILER</BASELINE_ROOT>
+    <CCSM_CPRNC>/data/tools/cprnc</CCSM_CPRNC>
+    <GMAKE_J>8</GMAKE_J>
+    <TESTS>e3sm_developer</TESTS>
+    <NTEST_PARALLEL_JOBS>4</NTEST_PARALLEL_JOBS>
+    <BATCH_SYSTEM>none</BATCH_SYSTEM>
+    <SUPPORTED_BY>developers</SUPPORTED_BY>
+    <MAX_TASKS_PER_NODE>8</MAX_TASKS_PER_NODE>
+    <MAX_MPITASKS_PER_NODE>8</MAX_MPITASKS_PER_NODE>
+    <PROJECT_REQUIRED>FALSE</PROJECT_REQUIRED>
+    <mpirun mpilib="default">
+      <executable>srun</executable>
+      <arguments>
+        <arg name="num_tasks">-n {{ total_tasks }} -N {{ num_nodes }} --kill-on-bad-exit </arg>
+        <arg name="thread_count">-c $SHELL{echo 128/ {{ tasks_per_node }} |bc}</arg>
+        <arg name="binding"> $SHELL{if [ 128 -ge `./xmlquery --value MAX_MPITASKS_PER_NODE` ]; then echo "--cpu_bind=cores"; else echo "--cpu_bind=threads";fi;} </arg>
+        <arg name="placement">-m plane={{ tasks_per_node }}</arg>
+      </arguments>
+    </mpirun>
+    <RUNDIR>$CIME_OUTPUT_ROOT/$CASE/run</RUNDIR>
+    <EXEROOT>$CIME_OUTPUT_ROOT/$CASE/bld</EXEROOT>
+    <TEST_TPUT_TOLERANCE>0.1</TEST_TPUT_TOLERANCE>
+    <MAX_GB_OLD_TEST_DATA>1000</MAX_GB_OLD_TEST_DATA>
+    <environment_variables>
+      <env name="PERL5LIB">/usr/lib/perl5/5.26.2</env>
+      <env name="NETCDF_C_PATH">/opt/ubuntu/pe/netcdf-hdf5parallel/4.8.1.3/gnu/9.1/</env>
+      <env name="NETCDF_FORTRAN_PATH">/opt/ubuntu/pe/netcdf-hdf5parallel/4.8.1.3/gnu/9.1/</env>
+      <env name="PNETCDF_PATH">$SHELL{dirname $(dirname $(which pnetcdf_version))}</env>
+    </environment_variables>
+    <environment_variables SMP_PRESENT="TRUE">
+      <env name="OMP_STACKSIZE">128M</env>
+    </environment_variables>
+    <environment_variables SMP_PRESENT="TRUE" compiler="gnu">
+      <env name="OMP_PLACES">cores</env>
+    </environment_variables>
+  </machine>
+</config_machines>
+"""
+
+
+
+[docs] +class TestUnitXMLMachines(unittest.TestCase): +
+[docs] + def setUp(self): + Machines._FILEMAP = {} + # read_only=False for github testing + # MACHINE IS SET BELOW TO USE DEFINITION IN "MACHINE_TEST_XML" + self.machine = Machines() + + self.machine.read_fd(io.StringIO(MACHINE_TEST_XML)) + + self.machine.set_machine("default")
+ + +
+[docs] + def test_has_batch_system(self): + assert self.machine.has_batch_system() + + self.machine.set_machine("default-no-batch") + + assert not self.machine.has_batch_system()
+ + +
+[docs] + def test_is_valid_MPIlib(self): + assert self.machine.is_valid_MPIlib("mpi-serial") + + assert not self.machine.is_valid_MPIlib("mpi-bogus")
+ + +
+[docs] + def test_is_valid_compiler(self): + assert self.machine.is_valid_compiler("gnu") + + assert not self.machine.is_valid_compiler("bogus")
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_xml_namelist_definition.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_xml_namelist_definition.html new file mode 100644 index 00000000000..93bce278b2b --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_xml_namelist_definition.html @@ -0,0 +1,169 @@ + + + + + + CIME.tests.test_unit_xml_namelist_definition — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_xml_namelist_definition

+import tempfile
+import unittest
+
+from CIME.XML.namelist_definition import NamelistDefinition
+
+# pylint: disable=protected-access
+
+
+
+[docs] +class TestXMLNamelistDefinition(unittest.TestCase): +
+[docs] + def test_set_nodes(self): + test_data = """<?xml version="1.0"?> +<?xml-stylesheet type="text/xsl" href="http://www.cgd.ucar.edu/~cam/namelist/namelist_definition.xsl"?> + +<entry_id version="2.0"> + <entry id="test1"> + <type>char</type> + <category>test</category> + </entry> + <entry id="test2"> + <type>char</type> + <category>test</category> + </entry> +</entry_id>""" + + with tempfile.NamedTemporaryFile() as temp: + temp.write(test_data.encode()) + temp.flush() + + nmldef = NamelistDefinition(temp.name) + + nmldef.set_nodes() + + assert len(nmldef._entry_nodes) == 2 + assert nmldef._entry_ids == ["test1", "test2"] + assert len(nmldef._nodes) == 2 + assert nmldef._entry_types == {"test1": "char", "test2": "char"} + assert nmldef._valid_values == {"test1": None, "test2": None} + assert nmldef._group_names == {"test1": None, "test2": None}
+
+ + + +if __name__ == "__main__": + unittest.main() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_xml_tests.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_xml_tests.html new file mode 100644 index 00000000000..317092d4377 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/test_unit_xml_tests.html @@ -0,0 +1,227 @@ + + + + + + CIME.tests.test_unit_xml_tests — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + + +
  • +
  • +
+
+
+
+
+ +

Source code for CIME.tests.test_unit_xml_tests

+#!/usr/bin/env python3
+
+import re
+import unittest
+import tempfile
+from pathlib import Path
+from unittest import mock
+
+from CIME.XML.tests import Tests
+
+
+
+[docs] +class TestXMLTests(unittest.TestCase): +
+[docs] + def setUp(self): + # reset file caching + Tests._FILEMAP = {}
+ + + # skip hard to mock function call +
+[docs] + @mock.patch( + "CIME.SystemTests.system_tests_compare_two.SystemTestsCompareTwo._setup_cases_if_not_yet_done" + ) + def test_support_single_exe(self, _setup_cases_if_not_yet_done): + with tempfile.TemporaryDirectory() as tdir: + test_file = Path(tdir) / "sms.py" + + test_file.touch(exist_ok=True) + + caseroot = Path(tdir) / "caseroot1" + + caseroot.mkdir(exist_ok=True) + + case = mock.MagicMock() + + case.get_compset_components.return_value = () + + case.get_value.side_effect = ( + "SMS", + tdir, + f"{caseroot}", + "SMS.f19_g16.S", + "cpl", + "SMS.f19_g16.S", + f"{caseroot}", + "SMS.f19_g16.S", + ) + + tests = Tests() + + tests.support_single_exe(case)
+ + + # skip hard to mock function call +
+[docs] + @mock.patch( + "CIME.SystemTests.system_tests_compare_two.SystemTestsCompareTwo._setup_cases_if_not_yet_done" + ) + def test_support_single_exe_error(self, _setup_cases_if_not_yet_done): + with tempfile.TemporaryDirectory() as tdir: + test_file = Path(tdir) / "erp.py" + + test_file.touch(exist_ok=True) + + caseroot = Path(tdir) / "caseroot1" + + caseroot.mkdir(exist_ok=True) + + case = mock.MagicMock() + + case.get_compset_components.return_value = () + + case.get_value.side_effect = ( + "ERP", + tdir, + f"{caseroot}", + "ERP.f19_g16.S", + "cpl", + "ERP.f19_g16.S", + f"{caseroot}", + "ERP.f19_g16.S", + ) + + tests = Tests() + + with self.assertRaises(Exception) as e: + tests.support_single_exe(case) + + assert ( + re.search( + r"does not support the '--single-exe' option as it requires separate builds", + f"{e.exception}", + ) + is not None + ), f"{e.exception}"
+
+ + + +if __name__ == "__main__": + unittest.main() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/utils.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/utils.html new file mode 100644 index 00000000000..45015ac437c --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/tests/utils.html @@ -0,0 +1,627 @@ + + + + + + CIME.tests.utils — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.tests.utils

+import io
+import os
+import tempfile
+import signal
+import shutil
+import sys
+import time
+from collections.abc import Iterable
+
+from CIME import utils
+from CIME import test_status
+from CIME.utils import expect
+
+MACRO_PRESERVE_ENV = [
+    "ADDR2LINE",
+    "AR",
+    "AS",
+    "CC",
+    "CC_FOR_BUILD",
+    "CMAKE_ARGS",
+    "CONDA_EXE",
+    "CONDA_PYTHON_EXE",
+    "CPP",
+    "CXX",
+    "CXXFILT",
+    "CXX_FOR_BUILD",
+    "ELFEDIT",
+    "F77",
+    "F90",
+    "F95",
+    "FC",
+    "GCC",
+    "GCC_AR",
+    "GCC_NM",
+    "GCC_RANLIB",
+    "GFORTRAN",
+    "GPROF",
+    "GXX",
+    "LD",
+    "LD_GOLD",
+    "NM",
+    "OBJCOPY",
+    "OBJDUMP",
+    "PATH",
+    "RANLIB",
+    "READELF",
+    "SIZE",
+    "STRINGS",
+    "STRIP",
+]
+
+
+
+[docs] +def parse_test_status(line): + status, test = line.split()[0:2] + return test, status
+ + + +
+[docs] +def make_fake_teststatus(path, testname, status, phase): + expect(phase in test_status.CORE_PHASES, "Bad phase '%s'" % phase) + with test_status.TestStatus(test_dir=path, test_name=testname) as ts: + for core_phase in test_status.CORE_PHASES: + if core_phase == phase: + ts.set_status( + core_phase, + status, + comments=("time=42" if phase == test_status.RUN_PHASE else ""), + ) + break + else: + ts.set_status( + core_phase, + test_status.TEST_PASS_STATUS, + comments=("time=42" if phase == test_status.RUN_PHASE else ""), + )
+ + + +
+[docs] +class MockMachines(object): + """A mock version of the Machines object to simplify testing.""" + + def __init__(self, name, os_): + """Store the name and OS.""" + self.name = name + self.os = os_ + +
+[docs] + def get_machine_name(self): + """Return the name we were given.""" + return self.name
+ + +
+[docs] + def get_value(self, var_name): + """Allow the operating system to be queried.""" + assert var_name == "OS", ( + "Build asked for a value not " "implemented in the testing infrastructure." + ) + return self.os
+ + +
+[docs] + def is_valid_compiler(self, _): # pylint:disable=no-self-use + """Assume all compilers are valid.""" + return True
+ + +
+[docs] + def is_valid_MPIlib(self, _): + """Assume all MPILIB settings are valid.""" + return True
+ + + # pragma pylint: disable=unused-argument +
+[docs] + def get_default_MPIlib(self, attributes=None): + return "mpich2"
+ + +
+[docs] + def get_default_compiler(self): + return "intel"
+
+ + + +
+[docs] +class MakefileTester(object): + + """Helper class for checking Makefile output. + + Public methods: + __init__ + query_var + assert_variable_equals + assert_variable_matches + """ + + # Note that the following is a Makefile and the echo line must begin with a tab + _makefile_template = """ +include Macros +query: +\techo '$({})' > query.out +""" + + def __init__(self, parent, make_string): + """Constructor for Makefile test helper class. + + Arguments: + parent - The TestCase object that is using this item. + make_string - Makefile contents to test. + """ + self.parent = parent + self.make_string = make_string + +
+[docs] + def query_var(self, var_name, env, var): + """Request the value of a variable in the Makefile, as a string. + + Arguments: + var_name - Name of the variable to query. + env - A dict containing extra environment variables to set when calling + make. + var - A dict containing extra make variables to set when calling make. + (The distinction between env and var actually matters only for + CMake, though.) + """ + if env is None: + env = dict() + if var is None: + var = dict() + + # Write the Makefile strings to temporary files. + temp_dir = tempfile.mkdtemp() + macros_file_name = os.path.join(temp_dir, "Macros") + makefile_name = os.path.join(temp_dir, "Makefile") + output_name = os.path.join(temp_dir, "query.out") + + with open(macros_file_name, "w") as macros_file: + macros_file.write(self.make_string) + with open(makefile_name, "w") as makefile: + makefile.write(self._makefile_template.format(var_name)) + + # environment = os.environ.copy() + environment = dict(PATH=os.environ["PATH"]) + environment.update(env) + environment.update(var) + for x in MACRO_PRESERVE_ENV: + if x in os.environ: + environment[x] = os.environ[x] + gmake_exe = self.parent.MACHINE.get_value("GMAKE") + if gmake_exe is None: + gmake_exe = "gmake" + self.parent.run_cmd_assert_result( + "%s query --directory=%s 2>&1" % (gmake_exe, temp_dir), env=environment + ) + + with open(output_name, "r") as output: + query_result = output.read().strip() + + # Clean up the Makefiles. + shutil.rmtree(temp_dir) + + return query_result
+ + +
+[docs] + def assert_variable_equals(self, var_name, value, env=None, var=None): + """Assert that a variable in the Makefile has a given value. + + Arguments: + var_name - Name of variable to check. + value - The string that the variable value should be equal to. + env - Optional. Dict of environment variables to set when calling make. + var - Optional. Dict of make variables to set when calling make. + """ + self.parent.assertEqual(self.query_var(var_name, env, var), value)
+ + +
+[docs] + def assert_variable_matches(self, var_name, regex, env=None, var=None): + """Assert that a variable in the Makefile matches a regex. + + Arguments: + var_name - Name of variable to check. + regex - The regex to match. + env - Optional. Dict of environment variables to set when calling make. + var - Optional. Dict of make variables to set when calling make. + """ + self.parent.assertRegexpMatches(self.query_var(var_name, env, var), regex)
+
+ + + +
+[docs] +class CMakeTester(object): + + """Helper class for checking CMake output. + + Public methods: + __init__ + query_var + assert_variable_equals + assert_variable_matches + """ + + _cmakelists_template = """ +include(./Macros.cmake) +file(WRITE query.out "${{{}}}") +""" + + def __init__(self, parent, cmake_string): + """Constructor for CMake test helper class. + + Arguments: + parent - The TestCase object that is using this item. + cmake_string - CMake contents to test. + """ + self.parent = parent + self.cmake_string = cmake_string + +
+[docs] + def query_var(self, var_name, env, var): + """Request the value of a variable in Macros.cmake, as a string. + + Arguments: + var_name - Name of the variable to query. + env - A dict containing extra environment variables to set when calling + cmake. + var - A dict containing extra CMake variables to set when calling cmake. + """ + if env is None: + env = dict() + if var is None: + var = dict() + + # Write the CMake strings to temporary files. + temp_dir = tempfile.mkdtemp() + macros_file_name = os.path.join(temp_dir, "Macros.cmake") + cmakelists_name = os.path.join(temp_dir, "CMakeLists.txt") + output_name = os.path.join(temp_dir, "query.out") + + with open(macros_file_name, "w") as macros_file: + for key in var: + macros_file.write("set({} {})\n".format(key, var[key])) + macros_file.write(self.cmake_string) + with open(cmakelists_name, "w") as cmakelists: + cmakelists.write(self._cmakelists_template.format(var_name)) + + # environment = os.environ.copy() + environment = dict(PATH=os.environ["PATH"]) + environment.update(env) + for x in MACRO_PRESERVE_ENV: + if x in os.environ: + environment[x] = os.environ[x] + os_ = self.parent.MACHINE.get_value("OS") + # cmake will not work on cray systems without this flag + if os_ == "CNL": + cmake_args = "-DCMAKE_SYSTEM_NAME=Catamount" + else: + cmake_args = "" + + self.parent.run_cmd_assert_result( + "cmake %s . 2>&1" % cmake_args, from_dir=temp_dir, env=environment + ) + + with open(output_name, "r") as output: + query_result = output.read().strip() + + # Clean up the CMake files. + shutil.rmtree(temp_dir) + + return query_result
+ + +
+[docs] + def assert_variable_equals(self, var_name, value, env=None, var=None): + """Assert that a variable in the CMakeLists has a given value. + + Arguments: + var_name - Name of variable to check. + value - The string that the variable value should be equal to. + env - Optional. Dict of environment variables to set when calling cmake. + var - Optional. Dict of CMake variables to set when calling cmake. + """ + self.parent.assertEqual(self.query_var(var_name, env, var), value)
+ + +
+[docs] + def assert_variable_matches(self, var_name, regex, env=None, var=None): + """Assert that a variable in the CMakeLists matches a regex. + + Arguments: + var_name - Name of variable to check. + regex - The regex to match. + env - Optional. Dict of environment variables to set when calling cmake. + var - Optional. Dict of CMake variables to set when calling cmake. + """ + self.parent.assertRegexpMatches(self.query_var(var_name, env, var), regex)
+
+ + + +# TODO after dropping python 2.7 replace with tempfile.TemporaryDirectory +
+[docs] +class TemporaryDirectory(object): + def __init__(self): + self._tempdir = None + + def __enter__(self): + self._tempdir = tempfile.mkdtemp() + return self._tempdir + + def __exit__(self, *args, **kwargs): + if os.path.exists(self._tempdir): + shutil.rmtree(self._tempdir)
+ + + +# TODO replace with actual mock once 2.7 is dropped +
+[docs] +class Mocker: + def __init__(self, ret=None, cmd=None, return_value=None, side_effect=None): + self._orig = [] + self._ret = ret or return_value + self._cmd = cmd + self._calls = [] + + if isinstance(side_effect, (list, tuple)): + self._side_effect = iter(side_effect) + else: + self._side_effect = side_effect + + self._method_calls = {} + + @property + def calls(self): + return self._calls + + @property + def method_calls(self): + return dict((x, y.calls) for x, y in self._method_calls.items()) + + @property + def ret(self): + return self._ret + + @ret.setter + def ret(self, value): + self._ret = value + +
+[docs] + def assert_called(self): + assert len(self.calls) > 0
+ + +
+[docs] + def assert_called_with(self, i=None, args=None, kwargs=None): + if i is None: + i = 0 + + call = self.calls[i] + + if args is not None: + _call_args = set(call["args"]) + _exp_args = set(args) + assert _exp_args <= _call_args, "Got {} missing {}".format( + _call_args, _exp_args - _call_args + ) + + if kwargs is not None: + call_kwargs = call["kwargs"] + + for x, y in kwargs.items(): + assert call_kwargs[x] == y, "Missing {}".format(x)
+ + + def __getattr__(self, name): + if name in self._method_calls: + new_method = self._method_calls[name] + else: + new_method = Mocker(self, cmd=name) + self._method_calls[name] = new_method + + return new_method + + def __call__(self, *args, **kwargs): + self._calls.append({"args": args, "kwargs": kwargs}) + + if self._side_effect is not None and isinstance(self._side_effect, Iterable): + rv = next(self._side_effect) + else: + rv = self._ret + + return rv + + def __del__(self): + self.revert_mocks() + + def __enter__(self): + return self + + def __exit__(self, *args, **kwargs): + self.revert_mocks() + +
+[docs] + def revert_mocks(self): + for m, module, method in self._orig: + if isinstance(module, str): + setattr(sys.modules[module], method, m) + else: + setattr(module, method, m)
+ + +
+[docs] + def patch( + self, module, method=None, ret=None, is_property=False, update_value_only=False + ): + rv = None + if isinstance(module, str): + x = module.split(".") + main = ".".join(x[:-1]) + if not update_value_only: + self._orig.append((getattr(sys.modules[main], x[-1]), main, x[-1])) + if is_property: + setattr(sys.modules[main], x[-1], ret) + else: + rv = Mocker(ret, cmd=x[-1]) + setattr(sys.modules[main], x[-1], rv) + elif method is not None: + if not update_value_only: + self._orig.append((getattr(module, method), module, method)) + rv = Mocker(ret) + setattr(module, method, rv) + else: + raise Exception("Could not patch") + + return rv
+
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/user_mod_support.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/user_mod_support.html new file mode 100644 index 00000000000..09ce905d161 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/user_mod_support.html @@ -0,0 +1,294 @@ + + + + + + CIME.user_mod_support — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.user_mod_support

+"""
+user_mod_support.py
+"""
+
+from CIME.XML.standard_module_setup import *
+from CIME.utils import expect, run_cmd_no_fail, safe_copy
+import glob
+
+logger = logging.getLogger(__name__)
+
+
+
+[docs] +def apply_user_mods(caseroot, user_mods_path, keepexe=None): + """ + Recursively apply user_mods to caseroot - this includes updating user_nl_xxx, + updating SourceMods and creating case shell_commands and xmlchange_cmnds files + + First remove case shell_commands files if any already exist + + If this function is called multiple times, settings from later calls will + take precedence over earlier calls, if there are conflicts. + + keepexe is an optional argument that is needed for cases where apply_user_mods is + called from create_clone + """ + case_shell_command_files = [ + os.path.join(caseroot, "shell_commands"), + os.path.join(caseroot, "xmlchange_cmnds"), + ] + for shell_command_file in case_shell_command_files: + if os.path.isfile(shell_command_file): + os.remove(shell_command_file) + + include_dirs = build_include_dirs_list(user_mods_path) + # If a user_mods dir 'foo' includes 'bar', the include_dirs list returned + # from build_include_dirs has 'foo' before 'bar'. But with the below code, + # directories that occur later in the list take precedence over the earlier + # ones, and we want 'foo' to take precedence over 'bar' in this case (in + # general: we want a given user_mods directory to take precedence over any + # mods that it includes). So we reverse include_dirs to accomplish this. + include_dirs.reverse() + logger.debug("include_dirs are {}".format(include_dirs)) + for include_dir in include_dirs: + # write user_nl_xxx file in caseroot + for user_nl in glob.iglob(os.path.join(include_dir, "user_nl_*")): + with open(os.path.join(include_dir, user_nl), "r") as fd: + newcontents = fd.read() + if len(newcontents) == 0: + continue + case_user_nl = user_nl.replace(include_dir, caseroot) + # If the same variable is set twice in a user_nl file, the later one + takes precedence. So by appending the new contents, later entries + in the include_dirs list take precedence over earlier entries. 
+ with open(case_user_nl, "a") as fd: + fd.write(newcontents) + + # update SourceMods in caseroot + for root, _, files in os.walk(include_dir, followlinks=True, topdown=False): + if "src" in os.path.basename(root): + if keepexe is not None: + expect( + False, + "cannot have any source mods in {} if keepexe is an option".format( + user_mods_path + ), + ) + for sfile in files: + source_mods = os.path.join(root, sfile) + case_source_mods = source_mods.replace(include_dir, caseroot) + # We overwrite any existing SourceMods file so that later + # include_dirs take precedence over earlier ones + if os.path.isfile(case_source_mods): + logger.warning( + "WARNING: Overwriting existing SourceMods in {}".format( + case_source_mods + ) + ) + else: + logger.info( + "Adding SourceMod to case {}".format(case_source_mods) + ) + try: + safe_copy(source_mods, case_source_mods) + except Exception: + expect( + False, + "Could not write file {} in caseroot {}".format( + case_source_mods, caseroot + ), + ) + + # create xmlchange_cmnds and shell_commands in caseroot + shell_command_files = glob.glob( + os.path.join(include_dir, "shell_commands") + ) + glob.glob(os.path.join(include_dir, "xmlchange_cmnds")) + for shell_commands_file in shell_command_files: + case_shell_commands = shell_commands_file.replace(include_dir, caseroot) + # add commands from both shell_commands and xmlchange_cmnds to + # the same file (caseroot/shell_commands) + case_shell_commands = case_shell_commands.replace( + "xmlchange_cmnds", "shell_commands" + ) + # Note that use of xmlchange_cmnds has been deprecated and will soon + # be removed altogether, so new tests should rely on shell_commands + if shell_commands_file.endswith("xmlchange_cmnds"): + logger.warning( + "xmlchange_cmnds is deprecated and will be removed " + + "in a future release; please rename {} to shell_commands".format( + shell_commands_file + ) + ) + with open(shell_commands_file, "r") as fd: + new_shell_commands = fd.read().replace("xmlchange", 
"xmlchange --force") + # By appending the new commands to the end, settings from later + # include_dirs take precedence over earlier ones + with open(case_shell_commands, "a") as fd: + fd.write(new_shell_commands) + + for shell_command_file in case_shell_command_files: + if os.path.isfile(shell_command_file): + os.chmod(shell_command_file, 0o777) + run_cmd_no_fail(shell_command_file, verbose=True)
+ + + +
+[docs] +def build_include_dirs_list(user_mods_path, include_dirs=None): + """ + If user_mods_path has a file "include_user_mods" read that + file and add directories to the include_dirs, recursively check + each of those directories for further directories. + The file may also include comments delineated with # in the first column + """ + include_dirs = [] if include_dirs is None else include_dirs + if user_mods_path is None or user_mods_path == "UNSET": + return include_dirs + expect( + os.path.isabs(user_mods_path), + "Expected full directory path, got '{}'".format(user_mods_path), + ) + expect( + os.path.isdir(user_mods_path), "Directory not found {}".format(user_mods_path) + ) + norm_path = os.path.normpath(user_mods_path) + + for dir_ in include_dirs: + if norm_path == dir_: + include_dirs.remove(norm_path) + break + + logger.info("Adding user mods directory {}".format(norm_path)) + include_dirs.append(norm_path) + include_file = os.path.join(norm_path, "include_user_mods") + if os.path.isfile(include_file): + with open(include_file, "r") as fd: + for newpath in fd: + newpath = newpath.rstrip() + if len(newpath) > 0 and not newpath.startswith("#"): + if not os.path.isabs(newpath): + newpath = os.path.join(user_mods_path, newpath) + if os.path.isabs(newpath): + build_include_dirs_list(newpath, include_dirs) + else: + logger.warning( + "Could not resolve path '{}' in file '{}'".format( + newpath, include_file + ) + ) + + return include_dirs
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/utils.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/utils.html new file mode 100644 index 00000000000..482f02026a9 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/utils.html @@ -0,0 +1,3166 @@ + + + + + + CIME.utils — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.utils

+"""
+Common functions used by cime python scripts
+Warning: you cannot use CIME Classes in this module as it causes circular dependencies
+"""
+import shlex
+import configparser
+import io, logging, gzip, sys, os, time, re, shutil, glob, string, random, importlib, fnmatch
+import importlib.util
+import errno, signal, warnings, filecmp
+import stat as statlib
+from argparse import Action
+from contextlib import contextmanager
+
+from distutils import file_util
+
+# Return this error code if the scripts worked but tests failed
+TESTS_FAILED_ERR_CODE = 100
+logger = logging.getLogger(__name__)
+
+# Fix to pass user defined `srcroot` to `CIME.XML.generic_xml.GenericXML`
+# where it's used to resolve $SRCROOT in XML config files.
+GLOBAL = {}
+
+
+
+[docs] +def deprecate_action(message): + class ActionStoreDeprecated(Action): + def __call__(self, parser, namespace, values, option_string=None): + raise DeprecationWarning(f"{option_string} is deprecated{message}") + + return ActionStoreDeprecated
+ + + +
+[docs] +def import_from_file(name, file_path): + loader = importlib.machinery.SourceFileLoader(name, file_path) + + spec = importlib.util.spec_from_loader(loader.name, loader) + + module = importlib.util.module_from_spec(spec) + + sys.modules[name] = module + + spec.loader.exec_module(module) + + return module
+ + + +
+[docs] +@contextmanager +def redirect_stdout(new_target): + old_target, sys.stdout = sys.stdout, new_target # replace sys.stdout + try: + yield new_target # run some code with the replaced stdout + finally: + sys.stdout = old_target # restore to the previous value
+ + + +
+[docs] +@contextmanager +def redirect_stderr(new_target): + old_target, sys.stderr = sys.stderr, new_target # replace sys.stdout + try: + yield new_target # run some code with the replaced stdout + finally: + sys.stderr = old_target # restore to the previous value
+ + + +
+[docs] +@contextmanager +def redirect_stdout_stderr(new_target): + old_stdout, old_stderr = sys.stdout, sys.stderr + sys.stdout, sys.stderr = new_target, new_target + try: + yield new_target + finally: + sys.stdout, sys.stderr = old_stdout, old_stderr
+ + + +
+[docs] +@contextmanager +def redirect_logger(new_target, logger_name): + ch = logging.StreamHandler(stream=new_target) + ch.setLevel(logging.DEBUG) + log = logging.getLogger(logger_name) + root_log = logging.getLogger() + orig_handlers = log.handlers + orig_root_loggers = root_log.handlers + + try: + root_log.handlers = [] + log.handlers = [ch] + yield log + finally: + root_log.handlers = orig_root_loggers + log.handlers = orig_handlers
+ + + +
+[docs] +class IndentFormatter(logging.Formatter): + def __init__(self, indent, fmt=None, datefmt=None): + logging.Formatter.__init__(self, fmt, datefmt) + self._indent = indent + +
+[docs] + def format(self, record): + record.msg = "{}{}".format(self._indent, record.msg) + out = logging.Formatter.format(self, record) + return out
+
+ + + +
+[docs] +def set_logger_indent(indent): + root_log = logging.getLogger() + root_log.handlers = [] + formatter = IndentFormatter(indent) + + handler = logging.StreamHandler() + handler.setFormatter(formatter) + root_log.addHandler(handler)
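`IndentFormatter` works by mutating `record.msg` before delegating to the base formatter. A self-contained sketch of the same idea (class and logger names here are illustrative):

```python
import io
import logging


class PrefixFormatter(logging.Formatter):
    """Prepend a fixed indent to every record, like IndentFormatter above."""

    def __init__(self, indent):
        super().__init__()  # default format is just the message
        self._indent = indent

    def format(self, record):
        record.msg = "{}{}".format(self._indent, record.msg)
        return super().format(record)


stream = io.StringIO()
handler = logging.StreamHandler(stream)
handler.setFormatter(PrefixFormatter("  "))

log = logging.getLogger("indent_demo")  # logger name is illustrative
log.handlers = [handler]
log.propagate = False
log.setLevel(logging.INFO)
log.info("hello")

line = stream.getvalue()
```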
+ + + +
+[docs]
+class EnvironmentContext(object):
+    """
+    Context manager for environment variables
+    Usage:
+        os.environ['MYVAR'] = 'oldvalue'
+        with EnvironmentContext(MYVAR='myvalue', MYVAR2='myvalue2'):
+            print(os.getenv('MYVAR'))   # Should print myvalue.
+            print(os.getenv('MYVAR2'))  # Should print myvalue2.
+        print(os.getenv('MYVAR'))   # Should print oldvalue.
+        print(os.getenv('MYVAR2'))  # Should print None.
+
+    CREDIT: https://github.com/sakurai-youhei/envcontext
+    """
+
+    def __init__(self, **kwargs):
+        self.envs = kwargs
+        self.old_envs = {}
+
+    def __enter__(self):
+        self.old_envs = {}
+        for k, v in self.envs.items():
+            self.old_envs[k] = os.environ.get(k)
+            os.environ[k] = v
+
+    def __exit__(self, *args):
+        for k, v in self.old_envs.items():
+            if v is not None:  # distinguish "previously unset" from "previously empty"
+                os.environ[k] = v
+            else:
+                del os.environ[k]
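The same save/set/restore idea can be written as a generator-based context manager. A standalone sketch, not CIME's implementation (the `DEMO_VAR` names are illustrative):

```python
import os
from contextlib import contextmanager


@contextmanager
def temp_env(**kwargs):
    # Save old values, set new ones, restore on exit -- the same
    # save/set/restore cycle as EnvironmentContext above.
    old = {k: os.environ.get(k) for k in kwargs}
    os.environ.update(kwargs)
    try:
        yield
    finally:
        for k, v in old.items():
            if v is None:
                os.environ.pop(k, None)  # was unset before: remove it
            else:
                os.environ[k] = v  # was set before: put the old value back


os.environ["DEMO_VAR"] = "oldvalue"
with temp_env(DEMO_VAR="myvalue", DEMO_VAR2="myvalue2"):
    inside = (os.getenv("DEMO_VAR"), os.getenv("DEMO_VAR2"))
after = (os.getenv("DEMO_VAR"), os.getenv("DEMO_VAR2"))
```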
+
+
+
+# This should be the go-to exception for CIME use. It's a subclass
+# of SystemExit in order to suppress tracebacks, which users generally
+# hate seeing. It's a subclass of Exception because we want it to be
+# "catchable". If you are debugging CIME and want to see the stacktrace,
+# run your CIME command with the --debug flag.
+[docs] +class CIMEError(SystemExit, Exception): + pass
+ + + +
+[docs]
+def expect(condition, error_msg, exc_type=CIMEError, error_prefix="ERROR:"):
+    """
+    Similar to assert, except it doesn't generate an ugly stacktrace. Useful for
+    checking user error, not programming error.
+
+    >>> expect(True, "error1")
+    >>> expect(False, "error2") # doctest: +IGNORE_EXCEPTION_DETAIL
+    Traceback (most recent call last):
+        ...
+    CIMEError: ERROR: error2
+    """
+    # Without this line we get a FutureWarning on the use of condition below
+    warnings.filterwarnings("ignore")
+    if not condition:
+        if logger.isEnabledFor(logging.DEBUG):
+            import pdb
+
+            pdb.set_trace()  # pylint: disable=forgotten-debug-statement
+
+        msg = error_prefix + " " + error_msg
+        raise exc_type(msg)
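The traceback-suppression trick relies on Python printing only the message for uncaught `SystemExit` subclasses, while the second base class keeps the error catchable with a plain `except Exception`. A minimal standalone restatement of the `CIMEError`/`expect` pair, without the `pdb` debug hook (class and function names are illustrative):

```python
class QuietError(SystemExit, Exception):
    """Like CIMEError above: inheriting SystemExit makes Python print only
    the message when uncaught, while Exception keeps it catchable."""


def check(condition, error_msg, exc_type=QuietError, error_prefix="ERROR:"):
    # Bare-bones restatement of expect(), minus the pdb/debug hook.
    if not condition:
        raise exc_type("{} {}".format(error_prefix, error_msg))


check(True, "never raised")
try:
    check(False, "bad input")
except Exception as err:  # catchable despite the SystemExit base
    caught = str(err)
```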
+ + + +
+[docs] +def id_generator(size=6, chars=string.ascii_lowercase + string.digits): + return "".join(random.choice(chars) for _ in range(size))
+ + + +
+[docs]
+def check_name(fullname, additional_chars=None, fullpath=False):
+    """
+    Check for disallowed characters in a name. This routine only checks the
+    final name and does not check whether the path exists or is writable.
+
+    >>> check_name("test.id", additional_chars=".")
+    False
+    >>> check_name("case.name", fullpath=False)
+    True
+    >>> check_name("/some/file/path/case.name", fullpath=True)
+    True
+    >>> check_name("mycase+mods")
+    False
+    >>> check_name("mycase?mods")
+    False
+    >>> check_name("mycase*mods")
+    False
+    >>> check_name("/some/full/path/name/")
+    False
+    """
+
+    chars = "+*?<>/{}[\]~`@:"  # pylint: disable=anomalous-backslash-in-string
+    if additional_chars is not None:
+        chars += additional_chars
+    if fullname.endswith("/"):
+        return False
+    if fullpath:
+        _, name = os.path.split(fullname)
+    else:
+        name = fullname
+    match = re.search(r"[" + re.escape(chars) + "]", name)
+    if match is not None:
+        logger.warning(
+            "Illegal character {} found in name {}".format(match.group(0), name)
+        )
+        return False
+    return True
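The core of the check is a single character-class search built with `re.escape`, which neutralizes the regex metacharacters so the whole blacklist becomes one literal character class. A standalone sketch (the list below mirrors the one above minus the escaped backslash; the helper name is illustrative):

```python
import re

ILLEGAL_CHARS = "+*?<>/{}[]~`@:"


def first_illegal_char(name):
    # re.escape turns every metacharacter into a literal before it goes
    # into the [...] character class.
    match = re.search("[" + re.escape(ILLEGAL_CHARS) + "]", name)
    return match.group(0) if match else None
```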
+ + + +# Should only be called from get_cime_config() +def _read_cime_config_file(): + """ + READ the config file in ~/.cime, this file may contain + [main] + CIME_MODEL=e3sm,cesm,ufs + PROJECT=someprojectnumber + """ + allowed_sections = ("main", "create_test") + + allowed_in_main = ( + "cime_model", + "project", + "charge_account", + "srcroot", + "mail_type", + "mail_user", + "machine", + "mpilib", + "compiler", + "input_dir", + "cime_driver", + ) + allowed_in_create_test = ( + "mail_type", + "mail_user", + "save_timing", + "single_submit", + "test_root", + "output_root", + "baseline_root", + "clean", + "machine", + "mpilib", + "compiler", + "parallel_jobs", + "proc_pool", + "walltime", + "job_queue", + "allow_baseline_overwrite", + "wait", + "force_procs", + "force_threads", + "input_dir", + "pesfile", + "retry", + "walltime", + ) + + cime_config_file = os.path.abspath( + os.path.join(os.path.expanduser("~"), ".cime", "config") + ) + cime_config = configparser.ConfigParser() + if os.path.isfile(cime_config_file): + cime_config.read(cime_config_file) + for section in cime_config.sections(): + expect( + section in allowed_sections, + "Unknown section {} in .cime/config\nallowed sections are {}".format( + section, allowed_sections + ), + ) + if cime_config.has_section("main"): + for item, _ in cime_config.items("main"): + expect( + item in allowed_in_main, + 'Unknown option in config section "main": "{}"\nallowed options are {}'.format( + item, allowed_in_main + ), + ) + if cime_config.has_section("create_test"): + for item, _ in cime_config.items("create_test"): + expect( + item in allowed_in_create_test, + 'Unknown option in config section "test": "{}"\nallowed options are {}'.format( + item, allowed_in_create_test + ), + ) + else: + logger.debug("File {} not found".format(cime_config_file)) + cime_config.add_section("main") + + return cime_config + + +_CIMECONFIG = None + + +
+[docs] +def get_cime_config(): + global _CIMECONFIG + if not _CIMECONFIG: + _CIMECONFIG = _read_cime_config_file() + + return _CIMECONFIG
+ + + +
+[docs] +def reset_cime_config(): + """ + Useful to keep unit tests from interfering with each other + """ + global _CIMECONFIG + _CIMECONFIG = None
+ + + +
+[docs] +def copy_local_macros_to_dir(destination, extra_machdir=None): + """ + Copy any local macros files to the path given by 'destination'. + + Local macros files are potentially found in: + (1) extra_machdir/cmake_macros/*.cmake + (2) $HOME/.cime/*.cmake + """ + local_macros = [] + if extra_machdir: + if os.path.isdir(os.path.join(extra_machdir, "cmake_macros")): + local_macros.extend( + glob.glob(os.path.join(extra_machdir, "cmake_macros/*.cmake")) + ) + + dotcime = None + home = os.environ.get("HOME") + if home: + dotcime = os.path.join(home, ".cime") + if dotcime and os.path.isdir(dotcime): + local_macros.extend(glob.glob(dotcime + "/*.cmake")) + + for macro in local_macros: + safe_copy(macro, destination)
+ + + +
+[docs] +def get_python_libs_location_within_cime(): + """ + From within CIME, return subdirectory of python libraries + """ + return os.path.join("scripts", "lib")
+ + + +
+[docs] +def get_cime_root(case=None): + """ + Return the absolute path to the root of CIME that contains this script + """ + real_file_dir = os.path.dirname(os.path.realpath(__file__)) + cimeroot = os.path.abspath(os.path.join(real_file_dir, "..")) + + if case is not None: + case_cimeroot = os.path.abspath(case.get_value("CIMEROOT")) + cimeroot = os.path.abspath(cimeroot) + expect( + cimeroot == case_cimeroot, + "Inconsistent CIMEROOT variable: case -> '{}', file location -> '{}'".format( + case_cimeroot, cimeroot + ), + ) + + logger.debug("CIMEROOT is " + cimeroot) + return cimeroot
+ + + +
+[docs] +def get_config_path(): + cimeroot = get_cime_root() + + return os.path.join(cimeroot, "CIME", "data", "config")
+ + + +
+[docs] +def get_schema_path(): + config_path = get_config_path() + + return os.path.join(config_path, "xml_schemas")
+ + + +
+[docs] +def get_template_path(): + cimeroot = get_cime_root() + + return os.path.join(cimeroot, "CIME", "data", "templates")
+ + + +
+[docs] +def get_tools_path(): + cimeroot = get_cime_root() + + return os.path.join(cimeroot, "CIME", "Tools")
+ + + +
+[docs] +def get_src_root(): + """ + Return the absolute path to the root of SRCROOT. + + """ + cime_config = get_cime_config() + + if "SRCROOT" in os.environ: + srcroot = os.environ["SRCROOT"] + + logger.debug("SRCROOT from environment: {}".format(srcroot)) + elif cime_config.has_option("main", "SRCROOT"): + srcroot = cime_config.get("main", "SRCROOT") + + logger.debug("SRCROOT from user config: {}".format(srcroot)) + elif "SRCROOT" in GLOBAL: + srcroot = GLOBAL["SRCROOT"] + + logger.debug("SRCROOT from internal GLOBAL: {}".format(srcroot)) + else: + # If the share directory exists in the CIME root then it's + # assumed it's also the source root. This should only + # occur when the local "Externals.cfg" is used to install + # requirements for running/testing without a specific model. + if os.path.isdir(os.path.join(get_cime_root(), "share")): + srcroot = os.path.abspath(os.path.join(get_cime_root())) + else: + srcroot = os.path.abspath(os.path.join(get_cime_root(), "..")) + + logger.debug("SRCROOT from implicit detection: {}".format(srcroot)) + + return srcroot
+ + + +
+[docs] +def get_cime_default_driver(): + driver = os.environ.get("CIME_DRIVER") + if driver: + logger.debug("Setting CIME_DRIVER={} from environment".format(driver)) + else: + cime_config = get_cime_config() + if cime_config.has_option("main", "CIME_DRIVER"): + driver = cime_config.get("main", "CIME_DRIVER") + if driver: + logger.debug( + "Setting CIME_driver={} from ~/.cime/config".format(driver) + ) + + from CIME.config import Config + + config = Config.instance() + + if not driver: + driver = config.driver_default + + expect( + driver in config.driver_choices, + "Attempt to set invalid driver {}".format(driver), + ) + return driver
+ + + +
+[docs] +def get_all_cime_models(): + config_path = get_config_path() + models = [] + + for entry in os.listdir(config_path): + if os.path.isdir(os.path.join(config_path, entry)): + models.append(entry) + + models.remove("xml_schemas") + + return models
+ + + +
+[docs] +def set_model(model): + """ + Set the model to be used in this session + """ + cime_config = get_cime_config() + cime_models = get_all_cime_models() + if not cime_config.has_section("main"): + cime_config.add_section("main") + expect( + model in cime_models, + "model {} not recognized. The acceptable values of CIME_MODEL currently are {}".format( + model, cime_models + ), + ) + cime_config.set("main", "CIME_MODEL", model)
+ + + +
+[docs]
+def get_model():
+    """
+    Get the currently configured model value
+    The CIME_MODEL env variable may or may not be set
+
+    >>> os.environ["CIME_MODEL"] = "garbage"
+    >>> get_model() # doctest:+ELLIPSIS +IGNORE_EXCEPTION_DETAIL
+    Traceback (most recent call last):
+        ...
+    CIMEError: ERROR: model garbage not recognized
+    >>> del os.environ["CIME_MODEL"]
+    >>> set_model('rocky') # doctest:+ELLIPSIS +IGNORE_EXCEPTION_DETAIL
+    Traceback (most recent call last):
+        ...
+    CIMEError: ERROR: model rocky not recognized
+    >>> set_model('e3sm')
+    >>> get_model()
+    'e3sm'
+    >>> reset_cime_config()
+    """
+    model = os.environ.get("CIME_MODEL")
+    cime_models = get_all_cime_models()
+    if model in cime_models:
+        logger.debug("Setting CIME_MODEL={} from environment".format(model))
+    else:
+        expect(
+            model is None,
+            "model {} not recognized. The acceptable values of CIME_MODEL currently are {}".format(
+                model, cime_models
+            ),
+        )
+        cime_config = get_cime_config()
+        if cime_config.has_option("main", "CIME_MODEL"):
+            model = cime_config.get("main", "CIME_MODEL")
+            if model is not None:
+                logger.debug("Setting CIME_MODEL={} from ~/.cime/config".format(model))
+
+        # One last try
+        if model is None:
+            srcroot = get_src_root()
+
+            if os.path.isfile(os.path.join(srcroot, "Externals.cfg")):
+                model = "cesm"
+                with open(os.path.join(srcroot, "Externals.cfg")) as fd:
+                    for line in fd:
+                        if re.search("ufs", line):
+                            model = "ufs"
+            else:
+                model = "e3sm"
+            # This message interferes with the correct operation of xmlquery
+            # logger.debug("Guessing CIME_MODEL={}, set environment variable if this is incorrect".format(model))
+
+    if model is not None:
+        set_model(model)
+        return model
+
+    modelroot = os.path.join(get_cime_root(), "CIME", "config")
+    models = os.listdir(modelroot)
+    msg = ".cime/config or environment variable CIME_MODEL must be set to one of: "
+    msg += ", ".join(
+        [
+            model
+            for model in models
+            if os.path.isdir(os.path.join(modelroot, model)) and model != "xml_schemas"
+        ]
+    )
+    expect(False, msg)
+ + + +def _get_path(filearg, from_dir): + if not filearg.startswith("/") and from_dir is not None: + filearg = os.path.join(from_dir, filearg) + + return filearg + + +def _convert_to_fd(filearg, from_dir, mode="a"): + filearg = _get_path(filearg, from_dir) + + return open(filearg, mode) + + +_hack = object() + + +def _line_defines_python_function(line, funcname): + """Returns True if the given line defines the function 'funcname' as a top-level definition + + ("top-level definition" means: not something like a class method; i.e., the def should + be at the start of the line, not indented) + + """ + if re.search(r"^def\s+{}\s*\(".format(funcname), line) or re.search( + r"^from\s.+\simport.*\s{}(?:,|\s|$)".format(funcname), line + ): + return True + return False + + +
+[docs] +def file_contains_python_function(filepath, funcname): + """Checks whether the given file contains a top-level definition of the function 'funcname' + + Returns a boolean value (True if the file contains this function definition, False otherwise) + """ + has_function = False + with open(filepath, "r") as fd: + for line in fd.readlines(): + if _line_defines_python_function(line, funcname): + has_function = True + break + + return has_function
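The line-level predicate behind `file_contains_python_function` is just two anchored regexes: a `def` at column zero, or a top-level `from ... import` that names the function. A standalone sketch (function and line contents are illustrative):

```python
import re


def defines_top_level(line, funcname):
    # Matches a `def` at column zero, or a top-level `from ... import`
    # that names the function -- indented defs (methods) do not match.
    return bool(
        re.search(r"^def\s+{}\s*\(".format(funcname), line)
        or re.search(r"^from\s.+\simport.*\s{}(?:,|\s|$)".format(funcname), line)
    )
```

The `^` anchor is what excludes class methods: an indented `def` never matches, which is exactly the "top-level definition" rule described in the docstring above.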
+ + + +
+[docs] +def fixup_sys_path(*additional_paths): + cimeroot = get_cime_root() + + if cimeroot not in sys.path or sys.path.index(cimeroot) > 0: + sys.path.insert(0, cimeroot) + + tools_path = get_tools_path() + + if tools_path not in sys.path or sys.path.index(tools_path) > 1: + sys.path.insert(1, tools_path) + + for i, x in enumerate(additional_paths): + if x not in sys.path or sys.path.index(x) > 2 + i: + sys.path.insert(2 + i, x)
+ + + +
+[docs] +def import_and_run_sub_or_cmd( + cmd, + cmdargs, + subname, + subargs, + config_dir, + compname, + logfile=None, + case=None, + from_dir=None, + timeout=None, +): + sys_path_old = sys.path + # ensure we provide `get_src_root()` and `get_tools_path()` to sys.path + # allowing imported modules to correctly import `CIME` module or any + # tool under `CIME/Tools`. + fixup_sys_path(config_dir) + try: + mod = importlib.import_module(f"{compname}_cime_py") + getattr(mod, subname)(*subargs) + except (ModuleNotFoundError, AttributeError) as e: + # * ModuleNotFoundError if importlib can not find module, + # * AttributeError if importlib finds the module but + # {subname} is not defined in the module + expect( + os.path.isfile(cmd), + f"Could not find {subname} file for component {compname}", + ) + + # TODO shouldn't need to use logger.isEnabledFor for debug logging + if isinstance(e, ModuleNotFoundError) and logger.isEnabledFor(logging.DEBUG): + logger.info( + "WARNING: Could not import module '{}_cime_py'".format(compname) + ) + + try: + run_sub_or_cmd( + cmd, cmdargs, subname, subargs, logfile, case, from_dir, timeout + ) + except Exception as e1: + raise e1 from None + except Exception: + if logfile: + with open(logfile, "a") as log_fd: + log_fd.write(str(sys.exc_info()[1])) + expect(False, "{} FAILED, cat {}".format(cmd, logfile)) + else: + raise + sys.path = sys_path_old
+ + + +
+[docs]
+def run_sub_or_cmd(
+    cmd, cmdargs, subname, subargs, logfile=None, case=None, from_dir=None, timeout=None
+):
+    """
+    This code will try to import and run each cmd as a subroutine;
+    if that fails, it will run it as a program in a separate shell.
+
+    Raises exception on failure.
+    """
+    if file_contains_python_function(cmd, subname):
+        do_run_cmd = False
+    else:
+        do_run_cmd = True
+
+    if not do_run_cmd:
+        # ensure we provide `get_src_root()` and `get_tools_path()` to sys.path
+        # allowing imported modules to correctly import `CIME` module or any
+        # tool under `CIME/Tools`.
+        fixup_sys_path()
+
+        try:
+            mod = import_from_file(subname, cmd)
+            logger.info("   Calling {}".format(cmd))
+            # Careful: logfile code is not thread safe!
+            if logfile:
+                with open(logfile, "w") as log_fd:
+                    with redirect_logger(log_fd, subname):
+                        with redirect_stdout_stderr(log_fd):
+                            getattr(mod, subname)(*subargs)
+            else:
+                getattr(mod, subname)(*subargs)
+
+        except (SyntaxError, AttributeError) as _:
+            pass  # Need to try to run as shell command
+
+        except Exception:
+            if logfile:
+                with open(logfile, "a") as log_fd:
+                    log_fd.write(str(sys.exc_info()[1]))
+
+                expect(False, "{} FAILED, cat {}".format(cmd, logfile))
+            else:
+                raise
+
+        else:
+            return  # Running as python function worked, we're done
+
+    logger.info("   Running {} ".format(cmd))
+    if case is not None:
+        case.flush()
+
+    fullcmd = cmd
+    if isinstance(cmdargs, list):
+        for arg in cmdargs:
+            fullcmd += " " + str(arg)
+    else:
+        fullcmd += " " + cmdargs
+
+    if logfile:
+        fullcmd += " >& {} ".format(logfile)
+
+    stat, output, _ = run_cmd(
+        "{}".format(fullcmd), combine_output=True, from_dir=from_dir, timeout=timeout
+    )
+    if output:  # Will be empty if logfile
+        logger.info(output)
+
+    if stat != 0:
+        if logfile:
+            expect(False, "{} FAILED, cat {}".format(fullcmd, logfile))
+        else:
+            expect(False, "{} FAILED, see above".format(fullcmd))
+
+    # refresh case xml object from file
+    if case is not None:
+        case.read_xml()
+ + + +
+[docs]
+def run_cmd(
+    cmd,
+    input_str=None,
+    from_dir=None,
+    verbose=None,
+    arg_stdout=_hack,
+    arg_stderr=_hack,
+    env=None,
+    combine_output=False,
+    timeout=None,
+    executable=None,
+    shell=True,
+):
+    """
+    Wrapper around subprocess to make it much more convenient to run shell commands
+
+    >>> run_cmd('ls file_i_hope_doesnt_exist')[0] != 0
+    True
+    """
+    import subprocess  # Not safe to do globally, module not available in older pythons
+
+    # Real defaults for these values should be subprocess.PIPE
+    if arg_stdout is _hack:
+        arg_stdout = subprocess.PIPE
+    elif isinstance(arg_stdout, str):
+        arg_stdout = _convert_to_fd(arg_stdout, from_dir)
+
+    if arg_stderr is _hack:
+        arg_stderr = subprocess.STDOUT if combine_output else subprocess.PIPE
+    elif isinstance(arg_stderr, str):
+        arg_stderr = _convert_to_fd(arg_stderr, from_dir)
+
+    if verbose != False and (verbose or logger.isEnabledFor(logging.DEBUG)):
+        logger.info(
+            "RUN: {}\nFROM: {}".format(
+                cmd, os.getcwd() if from_dir is None else from_dir
+            )
+        )
+
+    if input_str is not None:
+        stdin = subprocess.PIPE
+    else:
+        stdin = None
+
+    if not shell:
+        cmd = shlex.split(cmd)
+
+    # ensure we have an environment to use if not being overwritten by parent
+    if env is None:
+        # persist current environment
+        env = os.environ.copy()
+
+    # Always provide these variables for anything called externally.
+    # `CIMEROOT` is provided for external scripts, makefiles, etc that
+    # may reference it. `PYTHONPATH` is provided to ensure external
+    # python can correctly import the CIME module and anything under
+    # `CIME/tools`.
+    #
+    # `get_tools_path()` is provided for backwards compatibility.
+    # External python prior to the CIME module move would use `CIMEROOT`
+    # or build a relative path and append `sys.path` to import
+    # `standard_script_setup`. Providing `PYTHONPATH` fixes potential
+    # broken paths in external python.
+ env.update( + { + "CIMEROOT": f"{get_cime_root()}", + "PYTHONPATH": f"{get_cime_root()}:{get_tools_path()}", + } + ) + + if timeout: + with Timeout(timeout): + proc = subprocess.Popen( + cmd, + shell=shell, + stdout=arg_stdout, + stderr=arg_stderr, + stdin=stdin, + cwd=from_dir, + executable=executable, + env=env, + ) + + output, errput = proc.communicate(input_str) + else: + proc = subprocess.Popen( + cmd, + shell=shell, + stdout=arg_stdout, + stderr=arg_stderr, + stdin=stdin, + cwd=from_dir, + executable=executable, + env=env, + ) + + output, errput = proc.communicate(input_str) + + # In Python3, subprocess.communicate returns bytes. We want to work with strings + # as much as possible, so we convert bytes to string (which is unicode in py3) via + # decode. For python2, we do NOT want to do this since decode will yield unicode + # strings which are not necessarily compatible with the system's default base str type. + if output is not None: + try: + output = output.decode("utf-8", errors="ignore") + except AttributeError: + pass + if errput is not None: + try: + errput = errput.decode("utf-8", errors="ignore") + except AttributeError: + pass + + # Always strip outputs + if output: + output = output.strip() + if errput: + errput = errput.strip() + + stat = proc.wait() + if isinstance(arg_stdout, io.IOBase): + arg_stdout.close() # pylint: disable=no-member + if isinstance(arg_stderr, io.IOBase) and arg_stderr is not arg_stdout: + arg_stderr.close() # pylint: disable=no-member + + if verbose != False and (verbose or logger.isEnabledFor(logging.DEBUG)): + if stat != 0: + logger.info(" stat: {:d}\n".format(stat)) + if output: + logger.info(" output: {}\n".format(output)) + if errput: + logger.info(" errput: {}\n".format(errput)) + + return stat, output, errput
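Stripped of logging, timeouts, and file-descriptor juggling, `run_cmd` reduces to a Popen/communicate/decode/strip cycle. A minimal standalone sketch of that core contract (POSIX shell assumed; the helper name `sh` is illustrative):

```python
import subprocess


def sh(cmd, input_bytes=None, from_dir=None):
    """Return (status, stdout, stderr) as stripped strings; never raises
    on a nonzero exit, mirroring run_cmd's contract."""
    proc = subprocess.Popen(
        cmd,
        shell=True,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        stdin=subprocess.PIPE if input_bytes is not None else None,
        cwd=from_dir,
    )
    out, err = proc.communicate(input_bytes)
    return (
        proc.wait(),
        out.decode("utf-8", errors="ignore").strip(),
        err.decode("utf-8", errors="ignore").strip(),
    )


stat, out, err = sh("echo hello")
```

Returning the status instead of raising is the design choice that lets callers such as `run_cmd_no_fail` decide their own failure policy on top.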
+ + + +
+[docs] +def run_cmd_no_fail( + cmd, + input_str=None, + from_dir=None, + verbose=None, + arg_stdout=_hack, + arg_stderr=_hack, + env=None, + combine_output=False, + timeout=None, + executable=None, +): + """ + Wrapper around subprocess to make it much more convenient to run shell commands. + Expects command to work. Just returns output string. + + >>> run_cmd_no_fail('echo foo') == 'foo' + True + >>> run_cmd_no_fail('echo THE ERROR >&2; false') # doctest:+ELLIPSIS +IGNORE_EXCEPTION_DETAIL + Traceback (most recent call last): + ... + CIMEError: ERROR: Command: 'echo THE ERROR >&2; false' failed with error ... + + >>> run_cmd_no_fail('grep foo', input_str=b'foo') == 'foo' + True + >>> run_cmd_no_fail('echo THE ERROR >&2', combine_output=True) == 'THE ERROR' + True + """ + stat, output, errput = run_cmd( + cmd, + input_str, + from_dir, + verbose, + arg_stdout, + arg_stderr, + env, + combine_output, + executable=executable, + timeout=timeout, + ) + if stat != 0: + # If command produced no errput, put output in the exception since we + # have nothing else to go on. + errput = output if not errput else errput + if errput is None: + if combine_output: + if isinstance(arg_stdout, str): + errput = "See {}".format(_get_path(arg_stdout, from_dir)) + else: + errput = "" + elif isinstance(arg_stderr, str): + errput = "See {}".format(_get_path(arg_stderr, from_dir)) + else: + errput = "" + + expect( + False, + "Command: '{}' failed with error '{}' from dir '{}'".format( + cmd, errput, os.getcwd() if from_dir is None else from_dir + ), + ) + + return output
+ + + +
+[docs] +def normalize_case_id(case_id): + """ + Given a case_id, return it in form TESTCASE.GRID.COMPSET.PLATFORM + + >>> normalize_case_id('ERT.ne16_g37.B1850C5.sandiatoss3_intel') + 'ERT.ne16_g37.B1850C5.sandiatoss3_intel' + >>> normalize_case_id('ERT.ne16_g37.B1850C5.sandiatoss3_intel.test-mod') + 'ERT.ne16_g37.B1850C5.sandiatoss3_intel.test-mod' + >>> normalize_case_id('ERT.ne16_g37.B1850C5.sandiatoss3_intel.G.20151121') + 'ERT.ne16_g37.B1850C5.sandiatoss3_intel' + >>> normalize_case_id('ERT.ne16_g37.B1850C5.sandiatoss3_intel.test-mod.G.20151121') + 'ERT.ne16_g37.B1850C5.sandiatoss3_intel.test-mod' + """ + sep_count = case_id.count(".") + expect( + sep_count >= 3 and sep_count <= 6, + "Case '{}' needs to be in form: TESTCASE.GRID.COMPSET.PLATFORM[.TESTMOD] or TESTCASE.GRID.COMPSET.PLATFORM[.TESTMOD].GC.TESTID".format( + case_id + ), + ) + if sep_count in [5, 6]: + return ".".join(case_id.split(".")[:-2]) + else: + return case_id
+ + + +
+[docs] +def parse_test_name(test_name): + """ + Given a CIME test name TESTCASE[_CASEOPTS].GRID.COMPSET[.MACHINE_COMPILER[.TESTMODS]], + return each component of the testname with machine and compiler split. + Do not error if a partial testname is provided (TESTCASE or TESTCASE.GRID) instead + parse and return the partial results. + + TESTMODS use hyphens in a special way: + - A single hyphen stands for a path separator (for example, 'test-mods' resolves to + the path 'test/mods') + - A double hyphen separates multiple test mods (for example, 'test-mods--other-dir-path' + indicates two test mods: 'test/mods' and 'other/dir/path') + + If there are one or more TESTMODS, then the testmods component of the result will be a + list, where each element of the list is one testmod, and hyphens have been replaced by + slashes. (If there are no TESTMODS in this test, then the TESTMODS component of the + result is None, as for other optional components.) + + >>> parse_test_name('ERS') + ['ERS', None, None, None, None, None, None] + >>> parse_test_name('ERS.fe12_123') + ['ERS', None, 'fe12_123', None, None, None, None] + >>> parse_test_name('ERS.fe12_123.JGF') + ['ERS', None, 'fe12_123', 'JGF', None, None, None] + >>> parse_test_name('ERS_D.fe12_123.JGF') + ['ERS', ['D'], 'fe12_123', 'JGF', None, None, None] + >>> parse_test_name('ERS_D_P1.fe12_123.JGF') + ['ERS', ['D', 'P1'], 'fe12_123', 'JGF', None, None, None] + >>> parse_test_name('ERS_D_G2.fe12_123.JGF') + ['ERS', ['D', 'G2'], 'fe12_123', 'JGF', None, None, None] + >>> parse_test_name('SMS_D_Ln9_Mmpi-serial.f19_g16_rx1.A') + ['SMS', ['D', 'Ln9', 'Mmpi-serial'], 'f19_g16_rx1', 'A', None, None, None] + >>> parse_test_name('ERS.fe12_123.JGF.machine_compiler') + ['ERS', None, 'fe12_123', 'JGF', 'machine', 'compiler', None] + >>> parse_test_name('ERS.fe12_123.JGF.machine_compiler.test-mods') + ['ERS', None, 'fe12_123', 'JGF', 'machine', 'compiler', ['test/mods']] + >>> parse_test_name('ERS.fe12_123.JGF.*_compiler.test-mods') + 
['ERS', None, 'fe12_123', 'JGF', None, 'compiler', ['test/mods']] + >>> parse_test_name('ERS.fe12_123.JGF.machine_*.test-mods') + ['ERS', None, 'fe12_123', 'JGF', 'machine', None, ['test/mods']] + >>> parse_test_name('ERS.fe12_123.JGF.*_*.test-mods') + ['ERS', None, 'fe12_123', 'JGF', None, None, ['test/mods']] + >>> parse_test_name('ERS.fe12_123.JGF.machine_compiler.test-mods--other-dir-path--and-one-more') + ['ERS', None, 'fe12_123', 'JGF', 'machine', 'compiler', ['test/mods', 'other/dir/path', 'and/one/more']] + >>> parse_test_name('SMS.f19_g16.2000_DATM%QI.A_XLND_SICE_SOCN_XROF_XGLC_SWAV.mach-ine_compiler.test-mods') # doctest: +IGNORE_EXCEPTION_DETAIL + Traceback (most recent call last): + ... + CIMEError: ERROR: Expected 4th item of 'SMS.f19_g16.2000_DATM%QI.A_XLND_SICE_SOCN_XROF_XGLC_SWAV.mach-ine_compiler.test-mods' ('A_XLND_SICE_SOCN_XROF_XGLC_SWAV') to be in form machine_compiler + >>> parse_test_name('SMS.f19_g16.2000_DATM%QI/A_XLND_SICE_SOCN_XROF_XGLC_SWAV.mach-ine_compiler.test-mods') # doctest: +IGNORE_EXCEPTION_DETAIL + Traceback (most recent call last): + ... 
+ CIMEError: ERROR: Invalid compset name 2000_DATM%QI/A_XLND_SICE_SOCN_XROF_XGLC_SWAV + """ + rv = [None] * 7 + num_dots = test_name.count(".") + + rv[0 : num_dots + 1] = test_name.split(".") + testcase_field_underscores = rv[0].count("_") + rv.insert(1, None) # Make room for caseopts + rv.pop() + if testcase_field_underscores > 0: + full_str = rv[0] + rv[0] = full_str.split("_")[0] + rv[1] = full_str.split("_")[1:] + + if num_dots >= 3: + expect(check_name(rv[3]), "Invalid compset name {}".format(rv[3])) + + expect( + rv[4].count("_") == 1, + "Expected 4th item of '{}' ('{}') to be in form machine_compiler".format( + test_name, rv[4] + ), + ) + rv[4:5] = rv[4].split("_") + if rv[4] == "*": + rv[4] = None + if rv[5] == "*": + rv[5] = None + rv.pop() + + if rv[-1] is not None: + # The last element of the return value - testmods - will be a list of testmods, + # built by separating the TESTMODS component on strings of double hyphens + testmods = rv[-1].split("--") + rv[-1] = [one_testmod.replace("-", "/") for one_testmod in testmods] + + expect( + num_dots <= 4, + "'{}' does not look like a CIME test name, expect TESTCASE.GRID.COMPSET[.MACHINE_COMPILER[.TESTMODS]]".format( + test_name + ), + ) + + return rv
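The testmods hyphen convention documented above (a single `-` stands for a path separator, `--` separates multiple testmods) boils down to one split and one replace. A standalone sketch:

```python
def split_testmods(testmods_field):
    # '--' separates multiple testmods; each remaining '-' stands for '/'.
    return [mod.replace("-", "/") for mod in testmods_field.split("--")]
```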
+ + + +
+[docs] +def get_full_test_name( + partial_test, + caseopts=None, + grid=None, + compset=None, + machine=None, + compiler=None, + testmods_list=None, + testmods_string=None, +): + """ + Given a partial CIME test name, return in form TESTCASE.GRID.COMPSET.MACHINE_COMPILER[.TESTMODS] + Use the additional args to fill out the name if needed + + Testmods can be provided through one of two arguments, but *not* both: + - testmods_list: a list of one or more testmods (as would be returned by + parse_test_name, for example) + - testmods_string: a single string containing one or more testmods; if there is more + than one, then they should be separated by a string of two hyphens ('--') + + For both testmods_list and testmods_string, any slashes as path separators ('/') are + replaced by hyphens ('-'). + + >>> get_full_test_name("ERS", grid="ne16_fe16", compset="JGF", machine="melvin", compiler="gnu") + 'ERS.ne16_fe16.JGF.melvin_gnu' + >>> get_full_test_name("ERS", caseopts=["D", "P16"], grid="ne16_fe16", compset="JGF", machine="melvin", compiler="gnu") + 'ERS_D_P16.ne16_fe16.JGF.melvin_gnu' + >>> get_full_test_name("ERS.ne16_fe16", compset="JGF", machine="melvin", compiler="gnu") + 'ERS.ne16_fe16.JGF.melvin_gnu' + >>> get_full_test_name("ERS.ne16_fe16.JGF", machine="melvin", compiler="gnu") + 'ERS.ne16_fe16.JGF.melvin_gnu' + >>> get_full_test_name("ERS.ne16_fe16.JGF.melvin_gnu.mods", machine="melvin", compiler="gnu") + 'ERS.ne16_fe16.JGF.melvin_gnu.mods' + + testmods_list can be a single element: + >>> get_full_test_name("ERS.ne16_fe16.JGF", machine="melvin", compiler="gnu", testmods_list=["mods/test"]) + 'ERS.ne16_fe16.JGF.melvin_gnu.mods-test' + + testmods_list can also have multiple elements, separated either by slashes or hyphens: + >>> get_full_test_name("ERS.ne16_fe16.JGF", machine="melvin", compiler="gnu", testmods_list=["mods/test", "mods2/test2/subdir2", "mods3/test3/subdir3"]) + 'ERS.ne16_fe16.JGF.melvin_gnu.mods-test--mods2-test2-subdir2--mods3-test3-subdir3' + 
>>> get_full_test_name("ERS.ne16_fe16.JGF", machine="melvin", compiler="gnu", testmods_list=["mods-test", "mods2-test2-subdir2", "mods3-test3-subdir3"]) + 'ERS.ne16_fe16.JGF.melvin_gnu.mods-test--mods2-test2-subdir2--mods3-test3-subdir3' + + The above testmods_list tests should also work with equivalent testmods_string arguments: + >>> get_full_test_name("ERS.ne16_fe16.JGF", machine="melvin", compiler="gnu", testmods_string="mods/test") + 'ERS.ne16_fe16.JGF.melvin_gnu.mods-test' + >>> get_full_test_name("ERS.ne16_fe16.JGF", machine="melvin", compiler="gnu", testmods_string="mods/test--mods2/test2/subdir2--mods3/test3/subdir3") + 'ERS.ne16_fe16.JGF.melvin_gnu.mods-test--mods2-test2-subdir2--mods3-test3-subdir3' + >>> get_full_test_name("ERS.ne16_fe16.JGF", machine="melvin", compiler="gnu", testmods_string="mods-test--mods2-test2-subdir2--mods3-test3-subdir3") + 'ERS.ne16_fe16.JGF.melvin_gnu.mods-test--mods2-test2-subdir2--mods3-test3-subdir3' + + The following tests the consistency check between the test name and various optional arguments: + >>> get_full_test_name("ERS.ne16_fe16.JGF.melvin_gnu.mods-test--mods2-test2-subdir2--mods3-test3-subdir3", machine="melvin", compiler="gnu", testmods_list=["mods/test", "mods2/test2/subdir2", "mods3/test3/subdir3"]) + 'ERS.ne16_fe16.JGF.melvin_gnu.mods-test--mods2-test2-subdir2--mods3-test3-subdir3' + """ + ( + partial_testcase, + partial_caseopts, + partial_grid, + partial_compset, + partial_machine, + partial_compiler, + partial_testmods, + ) = parse_test_name(partial_test) + + required_fields = [ + (partial_grid, grid, "grid"), + (partial_compset, compset, "compset"), + (partial_machine, machine, "machine"), + (partial_compiler, compiler, "compiler"), + ] + + result = partial_test + for partial_val, arg_val, name in required_fields: + if partial_val is None: + # Add to result based on args + expect( + arg_val is not None, + "Could not fill-out test name, partial string '{}' had no {} information and you did not provide 
any".format( + partial_test, name + ), + ) + if name == "machine" and "*_" in result: + result = result.replace("*_", arg_val + "_") + elif name == "compiler" and "_*" in result: + result = result.replace("_*", "_" + arg_val) + else: + result = "{}{}{}".format( + result, "_" if name == "compiler" else ".", arg_val + ) + elif arg_val is not None and partial_val != partial_compiler: + expect( + arg_val == partial_val, + "Mismatch in field {}, partial string '{}' indicated it should be '{}' but you provided '{}'".format( + name, partial_test, partial_val, arg_val + ), + ) + + if testmods_string is not None: + expect( + testmods_list is None, + "Cannot provide both testmods_list and testmods_string", + ) + # Convert testmods_string to testmods_list; after this point, the code will work + # the same regardless of whether testmods_string or testmods_list was provided. + testmods_list = testmods_string.split("--") + if partial_testmods is None: + if testmods_list is None: + # No testmods for this test and that's OK + pass + else: + testmods_hyphenated = [ + one_testmod.replace("/", "-") for one_testmod in testmods_list + ] + result += ".{}".format("--".join(testmods_hyphenated)) + elif testmods_list is not None: + expect( + testmods_list == partial_testmods, + "Mismatch in field testmods, partial string '{}' indicated it should be '{}' but you provided '{}'".format( + partial_test, partial_testmods, testmods_list + ), + ) + + if partial_caseopts is None: + if caseopts is None: + # No casemods for this test and that's OK + pass + else: + result = result.replace( + partial_testcase, + "{}_{}".format(partial_testcase, "_".join(caseopts)), + 1, + ) + elif caseopts is not None: + expect( + caseopts == partial_caseopts, + "Mismatch in field caseopts, partial string '{}' indicated it should be '{}' but you provided '{}'".format( + partial_test, partial_caseopts, caseopts + ), + ) + + return result
+ + + +
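The testmods normalization described above (slashes become hyphens, multiple testmods are joined with '--') is simple enough to sketch standalone; `build_testmods_suffix` below is a hypothetical helper, not part of CIME:

```python
def build_testmods_suffix(testmods_list=None, testmods_string=None):
    """Build the '.TESTMODS' suffix the way get_full_test_name does:
    slashes become hyphens, multiple testmods are joined with '--'."""
    if testmods_string is not None:
        if testmods_list is not None:
            raise ValueError("Cannot provide both testmods_list and testmods_string")
        testmods_list = testmods_string.split("--")
    if testmods_list is None:
        return ""
    return "." + "--".join(mod.replace("/", "-") for mod in testmods_list)

print(build_testmods_suffix(testmods_list=["mods/test", "mods2/test2/subdir2"]))
# .mods-test--mods2-test2-subdir2
```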
+[docs] +def get_current_branch(repo=None): + """ + Return the name of the current branch for a repository + + >>> if "GIT_BRANCH" in os.environ: + ... get_current_branch() is not None + ... else: + ... os.environ["GIT_BRANCH"] = "foo" + ... get_current_branch() == "foo" + True + """ + if "GIT_BRANCH" in os.environ: + # This approach works better for Jenkins jobs because the Jenkins + # git plugin does not use local tracking branches, it just checks out + # to a commit + branch = os.environ["GIT_BRANCH"] + if branch.startswith("origin/"): + branch = branch.replace("origin/", "", 1) + return branch + else: + stat, output, _ = run_cmd("git symbolic-ref HEAD", from_dir=repo) + if stat != 0: + return None + else: + return output.replace("refs/heads/", "")
+ + + +
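The Jenkins branch handling is easy to isolate; `branch_from_env` below is a hypothetical helper that applies the same single `origin/` stripping to a plain dict instead of `os.environ`:

```python
def branch_from_env(environ):
    """Mimic get_current_branch's Jenkins path: read GIT_BRANCH and
    strip exactly one leading 'origin/' prefix."""
    branch = environ.get("GIT_BRANCH")
    if branch is not None and branch.startswith("origin/"):
        branch = branch.replace("origin/", "", 1)
    return branch

print(branch_from_env({"GIT_BRANCH": "origin/feature/x"}))  # feature/x
```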
+[docs] +def get_current_commit(short=False, repo=None, tag=False): + """ + Return the sha1 of the current HEAD commit + + >>> get_current_commit() is not None + True + """ + if tag: + rc, output, _ = run_cmd( + "git describe --tags $(git log -n1 --pretty='%h')", from_dir=repo + ) + else: + rc, output, _ = run_cmd( + "git rev-parse {} HEAD".format("--short" if short else ""), from_dir=repo + ) + + return output if rc == 0 else "unknown"
+ + + +
+[docs] +def get_model_config_location_within_cime(model=None): + model = get_model() if model is None else model + return os.path.join("config", model)
+ + + +
+[docs] +def get_scripts_root(): + """ + Get absolute path to scripts + + >>> os.path.isdir(get_scripts_root()) + True + """ + return os.path.join(get_cime_root(), "scripts")
+ + + +
+[docs] +def get_model_config_root(model=None): + """ + Get absolute path to model config area + + >>> os.environ["CIME_MODEL"] = "e3sm" # Set up the test so it doesn't depend on external resources + >>> os.path.isdir(get_model_config_root()) + True + """ + model = get_model() if model is None else model + return os.path.join( + get_cime_root(), "CIME", "data", get_model_config_location_within_cime(model) + )
+ + + +
+[docs] +def stop_buffering_output(): + """ + All stdout, stderr will not be buffered after this is called. + """ + os.environ["PYTHONUNBUFFERED"] = "1"
+ + + +
+[docs] +def start_buffering_output(): + """ + All stdout, stderr will be buffered after this is called. This is python's + default behavior. + """ + sys.stdout.flush() + sys.stdout = os.fdopen(sys.stdout.fileno(), "w")
+ + + +
+[docs] +def match_any(item, re_counts): + """ + Return true if item matches any regex in re_counts' keys. Increments + count if a match was found. + """ + for regex_str in re_counts: + regex = re.compile(regex_str) + if regex.match(item): + re_counts[regex_str] += 1 + return True + + return False
+ + + +
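Since `match_any` only needs the standard `re` module, it can be exercised directly; this is a behavior-equivalent standalone copy:

```python
import re

def match_any(item, re_counts):
    """Return True on the first regex key that matches item,
    incrementing that key's count; False if nothing matches."""
    for regex_str in re_counts:
        if re.compile(regex_str).match(item):
            re_counts[regex_str] += 1
            return True
    return False

counts = {r"e3sm\.log\..*": 0, r"cpl\.log\..*": 0}
match_any("e3sm.log.20160905_111212", counts)
print(counts[r"e3sm\.log\..*"])  # 1
```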
+[docs] +def get_current_submodule_status(recursive=False, repo=None): + """ + Return the sha1s of the currently checked out commit for each submodule, + along with the submodule path and the output of git describe for the SHA-1. + + >>> get_current_submodule_status() is not None + True + """ + rc, output, _ = run_cmd( + "git submodule status {}".format("--recursive" if recursive else ""), + from_dir=repo, + ) + + return output if rc == 0 else "unknown"
+ + + +
+[docs] +def copy_globs(globs_to_copy, output_directory, lid=None): + """ + Takes a list of globs and copies all files to `output_directory`. + + Hidden files become unhidden, i.e. the leading dot is removed. + + The output filename is derived from the basename of the input path and may + be appended with the `lid`. + + """ + for glob_to_copy in globs_to_copy: + for item in glob.glob(glob_to_copy): + item_basename = os.path.basename(item).lstrip(".") + + if lid is None: + filename = item_basename + else: + filename = f"{item_basename}.{lid}" + + safe_copy( + item, os.path.join(output_directory, filename), preserve_meta=False + )
+ + + +
+[docs] +def safe_copy(src_path, tgt_path, preserve_meta=True): + """ + A flexible and safe copy routine. Will try to copy file and metadata, but this + can fail if the current user doesn't own the tgt file. A fallback data-only copy is + attempted in this case. Works even if overwriting a read-only file. + + tgt_path can be a directory, src_path must be a file + + Most of the complexity here is handling the case where the tgt_path file already + exists. This problem does not exist for the tree operations so we don't need to wrap those. + + preserve_meta toggles if file meta-data, like permissions, should be preserved. If you are + copying baseline files, you should be within a SharedArea context manager and preserve_meta + should be false so that the umask set up by SharedArea can take effect regardless of the + permissions of the src files. + """ + + tgt_path = ( + os.path.join(tgt_path, os.path.basename(src_path)) + if os.path.isdir(tgt_path) + else tgt_path + ) + + # Handle pre-existing file + if os.path.isfile(tgt_path): + st = os.stat(tgt_path) + owner_uid = st.st_uid + + # Handle read-only files if possible + if not os.access(tgt_path, os.W_OK): + if owner_uid == os.getuid(): + # I am the owner, make writeable + os.chmod(tgt_path, st.st_mode | statlib.S_IWRITE) + else: + # I won't be able to copy this file + raise OSError( + "Cannot copy over file {}, it is readonly and you are not the owner".format( + tgt_path + ) + ) + + if owner_uid == os.getuid(): + # I am the owner, copy file contents, permissions, and metadata + file_util.copy_file( + src_path, + tgt_path, + preserve_mode=preserve_meta, + preserve_times=preserve_meta, + ) + else: + # I am not the owner, just copy file contents + shutil.copyfile(src_path, tgt_path) + + else: + # We are making a new file, copy file contents, permissions, and metadata. + # This can fail if the underlying directory is not writable by current user. 
+ file_util.copy_file( + src_path, + tgt_path, + preserve_mode=preserve_meta, + preserve_times=preserve_meta, + ) + + # If src file was executable, then the tgt file should be too + st = os.stat(tgt_path) + if os.access(src_path, os.X_OK) and st.st_uid == os.getuid(): + os.chmod( + tgt_path, st.st_mode | statlib.S_IXUSR | statlib.S_IXGRP | statlib.S_IXOTH + )
+ + + +
+[docs] +def safe_recursive_copy(src_dir, tgt_dir, file_map): + """ + Copies a set of files from one dir to another. Works even if overwriting a + read-only file. Files can be relative paths and the relative path will be + matched on the tgt side. + """ + for src_file, tgt_file in file_map: + full_tgt = os.path.join(tgt_dir, tgt_file) + full_src = ( + src_file if os.path.isabs(src_file) else os.path.join(src_dir, src_file) + ) + expect( + os.path.isfile(full_src), + "Source dir '{}' missing file '{}'".format(src_dir, src_file), + ) + safe_copy(full_src, full_tgt)
+ + + + + + + +
+[docs] +def find_proc_id(proc_name=None, children_only=False, of_parent=None): + """ + Children implies recursive. + """ + expect( + proc_name is not None or children_only, + "Must provide proc_name if not searching for children", + ) + expect( + not (of_parent is not None and not children_only), + "of_parent only used with children_only", + ) + + parent = of_parent if of_parent is not None else os.getpid() + + pgrep_cmd = "pgrep {} {}".format( + proc_name if proc_name is not None else "", + "-P {:d}".format(parent if children_only else ""), + ) + stat, output, errput = run_cmd(pgrep_cmd) + expect(stat in [0, 1], "pgrep failed with error: '{}'".format(errput)) + + rv = set([int(item.strip()) for item in output.splitlines()]) + if children_only: + pgrep_cmd = "pgrep -P {}".format(parent) + stat, output, errput = run_cmd(pgrep_cmd) + expect(stat in [0, 1], "pgrep failed with error: '{}'".format(errput)) + + for child in output.splitlines(): + rv = rv.union( + set(find_proc_id(proc_name, children_only, int(child.strip()))) + ) + + return list(rv)
+ + + +
+[docs] +def get_timestamp(timestamp_format="%Y%m%d_%H%M%S", utc_time=False): + """ + Get a string representing the current time (local by default, UTC if + utc_time is True) in format: YYYYMMDD_HHMMSS + + The format can be changed if needed. + """ + if utc_time: + time_tuple = time.gmtime() + else: + time_tuple = time.localtime() + return time.strftime(timestamp_format, time_tuple)
+ + + +
+[docs] +def get_project(machobj=None): + """ + Hierarchy for choosing PROJECT: + 0. Command line flag to create_newcase or create_test + 1. Environment variable PROJECT + 2. Environment variable ACCOUNT (this is for backward compatibility) + 3. File $HOME/.cime/config (this is new) + 4. File $HOME/.cesm_proj (this is for backward compatibility) + 5. config_machines.xml (if machobj provided) + """ + project = os.environ.get("PROJECT") + if project is not None: + logger.info("Using project from env PROJECT: " + project) + return project + project = os.environ.get("ACCOUNT") + if project is not None: + logger.info("Using project from env ACCOUNT: " + project) + return project + + cime_config = get_cime_config() + if cime_config.has_option("main", "PROJECT"): + project = cime_config.get("main", "PROJECT") + if project is not None: + logger.info("Using project from .cime/config: " + project) + return project + + projectfile = os.path.abspath(os.path.join(os.path.expanduser("~"), ".cesm_proj")) + if os.path.isfile(projectfile): + with open(projectfile, "r") as myfile: + for line in myfile: + project = line.rstrip() + if not project.startswith("#"): + break + if project is not None: + logger.info("Using project from .cesm_proj: " + project) + cime_config.set("main", "PROJECT", project) + return project + + if machobj is not None: + project = machobj.get_value("PROJECT") + if project is not None: + logger.info("Using project from config_machines.xml: " + project) + return project + + logger.info("No project info available") + return None
+ + + +
+[docs] +def get_charge_account(machobj=None, project=None): + """ + Hierarchy for choosing CHARGE_ACCOUNT: + 1. Environment variable CHARGE_ACCOUNT + 2. File $HOME/.cime/config + 3. config_machines.xml (if machobj provided) + 4. default to same value as PROJECT + + >>> import CIME + >>> import CIME.XML.machines + >>> machobj = CIME.XML.machines.Machines(machine="theta") + >>> project = get_project(machobj) + >>> charge_account = get_charge_account(machobj, project) + >>> project == charge_account + True + >>> os.environ["CHARGE_ACCOUNT"] = "ChargeAccount" + >>> get_charge_account(machobj, project) + 'ChargeAccount' + >>> del os.environ["CHARGE_ACCOUNT"] + """ + charge_account = os.environ.get("CHARGE_ACCOUNT") + if charge_account is not None: + logger.info("Using charge_account from env CHARGE_ACCOUNT: " + charge_account) + return charge_account + + cime_config = get_cime_config() + if cime_config.has_option("main", "CHARGE_ACCOUNT"): + charge_account = cime_config.get("main", "CHARGE_ACCOUNT") + if charge_account is not None: + logger.info("Using charge_account from .cime/config: " + charge_account) + return charge_account + + if machobj is not None: + charge_account = machobj.get_value("CHARGE_ACCOUNT") + if charge_account is not None: + logger.info( + "Using charge_account from config_machines.xml: " + charge_account + ) + return charge_account + + logger.info("No charge_account info available, using value from PROJECT") + return project
+ + + +
+[docs] +def find_files(rootdir, pattern): + """ + recursively find all files matching a pattern + """ + result = [] + for root, _, files in os.walk(rootdir): + for filename in files: + if fnmatch.fnmatch(filename, pattern): + result.append(os.path.join(root, filename)) + + return result
+ + + +
+[docs] +def setup_standard_logging_options(parser): + group = parser.add_argument_group("Logging options") + + helpfile = os.path.join(os.getcwd(), os.path.basename("{}.log".format(sys.argv[0]))) + + group.add_argument( + "-d", + "--debug", + action="store_true", + help="Print debug information (very verbose) to file {}".format(helpfile), + ) + + group.add_argument( + "-v", + "--verbose", + action="store_true", + help="Add additional context (time and file) to log messages", + ) + + group.add_argument( + "-s", + "--silent", + action="store_true", + help="Print only warnings and error messages", + )
+ + + +class _LessThanFilter(logging.Filter): + def __init__(self, exclusive_maximum, name=""): + super(_LessThanFilter, self).__init__(name) + self.max_level = exclusive_maximum + + def filter(self, record): + # non-zero return means we log this message + return 1 if record.levelno < self.max_level else 0 + + +
+[docs] +def configure_logging(verbose, debug, silent): + root_logger = logging.getLogger() + + verbose_formatter = logging.Formatter( + fmt="%(asctime)s %(name)-12s %(levelname)-8s %(message)s", datefmt="%m-%d %H:%M" + ) + + # Change info to go to stdout. This handler applies to INFO exclusively + stdout_stream_handler = logging.StreamHandler(stream=sys.stdout) + stdout_stream_handler.setLevel(logging.INFO) + stdout_stream_handler.addFilter(_LessThanFilter(logging.WARNING)) + + # Change warnings and above to go to stderr + stderr_stream_handler = logging.StreamHandler(stream=sys.stderr) + stderr_stream_handler.setLevel(logging.WARNING) + + # --verbose adds to the message format but does not impact the log level + if verbose: + stdout_stream_handler.setFormatter(verbose_formatter) + stderr_stream_handler.setFormatter(verbose_formatter) + + root_logger.addHandler(stdout_stream_handler) + root_logger.addHandler(stderr_stream_handler) + + if debug: + # Set up log file to catch ALL logging records + log_file = "{}.log".format(os.path.basename(sys.argv[0])) + + debug_log_handler = logging.FileHandler(log_file, mode="w") + debug_log_handler.setFormatter(verbose_formatter) + debug_log_handler.setLevel(logging.DEBUG) + root_logger.addHandler(debug_log_handler) + + root_logger.setLevel(logging.DEBUG) + elif silent: + root_logger.setLevel(logging.WARN) + else: + root_logger.setLevel(logging.INFO)
+ + + +
+[docs] +def parse_args_and_handle_standard_logging_options(args, parser=None): + """ + Guide to logging in CIME. + + logger.debug -> Verbose/detailed output, use for debugging, off by default. Goes to a .log file + logger.info -> Goes to stdout (and log if --debug). Use for normal program output + logger.warning -> Goes to stderr (and log if --debug). Use for minor problems + logger.error -> Goes to stderr (and log if --debug) + """ + # scripts_regression_tests is the only thing that should pass a None argument in parser + if parser is not None: + if "--help" not in args[1:]: + _check_for_invalid_args(args[1:]) + args = parser.parse_args(args[1:]) + + configure_logging(args.verbose, args.debug, args.silent) + + return args
+ + + +
+[docs] +def get_logging_options(): + """ + Use to pass same logging options as was used for current + executable to subprocesses. + """ + root_logger = logging.getLogger() + + if root_logger.level == logging.DEBUG: + return "--debug" + elif root_logger.level == logging.WARN: + return "--silent" + else: + return ""
+ + + +
+[docs] +def convert_to_type(value, type_str, vid=""): + """ + Convert value from string to another type. + vid is only for generating better error messages. + """ + if value is not None: + + if type_str == "char": + pass + + elif type_str == "integer": + try: + value = int(eval(value)) + except Exception: + expect( + False, + "Entry {} was listed as type int but value '{}' is not valid int".format( + vid, value + ), + ) + + elif type_str == "logical": + expect( + value.upper() in ["TRUE", "FALSE"], + "Entry {} was listed as type logical but had val '{}' instead of TRUE or FALSE".format( + vid, value + ), + ) + value = value.upper() == "TRUE" + + elif type_str == "real": + try: + value = float(value) + except Exception: + expect( + False, + "Entry {} was listed as type real but value '{}' is not valid real".format( + vid, value + ), + ) + + else: + expect(False, "Unknown type '{}'".format(type_str)) + + return value
+ + + +
+[docs] +def convert_to_unknown_type(value): + """ + Convert value to its real type by probing conversions. + """ + if value is not None: + + # Attempt to convert to logical + if value.upper() in ["TRUE", "FALSE"]: + return value.upper() == "TRUE" + + # Attempt to convert to integer + try: + value = int(eval(value)) + except Exception: + pass + else: + return value + + # Attempt to convert to float + try: + value = float(value) + except Exception: + pass + else: + return value + + # Just treat as string + + return value
+ + + +
+[docs] +def convert_to_string(value, type_str=None, vid=""): + """ + Convert value back to string. + vid is only for generating better error messages. + >>> convert_to_string(6, type_str="integer") == '6' + True + >>> convert_to_string('6', type_str="integer") == '6' + True + >>> convert_to_string('6.0', type_str="real") == '6.0' + True + >>> convert_to_string(6.01, type_str="real") == '6.01' + True + """ + if value is not None and not isinstance(value, str): + if type_str == "char": + expect( + isinstance(value, str), + "Wrong type for entry id '{}'".format(vid), + ) + elif type_str == "integer": + expect( + isinstance(value, int), + "Wrong type for entry id '{}'".format(vid), + ) + value = str(value) + elif type_str == "logical": + expect(type(value) is bool, "Wrong type for entry id '{}'".format(vid)) + value = "TRUE" if value else "FALSE" + elif type_str == "real": + expect(type(value) is float, "Wrong type for entry id '{}'".format(vid)) + value = str(value) + else: + expect(False, "Unknown type '{}'".format(type_str)) + if value is None: + value = "" + logger.debug("Attempt to convert None value for vid {} {}".format(vid, value)) + + return value
+ + + +
+[docs] +def convert_to_seconds(time_str): + """ + Convert time value in [[HH:]MM:]SS to seconds + + We assume that XX:YY is likely to be HH:MM, not MM:SS + + >>> convert_to_seconds("42") + 42 + >>> convert_to_seconds("01:01:01") + 3661 + >>> convert_to_seconds("01:01") + 3660 + """ + components = time_str.split(":") + expect(len(components) < 4, "Unusual time string: '{}'".format(time_str)) + + components.reverse() + result = 0 + starting_exp = 1 if len(components) == 2 else 0 + for idx, component in enumerate(components): + result += int(component) * pow(60, idx + starting_exp) + + return result
+ + + +
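`convert_to_seconds` depends only on the standard library (here with `expect` swapped for a plain `assert`), so the docstring examples can be checked directly:

```python
def convert_to_seconds(time_str):
    """[[HH:]MM:]SS -> seconds; a two-field 'XX:YY' is read as HH:MM."""
    components = time_str.split(":")
    assert len(components) < 4, "Unusual time string: '{}'".format(time_str)
    components.reverse()
    result = 0
    # With two fields, start at 60^1 so they count as minutes and hours
    starting_exp = 1 if len(components) == 2 else 0
    for idx, component in enumerate(components):
        result += int(component) * pow(60, idx + starting_exp)
    return result

print(convert_to_seconds("01:01"))  # 3660
```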
+[docs] +def convert_to_babylonian_time(seconds): + """ + Convert a time value in seconds to HH:MM:SS + + >>> convert_to_babylonian_time(3661) + '01:01:01' + >>> convert_to_babylonian_time(360000) + '100:00:00' + """ + hours = int(seconds / 3600) + seconds %= 3600 + minutes = int(seconds / 60) + seconds %= 60 + + return "{:02d}:{:02d}:{:02d}".format(hours, minutes, seconds)
+ + + +
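The inverse conversion can be restated with `divmod`; this sketch is behavior-equivalent to `convert_to_babylonian_time` above (hours are not wrapped at 24):

```python
def to_babylonian(seconds):
    """Seconds -> 'HH:MM:SS', mirroring convert_to_babylonian_time."""
    hours, seconds = divmod(seconds, 3600)
    minutes, seconds = divmod(seconds, 60)
    return "{:02d}:{:02d}:{:02d}".format(hours, minutes, seconds)

print(to_babylonian(360000))  # 100:00:00
```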
+[docs] +def get_time_in_seconds(timeval, unit): + """ + Convert a time from 'unit' to seconds + """ + if "nyear" in unit: + dmult = 365 * 24 * 3600 + elif "nmonth" in unit: + dmult = 30 * 24 * 3600 + elif "nday" in unit: + dmult = 24 * 3600 + elif "nhour" in unit: + dmult = 3600 + elif "nminute" in unit: + dmult = 60 + else: + dmult = 1 + + return dmult * timeval
+ + + +
+[docs] +def compute_total_time(job_cost_map, proc_pool): + """ + Given a map: jobname -> (procs, est-time), return a total time + estimate for a given processor pool size + + >>> job_cost_map = {"A" : (4, 3000), "B" : (2, 1000), "C" : (8, 2000), "D" : (1, 800)} + >>> compute_total_time(job_cost_map, 8) + 5160 + >>> compute_total_time(job_cost_map, 12) + 3180 + >>> compute_total_time(job_cost_map, 16) + 3060 + """ + current_time = 0 + waiting_jobs = dict(job_cost_map) + running_jobs = {} # name -> (procs, est-time, start-time) + while len(waiting_jobs) > 0 or len(running_jobs) > 0: + launched_jobs = [] + for jobname, data in waiting_jobs.items(): + procs_for_job, time_for_job = data + if procs_for_job <= proc_pool: + proc_pool -= procs_for_job + launched_jobs.append(jobname) + running_jobs[jobname] = (procs_for_job, time_for_job, current_time) + + for launched_job in launched_jobs: + del waiting_jobs[launched_job] + + completed_jobs = [] + for jobname, data in running_jobs.items(): + procs_for_job, time_for_job, time_started = data + if (current_time - time_started) >= time_for_job: + proc_pool += procs_for_job + completed_jobs.append(jobname) + + for completed_job in completed_jobs: + del running_jobs[completed_job] + + current_time += 60 # minute time step + + return current_time
+ + + +
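The estimate above is a greedy simulation that advances in one-minute steps: every step, launch any waiting job that fits in the pool, then retire jobs whose estimated time has elapsed. A standalone copy (renamed, but following the same logic) reproduces the docstring values:

```python
def estimate_total_time(job_cost_map, proc_pool):
    """Greedy simulation of compute_total_time: launch waiting jobs that
    fit, retire jobs whose estimated time has elapsed, step time by 60 s."""
    current_time = 0
    waiting = dict(job_cost_map)
    running = {}  # name -> (procs, est_time, start_time)
    while waiting or running:
        launched = []
        for name, (procs, est) in waiting.items():
            if procs <= proc_pool:
                proc_pool -= procs
                launched.append(name)
                running[name] = (procs, est, current_time)
        for name in launched:
            del waiting[name]
        done = [n for n, (p, t, s) in running.items() if current_time - s >= t]
        for name in done:
            proc_pool += running.pop(name)[0]
        current_time += 60
    return current_time

jobs = {"A": (4, 3000), "B": (2, 1000), "C": (8, 2000), "D": (1, 800)}
print(estimate_total_time(jobs, 8))  # 5160
```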
+[docs] +def format_time(time_format, input_format, input_time): + """ + Converts the string input_time from input_format to time_format + Valid format specifiers are "%H", "%M", and "%S" + % signs must be followed by an H, M, or S and then a separator + Separators can be any string without digits or a % sign + Each specifier can occur more than once in the input_format, + but only the first occurrence will be used. + An example of a valid format: "%H:%M:%S" + Unlike strptime, this does support %H >= 24 + + >>> format_time("%H:%M:%S", "%H", "43") + '43:00:00' + >>> format_time("%H %M", "%M,%S", "59,59") + '0 59' + >>> format_time("%H, %S", "%H:%M:%S", "2:43:9") + '2, 09' + """ + input_fields = input_format.split("%") + expect( + input_fields[0] == input_time[: len(input_fields[0])], + "Failed to parse the input time '{}'; does not match the header string '{}'".format( + input_time, input_format + ), + ) + input_time = input_time[len(input_fields[0]) :] + timespec = {"H": None, "M": None, "S": None} + maxvals = {"M": 60, "S": 60} + DIGIT_CHECK = re.compile("[^0-9]*") + # Loop invariants given input follows the specs: + # field starts with H, M, or S + # input_time starts with a number corresponding with the start of field + for field in input_fields[1:]: + # Find all of the digits at the start of the string + spec = field[0] + value_re = re.match(r"\d*", input_time) + expect( + value_re is not None, + "Failed to parse the input time for the '{}' specifier, expected an integer".format( + spec + ), + ) + value = value_re.group(0) + expect(spec in timespec, "Unknown time specifier '" + spec + "'") + # Don't do anything if the time field is already specified + if timespec[spec] is None: + # Verify we aren't exceeding the maximum value + if spec in maxvals: + expect( + int(value) < maxvals[spec], + "Failed to parse the '{}' specifier: A value less than {:d} is expected".format( + spec, maxvals[spec] + ), + ) + timespec[spec] = value + input_time = input_time[len(value) :] 
+ # Check for the separator string + expect( + len(re.match(DIGIT_CHECK, field).group(0)) == len(field), + "Numbers are not permissible in separator strings", + ) + expect( + input_time[: len(field) - 1] == field[1:], + "The separator string ({}) doesn't match '{}'".format( + field[1:], input_time + ), + ) + input_time = input_time[len(field) - 1 :] + output_fields = time_format.split("%") + output_time = output_fields[0] + # Used when a value isn't given + min_len_spec = {"H": 1, "M": 2, "S": 2} + # Loop invariants given input follows the specs: + # field starts with H, M, or S + # output_time + for field in output_fields[1:]: + expect( + field == output_fields[-1] or len(field) > 1, + "Separator strings are required to properly parse times", + ) + spec = field[0] + expect(spec in timespec, "Unknown time specifier '" + spec + "'") + if timespec[spec] is not None: + output_time += "0" * (min_len_spec[spec] - len(timespec[spec])) + output_time += timespec[spec] + else: + output_time += "0" * min_len_spec[spec] + output_time += field[1:] + return output_time
+ + + +
+[docs] +def append_status(msg, sfile, caseroot="."): + """ + Append msg to sfile in caseroot + """ + ctime = time.strftime("%Y-%m-%d %H:%M:%S: ") + + # Reduce empty lines in CaseStatus. It's a very concise file + # and does not need extra newlines for readability + line_ending = "\n" + + with open(os.path.join(caseroot, sfile), "a") as fd: + fd.write(ctime + msg + line_ending) + fd.write(" ---------------------------------------------------" + line_ending)
+ + + +
+[docs] +def append_testlog(msg, caseroot="."): + """ + Add to TestStatus.log file + """ + append_status(msg, "TestStatus.log", caseroot)
+ + + +
+[docs] +def append_case_status(phase, status, msg=None, caseroot="."): + """ + Update CaseStatus file + """ + append_status( + "{} {}{}".format(phase, status, " {}".format(msg if msg else "")), + "CaseStatus", + caseroot, + )
+ + + +
+[docs] +def does_file_have_string(filepath, text): + """ + Return True if the text string appears in the file at filepath + """ + return os.path.isfile(filepath) and text in open(filepath).read()
+ + + +
+[docs] +def is_last_process_complete(filepath, expect_text, fail_text): + """ + Search the filepath in reverse order looking for expect_text + before finding fail_text. This utility is used by archive_metadata. + + """ + complete = False + fh = open(filepath, "r") + fb = fh.readlines() + + rfb = "".join(reversed(fb)) + + findex = re.search(fail_text, rfb) + if findex is None: + findex = 0 + else: + findex = findex.start() + + eindex = re.search(expect_text, rfb) + if eindex is None: + eindex = 0 + else: + eindex = eindex.start() + + if findex > eindex: + complete = True + + return complete
+ + + +
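The reverse-search trick above can be isolated from file I/O; this sketch (hypothetical helper) applies the same first-match offset comparison to a list of lines:

```python
import re

def last_match_is_success(lines, expect_text, fail_text):
    """True if expect_text occurs after the last fail_text, mirroring
    is_last_process_complete: join the lines in reverse order and
    compare where each pattern first matches."""
    rfb = "".join(reversed(lines))
    fmatch = re.search(fail_text, rfb)
    findex = 0 if fmatch is None else fmatch.start()
    ematch = re.search(expect_text, rfb)
    eindex = 0 if ematch is None else ematch.start()
    return findex > eindex

log = ["run FAILED\n", "retrying\n", "run COMPLETED\n"]
print(last_match_is_success(log, "COMPLETED", "FAILED"))  # True
```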
+[docs] +def transform_vars(text, case=None, subgroup=None, overrides=None, default=None): + """ + Do the variable substitution for any variables that need transforms + recursively. + + >>> transform_vars("{{ cesm_stdout }}", default="cesm.stdout") + 'cesm.stdout' + >>> member_store = lambda : None + >>> member_store.foo = "hi" + >>> transform_vars("I say {{ foo }}", overrides={"foo":"hi"}) + 'I say hi' + """ + directive_re = re.compile(r"{{ (\w+) }}", flags=re.M) + # loop through directive text, replacing each string enclosed with + # template characters with the necessary values. + while directive_re.search(text): + m = directive_re.search(text) + variable = m.groups()[0] + whole_match = m.group() + if ( + overrides is not None + and variable.lower() in overrides + and overrides[variable.lower()] is not None + ): + repl = overrides[variable.lower()] + logger.debug( + "from overrides: in {}, replacing {} with {}".format( + text, whole_match, str(repl) + ) + ) + text = text.replace(whole_match, str(repl)) + + elif ( + case is not None + and hasattr(case, variable.lower()) + and getattr(case, variable.lower()) is not None + ): + repl = getattr(case, variable.lower()) + logger.debug( + "from case members: in {}, replacing {} with {}".format( + text, whole_match, str(repl) + ) + ) + text = text.replace(whole_match, str(repl)) + + elif ( + case is not None + and case.get_value(variable.upper(), subgroup=subgroup) is not None + ): + repl = case.get_value(variable.upper(), subgroup=subgroup) + logger.debug( + "from case: in {}, replacing {} with {}".format( + text, whole_match, str(repl) + ) + ) + text = text.replace(whole_match, str(repl)) + + elif default is not None: + logger.debug( + "from default: in {}, replacing {} with {}".format( + text, whole_match, str(default) + ) + ) + text = text.replace(whole_match, default) + + else: + # If no queue exists, then the directive '-q' by itself will cause an error + if "-q {{ queue }}" in text: + text = "" + else: + 
logger.warning("Could not replace variable '{}'".format(variable)) + text = text.replace(whole_match, "") + + return text
+ + + +
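A reduced sketch of the substitution loop (overrides first, then a default, dropping the case-lookup and queue branches) shows the `{{ var }}` mechanics:

```python
import re

def substitute_vars(text, overrides=None, default=None):
    """Reduced sketch of transform_vars: resolve each '{{ name }}'
    directive from overrides, else from default, else drop it."""
    directive_re = re.compile(r"{{ (\w+) }}")
    while directive_re.search(text):
        m = directive_re.search(text)
        variable, whole_match = m.group(1), m.group(0)
        if overrides and overrides.get(variable.lower()) is not None:
            text = text.replace(whole_match, str(overrides[variable.lower()]))
        elif default is not None:
            text = text.replace(whole_match, default)
        else:
            text = text.replace(whole_match, "")
    return text

print(substitute_vars("I say {{ foo }}", overrides={"foo": "hi"}))  # I say hi
```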
+[docs] +def wait_for_unlocked(filepath): + locked = True + file_object = None + while locked: + try: + buffer_size = 8 + # Open the file in append mode with a small buffer to test whether it is locked + file_object = open(filepath, "a", buffer_size) + if file_object: + locked = False + except IOError: + locked = True + time.sleep(1) + finally: + if file_object: + file_object.close()
+ + + +
+[docs] +def gunzip_existing_file(filepath): + with gzip.open(filepath, "rb") as fd: + return fd.read()
+ + + +
+[docs] +def gzip_existing_file(filepath): + """ + Gzips an existing file, removes the unzipped version, returns path to zip file. + Note that the timestamp of the original file will be maintained in + the zipped file. + + >>> import tempfile + >>> fd, filename = tempfile.mkstemp(text=True) + >>> _ = os.write(fd, b"Hello World") + >>> os.close(fd) + >>> gzfile = gzip_existing_file(filename) + >>> gunzip_existing_file(gzfile) == b'Hello World' + True + >>> os.remove(gzfile) + """ + expect(os.path.exists(filepath), "{} does not exist".format(filepath)) + + st = os.stat(filepath) + orig_atime, orig_mtime = st[statlib.ST_ATIME], st[statlib.ST_MTIME] + + gzpath = "{}.gz".format(filepath) + with open(filepath, "rb") as f_in: + with gzip.open(gzpath, "wb") as f_out: + shutil.copyfileobj(f_in, f_out) + + os.remove(filepath) + + os.utime(gzpath, (orig_atime, orig_mtime)) + + return gzpath
+ + + +
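The compress-then-restore-timestamps sequence uses only the standard library, so the round trip can be exercised directly on a temporary file (same steps as `gzip_existing_file`):

```python
import gzip
import os
import shutil
import stat as statlib
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"Hello World")
os.close(fd)
st = os.stat(path)
orig_times = (st[statlib.ST_ATIME], st[statlib.ST_MTIME])

# Compress, drop the original, and restore the original file times
gzpath = path + ".gz"
with open(path, "rb") as f_in, gzip.open(gzpath, "wb") as f_out:
    shutil.copyfileobj(f_in, f_out)
os.remove(path)
os.utime(gzpath, orig_times)

with gzip.open(gzpath, "rb") as f:
    data = f.read()
print(data)  # b'Hello World'
os.remove(gzpath)
```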
+[docs] +def touch(fname): + if os.path.exists(fname): + os.utime(fname, None) + else: + open(fname, "a").close()
+ + + +
+[docs] +def find_system_test(testname, case): + """ + Find and import the test matching testname + Look through the paths set in config_files.xml variable SYSTEM_TESTS_DIR + for components used in this case to find a test matching testname. Add the + path to that directory to sys.path if it's not there and return the test object + Fail if the test is not found in any of the paths. + """ + from importlib import import_module + + system_test_path = None + if testname.startswith("TEST"): + system_test_path = "CIME.SystemTests.system_tests_common.{}".format(testname) + else: + components = ["any"] + components.extend(case.get_compset_components()) + fdir = [] + for component in components: + tdir = case.get_value( + "SYSTEM_TESTS_DIR", attribute={"component": component} + ) + if tdir is not None: + tdir = os.path.abspath(tdir) + system_test_file = os.path.join(tdir, "{}.py".format(testname.lower())) + if os.path.isfile(system_test_file): + fdir.append(tdir) + logger.debug("found " + system_test_file) + if component == "any": + system_test_path = "CIME.SystemTests.{}.{}".format( + testname.lower(), testname + ) + else: + system_test_dir = os.path.dirname(system_test_file) + if system_test_dir not in sys.path: + sys.path.append(system_test_dir) + system_test_path = "{}.{}".format(testname.lower(), testname) + expect(len(fdir) > 0, "Test {} not found, aborting".format(testname)) + expect( + len(fdir) == 1, + "Test {} found in multiple locations {}, aborting".format(testname, fdir), + ) + expect(system_test_path is not None, "No test {} found".format(testname)) + + path, m = system_test_path.rsplit(".", 1) + mod = import_module(path) + return getattr(mod, m)
+ + + +def _get_most_recent_lid_impl(files): + """ + >>> files = ['/foo/bar/e3sm.log.20160905_111212', '/foo/bar/e3sm.log.20160906_111212.gz'] + >>> _get_most_recent_lid_impl(files) + ['20160905_111212', '20160906_111212'] + >>> files = ['/foo/bar/e3sm.log.20160905_111212', '/foo/bar/e3sm.log.20160905_111212.gz'] + >>> _get_most_recent_lid_impl(files) + ['20160905_111212'] + """ + results = [] + for item in files: + basename = os.path.basename(item) + components = basename.split(".") + if len(components) > 2: + results.append(components[2]) + else: + logger.warning( + "Apparent model log file '{}' did not conform to expected name format".format( + item + ) + ) + + return sorted(list(set(results))) + + +
+[docs] +def ls_sorted_by_mtime(path): + """return list of paths sorted by modification time, oldest first""" + mtime = lambda f: os.stat(os.path.join(path, f)).st_mtime + return list(sorted(os.listdir(path), key=mtime))
+ + + +
+[docs] +def get_lids(case): + model = case.get_value("MODEL") + rundir = case.get_value("RUNDIR") + return _get_most_recent_lid_impl(glob.glob("{}/{}.log*".format(rundir, model)))
+ + + +
+[docs] +def new_lid(case=None): + lid = time.strftime("%y%m%d-%H%M%S") + jobid = batch_jobid(case=case) + if jobid is not None: + lid = jobid + "." + lid + os.environ["LID"] = lid + return lid
+ + + +
+[docs] +def batch_jobid(case=None): + jobid = os.environ.get("PBS_JOBID") + if jobid is None: + jobid = os.environ.get("SLURM_JOB_ID") + if jobid is None: + jobid = os.environ.get("LSB_JOBID") + if jobid is None: + jobid = os.environ.get("COBALT_JOBID") + if case: + jobid = case.get_job_id(jobid) + return jobid
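The batch_jobid helper above probes a fixed priority list of scheduler environment variables (PBS, Slurm, LSF, Cobalt) and returns the first one that is set. The lookup pattern can be sketched on its own; the helper name first_env is invented for illustration:

```python
import os

def first_env(*names):
    # Return the value of the first environment variable that is set, else None.
    for name in names:
        value = os.environ.get(name)
        if value is not None:
            return value
    return None

# Simulate running under Slurm: SLURM_JOB_ID is set, PBS_JOBID is not.
os.environ["SLURM_JOB_ID"] = "12345"
os.environ.pop("PBS_JOBID", None)
assert first_env("PBS_JOBID", "SLURM_JOB_ID", "LSB_JOBID", "COBALT_JOBID") == "12345"
```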
+ + + +
+[docs] +def analyze_build_log(comp, log, compiler): + """ + Capture and report warning count, + capture and report errors and undefined references. + """ + warncnt = 0 + if "intel" in compiler: + warn_re = re.compile(r" warning #") + error_re = re.compile(r" error #") + undefined_re = re.compile(r" undefined reference to ") + elif "gnu" in compiler or "nag" in compiler: + warn_re = re.compile(r"^Warning: ") + error_re = re.compile(r"^Error: ") + undefined_re = re.compile(r" undefined reference to ") + else: + # don't know enough about this compiler + return + + with open(log, "r") as fd: + for line in fd: + if re.search(warn_re, line): + warncnt += 1 + if re.search(error_re, line): + logger.warning(line) + if re.search(undefined_re, line): + logger.warning(line) + + if warncnt > 0: + logger.info( + "Component {} build complete with {} warnings".format(comp, warncnt) + )
+ + + +
+[docs] +def is_python_executable(filepath): + first_line = None + if os.path.isfile(filepath): + with open(filepath, "rt") as f: + try: + first_line = f.readline() + except Exception: + pass + + return ( + first_line is not None + and first_line.startswith("#!") + and "python" in first_line + )
+ + + +
+[docs] +def get_umask(): + current_umask = os.umask(0) + os.umask(current_umask) + + return current_umask
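get_umask above works around os.umask having no read-only query: it installs a throwaway mask of 0, captures the returned previous mask, and immediately restores it. A standalone sketch of the same trick:

```python
import os

def get_umask():
    # os.umask(new) installs `new` and returns the previous mask,
    # so we query by setting, then immediately restore the original.
    current_umask = os.umask(0)
    os.umask(current_umask)
    return current_umask

os.umask(0o027)              # establish a known mask for the demonstration
assert get_umask() == 0o027
assert get_umask() == 0o027  # querying twice leaves the mask untouched
```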
+ + + +
+[docs] +def stringify_bool(val): + val = False if val is None else val + expect(type(val) is bool, "Wrong type for val '{}'".format(repr(val))) + return "TRUE" if val else "FALSE"
+ + + +
+[docs] +def indent_string(the_string, indent_level): + """Indents the given string by a given number of spaces + + Args: + the_string: str + indent_level: int + + Returns a new string that is the same as the_string, except that + each line is indented by 'indent_level' spaces. + + In python3, this can be done with textwrap.indent. + """ + + lines = the_string.splitlines(True) + padding = " " * indent_level + lines_indented = [padding + line for line in lines] + return "".join(lines_indented)
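As the docstring notes, textwrap.indent can do this in Python 3; the one behavioral difference is that textwrap.indent skips whitespace-only lines unless given a predicate, while the function above pads every line. A sketch comparing the two (indent_string reproduced here so the block is self-contained):

```python
import textwrap

def indent_string(the_string, indent_level):
    # Pad every line, including blank ones, by indent_level spaces.
    lines = the_string.splitlines(True)
    padding = " " * indent_level
    return "".join(padding + line for line in lines)

sample = "first\nsecond\n"
# Passing a predicate that always returns True makes textwrap.indent
# match indent_string exactly, even on blank lines.
assert indent_string(sample, 2) == textwrap.indent(sample, "  ", lambda _: True)
```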
+ + + +
+[docs] +def verbatim_success_msg(return_val): + return return_val
+ + + +CASE_SUCCESS = "success" +CASE_FAILURE = "error" + + +
+[docs] +def run_and_log_case_status( + func, + phase, + caseroot=".", + custom_starting_msg_functor=None, + custom_success_msg_functor=None, + is_batch=False, +): + starting_msg = None + + if custom_starting_msg_functor is not None: + starting_msg = custom_starting_msg_functor() + + # Delay appending "starting" on "case.submit" phase when batch system is + # present since we don't have the jobid yet + if phase != "case.submit" or not is_batch: + append_case_status(phase, "starting", msg=starting_msg, caseroot=caseroot) + rv = None + try: + rv = func() + except BaseException: + custom_success_msg = ( + custom_success_msg_functor(rv) + if custom_success_msg_functor and rv is not None + else None + ) + if phase == "case.submit" and is_batch: + append_case_status( + phase, "starting", msg=custom_success_msg, caseroot=caseroot + ) + e = sys.exc_info()[1] + append_case_status( + phase, CASE_FAILURE, msg=("\n{}".format(e)), caseroot=caseroot + ) + raise + else: + custom_success_msg = ( + custom_success_msg_functor(rv) if custom_success_msg_functor else None + ) + if phase == "case.submit" and is_batch: + append_case_status( + phase, "starting", msg=custom_success_msg, caseroot=caseroot + ) + append_case_status( + phase, CASE_SUCCESS, msg=custom_success_msg, caseroot=caseroot + ) + + return rv
+ + + +def _check_for_invalid_args(args): + # Prevent circular import + from CIME.config import Config + + # TODO Is this really model specific + if Config.instance().check_invalid_args: + for arg in args: + # if arg contains a space then it was originally quoted and we can ignore it here. + if " " in arg or arg.startswith("--"): + continue + if arg.startswith("-") and len(arg) > 2: + sys.stderr.write( + 'WARNING: The {} argument is deprecated. Multi-character arguments should begin with "--" and single character with "-"\n Use --help for a complete list of available options\n'.format( + arg + ) + ) + + +
+[docs] +def add_mail_type_args(parser): + parser.add_argument("--mail-user", help="Email to be used for batch notification.") + + parser.add_argument( + "-M", + "--mail-type", + action="append", + help="When to send user email. Options are: never, all, begin, end, fail.\n" + "You can specify multiple types with either comma-separated args or multiple -M flags.", + )
+ + + +
+[docs] +def resolve_mail_type_args(args): + if args.mail_type is not None: + resolved_mail_types = [] + for mail_type in args.mail_type: + resolved_mail_types.extend(mail_type.split(",")) + + for mail_type in resolved_mail_types: + expect( + mail_type in ("never", "all", "begin", "end", "fail"), + "Unsupported mail-type '{}'".format(mail_type), + ) + + args.mail_type = resolved_mail_types
+ + + +
+[docs] +def copyifnewer(src, dest): + """if dest does not exist or is older than src copy src to dest""" + if not os.path.isfile(dest) or not filecmp.cmp(src, dest): + safe_copy(src, dest)
+ + + +
+[docs] +class SharedArea(object): + """ + Enable 0002 umask within this manager + """ + + def __init__(self, new_perms=0o002): + self._orig_umask = None + self._new_perms = new_perms + + def __enter__(self): + self._orig_umask = os.umask(self._new_perms) + + def __exit__(self, *_): + os.umask(self._orig_umask)
+ + + +
+[docs] +class Timeout(object): + """ + A context manager that implements a timeout. By default, it + will raise an exception, but a custom function call can be provided. + Providing None as seconds makes this class a no-op. + """ + + def __init__(self, seconds, action=None): + self._seconds = seconds + self._action = action if action is not None else self._handle_timeout + + def _handle_timeout(self, *_): + raise RuntimeError("Timeout expired") + + def __enter__(self): + if self._seconds is not None: + signal.signal(signal.SIGALRM, self._action) + signal.alarm(self._seconds) + + def __exit__(self, *_): + if self._seconds is not None: + signal.alarm(0)
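The Timeout manager above arms SIGALRM via signal.alarm on entry and cancels it on exit, which means it only works on Unix and only in the main thread. A minimal self-contained sketch of the same pattern, with the default handler raising so the timeout aborts the protected block:

```python
import signal
import time

class Timeout(object):
    """Minimal sketch of an alarm-based timeout context manager (Unix only)."""

    def __init__(self, seconds, action=None):
        self._seconds = seconds
        self._action = action if action is not None else self._handle_timeout

    def _handle_timeout(self, *_):
        raise RuntimeError("Timeout expired")

    def __enter__(self):
        if self._seconds is not None:
            signal.signal(signal.SIGALRM, self._action)
            signal.alarm(self._seconds)

    def __exit__(self, *_):
        if self._seconds is not None:
            signal.alarm(0)  # cancel any pending alarm

try:
    with Timeout(1):
        time.sleep(2)  # the alarm fires mid-sleep and raises
    timed_out = False
except RuntimeError:
    timed_out = True
assert timed_out

with Timeout(None):  # None disables the timeout entirely
    pass
```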
+ + + +
+[docs] +def filter_unicode(unistr): + """ + Sometimes unicode chars can cause problems + """ + return "".join([i if ord(i) < 128 else " " for i in unistr])
+ + + +
+[docs] +def run_bld_cmd_ensure_logging(cmd, arg_logger, from_dir=None, timeout=None): + arg_logger.info(cmd) + stat, output, errput = run_cmd(cmd, from_dir=from_dir, timeout=timeout) + arg_logger.info(output) + arg_logger.info(errput) + expect(stat == 0, filter_unicode(errput))
+ + + +
+[docs] +def get_batch_script_for_job(job): + return job if "st_archive" in job else "." + job
+ + + +
+[docs] +def string_in_list(_string, _list): + """Case insensitive search for string in list + returns the matching list value + >>> string_in_list("Brack",["bar", "bracK", "foo"]) + 'bracK' + >>> string_in_list("foo", ["FFO", "FOO", "foo2", "foo3"]) + 'FOO' + >>> string_in_list("foo", ["FFO", "foo2", "foo3"]) + """ + for x in _list: + if _string.lower() == x.lower(): + return x + return None
+ + + +
+[docs] +def model_log(model, arg_logger, msg, debug_others=True): + if get_model() == model: + arg_logger.info(msg) + elif debug_others: + arg_logger.debug(msg)
+ + + +
+[docs] +def get_htmlroot(machobj=None): + """Get location for test HTML output + + Hierarchy for choosing CIME_HTML_ROOT: + 0. Environment variable CIME_HTML_ROOT + 1. File $HOME/.cime/config + 2. config_machines.xml (if machobj provided) + """ + htmlroot = os.environ.get("CIME_HTML_ROOT") + if htmlroot is not None: + logger.info("Using htmlroot from env CIME_HTML_ROOT: {}".format(htmlroot)) + return htmlroot + + cime_config = get_cime_config() + if cime_config.has_option("main", "CIME_HTML_ROOT"): + htmlroot = cime_config.get("main", "CIME_HTML_ROOT") + if htmlroot is not None: + logger.info("Using htmlroot from .cime/config: {}".format(htmlroot)) + return htmlroot + + if machobj is not None: + htmlroot = machobj.get_value("CIME_HTML_ROOT") + if htmlroot is not None: + logger.info("Using htmlroot from config_machines.xml: {}".format(htmlroot)) + return htmlroot + + logger.info("No htmlroot info available") + return None
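Both get_htmlroot above and get_urlroot below implement the same "first non-None source wins" hierarchy: environment variable, then the $HOME/.cime/config file, then config_machines.xml. The lookup skeleton can be sketched generically; the labels and stand-in getters here are invented for illustration:

```python
def first_not_none(sources):
    # Try each (label, zero-argument getter) pair in priority order;
    # return the first non-None value along with where it came from.
    for label, getter in sources:
        value = getter()
        if value is not None:
            return label, value
    return None, None

label, value = first_not_none([
    ("env", lambda: None),            # e.g. os.environ.get("CIME_HTML_ROOT")
    ("config", lambda: "/www/html"),  # e.g. a ~/.cime/config entry
    ("machine", lambda: "/other"),    # e.g. a config_machines.xml value
])
assert (label, value) == ("config", "/www/html")
```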
+ + + +
+[docs] +def get_urlroot(machobj=None): + """Get URL to htmlroot + + Hierarchy for choosing CIME_URL_ROOT: + 0. Environment variable CIME_URL_ROOT + 1. File $HOME/.cime/config + 2. config_machines.xml (if machobj provided) + """ + urlroot = os.environ.get("CIME_URL_ROOT") + if urlroot is not None: + logger.info("Using urlroot from env CIME_URL_ROOT: {}".format(urlroot)) + return urlroot + + cime_config = get_cime_config() + if cime_config.has_option("main", "CIME_URL_ROOT"): + urlroot = cime_config.get("main", "CIME_URL_ROOT") + if urlroot is not None: + logger.info("Using urlroot from .cime/config: {}".format(urlroot)) + return urlroot + + if machobj is not None: + urlroot = machobj.get_value("CIME_URL_ROOT") + if urlroot is not None: + logger.info("Using urlroot from config_machines.xml: {}".format(urlroot)) + return urlroot + + logger.info("No urlroot info available") + return None
+ + + +
+[docs] +def clear_folder(_dir): + if os.path.exists(_dir): + for the_file in os.listdir(_dir): + file_path = os.path.join(_dir, the_file) + try: + if os.path.isfile(file_path): + os.unlink(file_path) + else: + clear_folder(file_path) + os.rmdir(file_path) + except Exception as e: + print(e)
+ + + +
+[docs] +def add_flag_to_cmd(flag, val): + """ + Given a flag and value for a shell command, return a string + + >>> add_flag_to_cmd("-f", "hi") + '-f hi' + >>> add_flag_to_cmd("--foo", 42) + '--foo 42' + >>> add_flag_to_cmd("--foo=", 42) + '--foo=42' + >>> add_flag_to_cmd("--foo:", 42) + '--foo:42' + >>> add_flag_to_cmd("--foo:", " hi ") + '--foo:hi' + """ + no_space_chars = "=:" + no_space = False + for item in no_space_chars: + if flag.endswith(item): + no_space = True + + separator = "" if no_space else " " + return "{}{}{}".format(flag, separator, str(val).strip())
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/CIME/wait_for_tests.html b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/wait_for_tests.html new file mode 100644 index 00000000000..6fc4084c73f --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/CIME/wait_for_tests.html @@ -0,0 +1,999 @@ + + + + + + CIME.wait_for_tests — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for CIME.wait_for_tests

+# pylint: disable=import-error
+import queue
+import os, time, threading, socket, signal, shutil, glob
+
+# pylint: disable=import-error
+from distutils.spawn import find_executable
+import logging
+import xml.etree.ElementTree as xmlet
+
+import CIME.utils
+from CIME.utils import expect, Timeout, run_cmd, run_cmd_no_fail, safe_copy, CIMEError
+from CIME.XML.machines import Machines
+from CIME.test_status import *
+from CIME.provenance import save_test_success
+from CIME.case.case import Case
+
+SIGNAL_RECEIVED = False
+E3SM_MAIN_CDASH = "E3SM"
+CDASH_DEFAULT_BUILD_GROUP = "ACME_Latest"
+SLEEP_INTERVAL_SEC = 0.1
+
+###############################################################################
+
+[docs] +def signal_handler(*_): + ############################################################################### + global SIGNAL_RECEIVED + SIGNAL_RECEIVED = True
+ + + +############################################################################### +
+[docs] +def set_up_signal_handlers(): + ############################################################################### + signal.signal(signal.SIGTERM, signal_handler) + signal.signal(signal.SIGINT, signal_handler)
+ + + +############################################################################### +
+[docs] +def get_test_time(test_path): + ############################################################################### + ts = TestStatus(test_dir=test_path) + comment = ts.get_comment(RUN_PHASE) + if comment is None or "time=" not in comment: + logging.warning("No run-phase time data found in {}".format(test_path)) + return 0 + else: + time_data = [token for token in comment.split() if token.startswith("time=")][0] + return int(time_data.split("=")[1])
+ + + +############################################################################### +
+[docs] +def get_test_phase(test_path, phase): + ############################################################################### + ts = TestStatus(test_dir=test_path) + return ts.get_status(phase)
+ + + +############################################################################### +
+[docs] +def get_nml_diff(test_path): + ############################################################################### + test_log = os.path.join(test_path, "TestStatus.log") + + diffs = "" + with open(test_log, "r") as fd: + started = False + for line in fd.readlines(): + if "NLCOMP" in line: + started = True + elif started: + if "------------" in line: + break + else: + diffs += line + + return diffs
+ + + +############################################################################### +
+[docs] +def get_test_output(test_path): + ############################################################################### + output_file = os.path.join(test_path, "TestStatus.log") + if os.path.exists(output_file): + return open(output_file, "r").read() + else: + logging.warning("File '{}' not found".format(output_file)) + return ""
+ + + +############################################################################### +
+[docs] +def create_cdash_xml_boiler( + phase, + cdash_build_name, + cdash_build_group, + utc_time, + current_time, + hostname, + git_commit, +): + ############################################################################### + site_elem = xmlet.Element("Site") + + if "JENKINS_START_TIME" in os.environ: + time_info_str = "Total testing time: {:d} seconds".format( + int(current_time) - int(os.environ["JENKINS_START_TIME"]) + ) + else: + time_info_str = "" + + site_elem.attrib["BuildName"] = cdash_build_name + site_elem.attrib["BuildStamp"] = "{}-{}".format(utc_time, cdash_build_group) + site_elem.attrib["Name"] = hostname + site_elem.attrib["OSName"] = "Linux" + site_elem.attrib["Hostname"] = hostname + site_elem.attrib["OSVersion"] = "Commit: {}{}".format(git_commit, time_info_str) + + phase_elem = xmlet.SubElement(site_elem, phase) + + xmlet.SubElement(phase_elem, "StartDateTime").text = time.ctime(current_time) + xmlet.SubElement( + phase_elem, "Start{}Time".format("Test" if phase == "Testing" else phase) + ).text = str(int(current_time)) + + return site_elem, phase_elem
+ + + +############################################################################### +
+[docs] +def create_cdash_config_xml( + results, + cdash_build_name, + cdash_build_group, + utc_time, + current_time, + hostname, + data_rel_path, + git_commit, +): + ############################################################################### + site_elem, config_elem = create_cdash_xml_boiler( + "Configure", + cdash_build_name, + cdash_build_group, + utc_time, + current_time, + hostname, + git_commit, + ) + + xmlet.SubElement(config_elem, "ConfigureCommand").text = "namelists" + + config_results = [] + for test_name in sorted(results): + test_path = results[test_name][0] + test_norm_path = ( + test_path if os.path.isdir(test_path) else os.path.dirname(test_path) + ) + nml_phase_result = get_test_phase(test_norm_path, NAMELIST_PHASE) + if nml_phase_result == TEST_FAIL_STATUS: + nml_diff = get_nml_diff(test_norm_path) + cdash_warning = "CMake Warning:\n\n{} NML DIFF:\n{}\n".format( + test_name, nml_diff + ) + config_results.append(cdash_warning) + + xmlet.SubElement(config_elem, "Log").text = "\n".join(config_results) + + xmlet.SubElement(config_elem, "ConfigureStatus").text = "0" + xmlet.SubElement(config_elem, "ElapsedMinutes").text = "0" # Skip for now + + etree = xmlet.ElementTree(site_elem) + etree.write(os.path.join(data_rel_path, "Configure.xml"))
+ + + +############################################################################### +
+[docs] +def create_cdash_build_xml( + results, + cdash_build_name, + cdash_build_group, + utc_time, + current_time, + hostname, + data_rel_path, + git_commit, +): + ############################################################################### + site_elem, build_elem = create_cdash_xml_boiler( + "Build", + cdash_build_name, + cdash_build_group, + utc_time, + current_time, + hostname, + git_commit, + ) + + xmlet.SubElement(build_elem, "ConfigureCommand").text = "case.build" + + build_results = [] + for test_name in sorted(results): + build_results.append(test_name) + + xmlet.SubElement(build_elem, "Log").text = "\n".join(build_results) + + for idx, test_name in enumerate(sorted(results)): + test_path, test_status, _ = results[test_name] + test_norm_path = ( + test_path if os.path.isdir(test_path) else os.path.dirname(test_path) + ) + if test_status == TEST_FAIL_STATUS and get_test_time(test_norm_path) == 0: + error_elem = xmlet.SubElement(build_elem, "Error") + xmlet.SubElement(error_elem, "Text").text = test_name + xmlet.SubElement(error_elem, "BuildLogLine").text = str(idx) + xmlet.SubElement(error_elem, "PreContext").text = test_name + xmlet.SubElement(error_elem, "PostContext").text = "" + xmlet.SubElement(error_elem, "RepeatCount").text = "0" + + xmlet.SubElement(build_elem, "ElapsedMinutes").text = "0" # Skip for now + + etree = xmlet.ElementTree(site_elem) + etree.write(os.path.join(data_rel_path, "Build.xml"))
+ + + +############################################################################### +
+[docs] +def create_cdash_test_xml( + results, + cdash_build_name, + cdash_build_group, + utc_time, + current_time, + hostname, + data_rel_path, + git_commit, +): + ############################################################################### + site_elem, testing_elem = create_cdash_xml_boiler( + "Testing", + cdash_build_name, + cdash_build_group, + utc_time, + current_time, + hostname, + git_commit, + ) + + test_list_elem = xmlet.SubElement(testing_elem, "TestList") + for test_name in sorted(results): + xmlet.SubElement(test_list_elem, "Test").text = test_name + + for test_name in sorted(results): + test_path, test_status, _ = results[test_name] + test_passed = test_status in [TEST_PASS_STATUS, NAMELIST_FAIL_STATUS] + test_norm_path = ( + test_path if os.path.isdir(test_path) else os.path.dirname(test_path) + ) + + full_test_elem = xmlet.SubElement(testing_elem, "Test") + if test_passed: + full_test_elem.attrib["Status"] = "passed" + elif test_status == TEST_PEND_STATUS: + full_test_elem.attrib["Status"] = "notrun" + else: + full_test_elem.attrib["Status"] = "failed" + + xmlet.SubElement(full_test_elem, "Name").text = test_name + + xmlet.SubElement(full_test_elem, "Path").text = test_norm_path + + xmlet.SubElement(full_test_elem, "FullName").text = test_name + + xmlet.SubElement(full_test_elem, "FullCommandLine") + # text ? 
+ + results_elem = xmlet.SubElement(full_test_elem, "Results") + + named_measurements = ( + ("text/string", "Exit Code", test_status), + ("text/string", "Exit Value", "0" if test_passed else "1"), + ("numeric_double", "Execution Time", str(get_test_time(test_norm_path))), + ( + "text/string", + "Completion Status", + "Not Completed" if test_status == TEST_PEND_STATUS else "Completed", + ), + ("text/string", "Command line", "create_test"), + ) + + for type_attr, name_attr, value in named_measurements: + named_measurement_elem = xmlet.SubElement(results_elem, "NamedMeasurement") + named_measurement_elem.attrib["type"] = type_attr + named_measurement_elem.attrib["name"] = name_attr + + xmlet.SubElement(named_measurement_elem, "Value").text = value + + measurement_elem = xmlet.SubElement(results_elem, "Measurement") + + value_elem = xmlet.SubElement(measurement_elem, "Value") + value_elem.text = "".join( + [item for item in get_test_output(test_norm_path) if ord(item) < 128] + ) + + xmlet.SubElement(testing_elem, "ElapsedMinutes").text = "0" # Skip for now + + etree = xmlet.ElementTree(site_elem) + + etree.write(os.path.join(data_rel_path, "Test.xml"))
+ + + +############################################################################### +
+[docs] +def create_cdash_xml_fakes( + results, cdash_build_name, cdash_build_group, utc_time, current_time, hostname +): + ############################################################################### + # We assume all cases were created from the same code repo + first_result_case = os.path.dirname(list(results.items())[0][1][0]) + try: + srcroot = run_cmd_no_fail( + "./xmlquery --value SRCROOT", from_dir=first_result_case + ) + except CIMEError: + # Use repo containing this script as last resort + srcroot = os.path.join(CIME.utils.get_cime_root(), "..") + + git_commit = CIME.utils.get_current_commit(repo=srcroot) + + data_rel_path = os.path.join("Testing", utc_time) + + create_cdash_config_xml( + results, + cdash_build_name, + cdash_build_group, + utc_time, + current_time, + hostname, + data_rel_path, + git_commit, + ) + + create_cdash_build_xml( + results, + cdash_build_name, + cdash_build_group, + utc_time, + current_time, + hostname, + data_rel_path, + git_commit, + ) + + create_cdash_test_xml( + results, + cdash_build_name, + cdash_build_group, + utc_time, + current_time, + hostname, + data_rel_path, + git_commit, + )
+ + + +############################################################################### +
+[docs] +def create_cdash_upload_xml( + results, cdash_build_name, cdash_build_group, utc_time, hostname, force_log_upload +): + ############################################################################### + + data_rel_path = os.path.join("Testing", utc_time) + + try: + log_dir = "{}_logs".format(cdash_build_name) + + need_to_upload = False + + for test_name, test_data in results.items(): + test_path, test_status, _ = test_data + + if test_status != TEST_PASS_STATUS or force_log_upload: + test_case_dir = os.path.dirname(test_path) + + case_dirs = [test_case_dir] + case_base = os.path.basename(test_case_dir) + test_case2_dir = os.path.join(test_case_dir, "case2", case_base) + if os.path.exists(test_case2_dir): + case_dirs.append(test_case2_dir) + + for case_dir in case_dirs: + for param in ["EXEROOT", "RUNDIR", "CASEDIR"]: + if param == "CASEDIR": + log_src_dir = case_dir + else: + # it's possible that tests that failed very badly/early, and fake cases for testing + # will not be able to support xmlquery + try: + log_src_dir = run_cmd_no_fail( + "./xmlquery {} --value".format(param), + from_dir=case_dir, + ) + except: + continue + + log_dst_dir = os.path.join( + log_dir, + "{}{}_{}_logs".format( + test_name, + "" if case_dir == test_case_dir else ".case2", + param, + ), + ) + os.makedirs(log_dst_dir) + for log_file in glob.glob(os.path.join(log_src_dir, "*log*")): + if os.path.isdir(log_file): + shutil.copytree( + log_file, + os.path.join( + log_dst_dir, os.path.basename(log_file) + ), + ) + else: + safe_copy(log_file, log_dst_dir) + for log_file in glob.glob( + os.path.join(log_src_dir, "*.cprnc.out*") + ): + safe_copy(log_file, log_dst_dir) + + need_to_upload = True + + if need_to_upload: + + tarball = "{}.tar.gz".format(log_dir) + if os.path.exists(tarball): + os.remove(tarball) + + run_cmd_no_fail( + "tar -cf - {} | gzip -c".format(log_dir), arg_stdout=tarball + ) + base64 = run_cmd_no_fail("base64 {}".format(tarball)) + + xml_text = r"""<?xml version="1.0" 
encoding="UTF-8"?> +<?xml-stylesheet type="text/xsl" href="Dart/Source/Server/XSL/Build.xsl <file:///Dart/Source/Server/XSL/Build.xsl> "?> +<Site BuildName="{}" BuildStamp="{}-{}" Name="{}" Generator="ctest3.0.0"> +<Upload> +<File filename="{}"> +<Content encoding="base64"> +{} +</Content> +</File> +</Upload> +</Site> +""".format( + cdash_build_name, + utc_time, + cdash_build_group, + hostname, + os.path.abspath(tarball), + base64, + ) + + with open(os.path.join(data_rel_path, "Upload.xml"), "w") as fd: + fd.write(xml_text) + + finally: + if os.path.isdir(log_dir): + shutil.rmtree(log_dir)
+ + + +############################################################################### +
+[docs] +def create_cdash_xml( + results, cdash_build_name, cdash_project, cdash_build_group, force_log_upload=False +): + ############################################################################### + + # + # Create dart config file + # + + current_time = time.time() + + utc_time_tuple = time.gmtime(current_time) + cdash_timestamp = time.strftime("%H:%M:%S", utc_time_tuple) + + hostname = Machines().get_machine_name() + if hostname is None: + hostname = socket.gethostname().split(".")[0] + logging.warning( + "Could not convert hostname '{}' into an E3SM machine name".format(hostname) + ) + + for drop_method in ["https", "http"]: + dart_config = """ +SourceDirectory: {0} +BuildDirectory: {0} + +# Site is something like machine.domain, i.e. pragmatic.crd +Site: {1} + +# Build name is osname-revision-compiler, i.e. Linux-2.4.2-2smp-c++ +BuildName: {2} + +# Submission information +IsCDash: TRUE +CDashVersion: +QueryCDashVersion: +DropSite: my.cdash.org +DropLocation: /submit.php?project={3} +DropSiteUser: +DropSitePassword: +DropSiteMode: +DropMethod: {6} +TriggerSite: +ScpCommand: {4} + +# Dashboard start time +NightlyStartTime: {5} UTC + +UseLaunchers: +CurlOptions: CURLOPT_SSL_VERIFYPEER_OFF;CURLOPT_SSL_VERIFYHOST_OFF +""".format( + os.getcwd(), + hostname, + cdash_build_name, + cdash_project, + find_executable("scp"), + cdash_timestamp, + drop_method, + ) + + with open("DartConfiguration.tcl", "w") as dart_fd: + dart_fd.write(dart_config) + + utc_time = time.strftime("%Y%m%d-%H%M", utc_time_tuple) + testing_dir = os.path.join("Testing", utc_time) + if os.path.isdir(testing_dir): + shutil.rmtree(testing_dir) + + os.makedirs(os.path.join("Testing", utc_time)) + + # Make tag file + with open("Testing/TAG", "w") as tag_fd: + tag_fd.write("{}\n{}\n".format(utc_time, cdash_build_group)) + + create_cdash_xml_fakes( + results, + cdash_build_name, + cdash_build_group, + utc_time, + current_time, + hostname, + ) + + create_cdash_upload_xml( + results, + cdash_build_name, 
+ cdash_build_group, + utc_time, + hostname, + force_log_upload, + ) + + stat, out, _ = run_cmd("ctest -VV -D NightlySubmit", combine_output=True) + if stat != 0: + logging.warning( + "ctest upload drop method {} FAILED:\n{}".format(drop_method, out) + ) + else: + logging.info("Upload SUCCESS:\n{}".format(out)) + return + + expect(False, "All cdash upload attempts failed")
+ + + +############################################################################### +
+[docs] +def wait_for_test( + test_path, + results, + wait, + check_throughput, + check_memory, + ignore_namelists, + ignore_memleak, + no_run, +): + ############################################################################### + if os.path.isdir(test_path): + test_status_filepath = os.path.join(test_path, TEST_STATUS_FILENAME) + else: + test_status_filepath = test_path + + logging.debug("Watching file: '{}'".format(test_status_filepath)) + test_log_path = os.path.join( + os.path.dirname(test_status_filepath), ".internal_test_status.log" + ) + + # We don't want to make it a requirement that wait_for_tests has write access + # to all case directories + try: + fd = open(test_log_path, "w") + fd.close() + except (IOError, OSError): + test_log_path = "/dev/null" + + prior_ts = None + with open(test_log_path, "w") as log_fd: + while True: + if os.path.exists(test_status_filepath): + ts = TestStatus(test_dir=os.path.dirname(test_status_filepath)) + test_name = ts.get_name() + test_status, test_phase = ts.get_overall_test_status( + wait_for_run=not no_run, # Important + no_run=no_run, + check_throughput=check_throughput, + check_memory=check_memory, + ignore_namelists=ignore_namelists, + ignore_memleak=ignore_memleak, + ) + + if prior_ts is not None and prior_ts != ts: + log_fd.write(ts.phase_statuses_dump()) + log_fd.write("OVERALL: {}\n\n".format(test_status)) + + prior_ts = ts + + if test_status == TEST_PEND_STATUS and (wait and not SIGNAL_RECEIVED): + time.sleep(SLEEP_INTERVAL_SEC) + logging.debug("Waiting for test to finish") + else: + results.put((test_name, test_path, test_status, test_phase)) + break + + else: + if wait and not SIGNAL_RECEIVED: + logging.debug( + "File '{}' does not yet exist".format(test_status_filepath) + ) + time.sleep(SLEEP_INTERVAL_SEC) + else: + test_name = os.path.abspath(test_status_filepath).split("/")[-2] + results.put( + ( + test_name, + test_path, + "File '{}' doesn't exist".format(test_status_filepath), + CREATE_NEWCASE_PHASE, + ) + 
) + break
+ + + +############################################################################### +
+[docs] +def wait_for_tests_impl( + test_paths, + no_wait=False, + check_throughput=False, + check_memory=False, + ignore_namelists=False, + ignore_memleak=False, + no_run=False, +): + ############################################################################### + results = queue.Queue() + + for test_path in test_paths: + t = threading.Thread( + target=wait_for_test, + args=( + test_path, + results, + not no_wait, + check_throughput, + check_memory, + ignore_namelists, + ignore_memleak, + no_run, + ), + ) + t.daemon = True + t.start() + + while threading.active_count() > 1: + time.sleep(1) + + test_results = {} + completed_test_paths = [] + while not results.empty(): + test_name, test_path, test_status, test_phase = results.get() + if test_name in test_results: + prior_path, prior_status, _ = test_results[test_name] + if test_status == prior_status: + logging.warning( + "Test name '{}' was found in both '{}' and '{}'".format( + test_name, test_path, prior_path + ) + ) + else: + raise CIMEError( + "Test name '{}' was found in both '{}' and '{}' with different results".format( + test_name, test_path, prior_path + ) + ) + + expect( + test_name is not None, + "Failed to get test name for test_path: {}".format(test_path), + ) + test_results[test_name] = (test_path, test_status, test_phase) + completed_test_paths.append(test_path) + + expect( + set(test_paths) == set(completed_test_paths), + "Missing results for test paths: {}".format( + set(test_paths) - set(completed_test_paths) + ), + ) + return test_results
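wait_for_tests_impl above starts one daemon watcher thread per test path, each pushing its result onto a shared queue.Queue, then polls threading.active_count() until only the main thread remains before draining the queue. A minimal sketch of that fan-out/collect pattern; joining each thread explicitly is a more robust alternative to the active_count polling:

```python
import queue
import threading

def worker(n, results):
    # Stand-in for wait_for_test: compute something and report via the queue.
    results.put(n * n)

results = queue.Queue()
threads = [threading.Thread(target=worker, args=(i, results)) for i in range(4)]
for t in threads:
    t.daemon = True
    t.start()
for t in threads:
    t.join()  # wait for each worker instead of polling threading.active_count()

collected = sorted(results.get() for _ in range(4))
assert collected == [0, 1, 4, 9]
```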
+ + + +############################################################################### +
+[docs] +def wait_for_tests( + test_paths, + no_wait=False, + check_throughput=False, + check_memory=False, + ignore_namelists=False, + ignore_memleak=False, + cdash_build_name=None, + cdash_project=E3SM_MAIN_CDASH, + cdash_build_group=CDASH_DEFAULT_BUILD_GROUP, + timeout=None, + force_log_upload=False, + no_run=False, + update_success=False, + expect_test_complete=True, +): + ############################################################################### + # Set up signal handling, we want to print results before the program + # is terminated + set_up_signal_handlers() + + with Timeout(timeout, action=signal_handler): + test_results = wait_for_tests_impl( + test_paths, + no_wait, + check_throughput, + check_memory, + ignore_namelists, + ignore_memleak, + no_run, + ) + + all_pass = True + env_loaded = False + for test_name, test_data in sorted(test_results.items()): + test_path, test_status, phase = test_data + case_dir = os.path.dirname(test_path) + + if test_status not in [ + TEST_PASS_STATUS, + TEST_PEND_STATUS, + NAMELIST_FAIL_STATUS, + ]: + # Report failed phases + logging.info("{} {} (phase {})".format(test_status, test_name, phase)) + all_pass = False + else: + # Be cautious about telling the user that the test passed since we might + # not know that the test passed yet. 
+ if test_status == TEST_PEND_STATUS: + if expect_test_complete: + logging.info( + "{} {} (phase {} unexpectedly left in PEND)".format( + TEST_PEND_STATUS, test_name, phase + ) + ) + all_pass = False + else: + logging.info( + "{} {} (phase {} has not yet completed)".format( + TEST_PEND_STATUS, test_name, phase + ) + ) + + elif test_status == NAMELIST_FAIL_STATUS: + logging.info( + "{} {} (but otherwise OK) {}".format( + NAMELIST_FAIL_STATUS, test_name, phase + ) + ) + all_pass = False + else: + expect( + test_status == TEST_PASS_STATUS, + "Expected pass if we made it here, instead: {}".format(test_status), + ) + logging.info("{} {} {}".format(test_status, test_name, phase)) + + logging.info(" Case dir: {}".format(case_dir)) + + if update_success or (cdash_build_name and not env_loaded): + try: + # This can fail if the case crashed before setup completed + with Case(case_dir, read_only=True) as case: + srcroot = case.get_value("SRCROOT") + baseline_root = case.get_value("BASELINE_ROOT") + # Submitting to cdash requires availability of cmake. We can't guarantee + # that without loading the env for a case + if cdash_build_name and not env_loaded: + case.load_env() + env_loaded = True + + if update_success: + save_test_success( + baseline_root, + srcroot, + test_name, + test_status in [TEST_PASS_STATUS, NAMELIST_FAIL_STATUS], + ) + + except CIMEError as e: + logging.warning( + "Failed to update success / load_env for Case {}: {}".format( + case_dir, e + ) + ) + + if cdash_build_name: + create_cdash_xml( + test_results, + cdash_build_name, + cdash_project, + cdash_build_group, + force_log_upload, + ) + + return all_pass
+ +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/Tools/generate_cylc_workflow.html b/branch/azamat/baselines/update-perf-info/html/_modules/Tools/generate_cylc_workflow.html new file mode 100644 index 00000000000..aed86d0e7a2 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/Tools/generate_cylc_workflow.html @@ -0,0 +1,349 @@ + + + + + + Tools.generate_cylc_workflow — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for Tools.generate_cylc_workflow

+#!/usr/bin/env python3
+
+"""
+Generates a cylc workflow file for the case.  See https://cylc.github.io for details about cylc
+"""
+import os
+import sys
+
+sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..")))
+
+from CIME.Tools.standard_script_setup import *
+
+from CIME.case import Case
+from CIME.utils import expect, transform_vars
+
+import argparse, re
+
+logger = logging.getLogger(__name__)
+
+###############################################################################
+
+[docs] +def parse_command_line(args, description): + ############################################################################### + parser = argparse.ArgumentParser( + description=description, formatter_class=argparse.RawTextHelpFormatter + ) + + CIME.utils.setup_standard_logging_options(parser) + + parser.add_argument( + "caseroot", + nargs="?", + default=os.getcwd(), + help="Case directory for which namelists are generated.\n" + "Default is current directory.", + ) + + parser.add_argument( + "--cycles", default=1, help="The number of cycles to run, default is RESUBMIT" + ) + + parser.add_argument( + "--ensemble", + default=1, + help="generate suite.rc for an ensemble of cases, the case name argument must end in an integer.\n" + "for example: ./generate_cylc_workflow.py --ensemble 4 \n" + "will generate a workflow file in the current case, if that case is named case.01," + "the workflow will include case.01, case.02, case.03 and case.04", + ) + + args = CIME.utils.parse_args_and_handle_standard_logging_options(args, parser) + + return args.caseroot, args.cycles, int(args.ensemble)
+ + + +
+[docs] +def cylc_get_ensemble_first_and_last(case, ensemble): + if ensemble == 1: + return 1, None + casename = case.get_value("CASE") + m = re.search(r"(.*[^\d])(\d+)$", casename) + minval = int(m.group(2)) + maxval = minval + ensemble - 1 + return minval, maxval
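The regex above peels the trailing integer off the case name to find the first ensemble member; the last member is then `first + ensemble - 1`. A minimal standalone sketch of that logic, with the `Case` object replaced by a plain case-name string:

```python
import re

def ensemble_range(casename, ensemble):
    # Same logic as cylc_get_ensemble_first_and_last: the integer
    # suffix of the case name is the first member number, and the
    # members are consecutive.
    if ensemble == 1:
        return 1, None
    m = re.search(r"(.*[^\d])(\d+)$", casename)
    minval = int(m.group(2))
    return minval, minval + ensemble - 1
```

So a four-member ensemble rooted at `case.01` spans members 1 through 4.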
+ + + +
+[docs] +def cylc_get_case_path_string(case, ensemble): + caseroot = case.get_value("CASEROOT") + casename = case.get_value("CASE") + if ensemble == 1: + return "{};".format(caseroot) + basepath = os.path.abspath(caseroot + "/..") + m = re.search(r"(.*[^\d])(\d+)$", casename) + + expect(m, "casename {} must end in an integer for ensemble method".format(casename)) + + return ( + '{basepath}/{basename}$(printf "%0{intlen}d"'.format( + basepath=basepath, basename=m.group(1), intlen=len(m.group(2)) + ) + + " ${CYLC_TASK_PARAM_member});" + )
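The `printf "%0<intlen>d"` template emitted above zero-pads the cylc member index to the width of the original case suffix, so `case.01` expands to `case.01`, `case.02`, and so on. A hypothetical pure-Python equivalent of that padding:

```python
import re

def member_name(casename, member):
    # Pad the member number to the width of the case name's integer
    # suffix, as the emitted printf "%0<intlen>d" template does at
    # cylc runtime via ${CYLC_TASK_PARAM_member}.
    m = re.search(r"(.*[^\d])(\d+)$", casename)
    width = len(m.group(2))
    return "{}{:0{}d}".format(m.group(1), member, width)
```

Keeping the padding width tied to the base case name is what lets all ensemble members sort and glob consistently on disk.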
+ + + +
+[docs] +def cylc_batch_job_template(job, jobname, case, ensemble): + + env_batch = case.get_env("batch") + batch_system_type = env_batch.get_batch_system_type() + batchsubmit = env_batch.get_value("batch_submit") + submit_args = env_batch.get_submit_args(case, job) + case_path_string = cylc_get_case_path_string(case, ensemble) + + return ( + """ + [[{jobname}<member>]] + script = cd {case_path_string} ./case.submit --job {job} + [[[job]]] + batch system = {batch_system_type} + batch submit command template = {batchsubmit} {submit_args} '%(job)s' + [[[directives]]] +""".format( + jobname=jobname, + job=job, + case_path_string=case_path_string, + batch_system_type=batch_system_type, + batchsubmit=batchsubmit, + submit_args=submit_args, + ) + + "{{ batchdirectives }}\n" + )
+ + + +
+[docs] +def cylc_script_job_template(job, case, ensemble): + case_path_string = cylc_get_case_path_string(case, ensemble) + return """ + [[{job}<member>]] + script = cd {case_path_string} ./case.submit --job {job} +""".format( + job=job, case_path_string=case_path_string + )
+ + + +############################################################################### +def _main_func(description): + ############################################################################### + caseroot, cycles, ensemble = parse_command_line(sys.argv, description) + + expect( + os.path.isfile(os.path.join(caseroot, "CaseStatus")), + "case.setup must be run prior to running {}".format(__file__), + ) + with Case(caseroot, read_only=True) as case: + if cycles == 1: + cycles = max(1, case.get_value("RESUBMIT")) + env_batch = case.get_env("batch") + env_workflow = case.get_env("workflow") + jobs = env_workflow.get_jobs() + casename = case.get_value("CASE") + input_template = os.path.join( + case.get_value("MACHDIR"), "cylc_suite.rc.template" + ) + + overrides = {"cycles": cycles, "casename": casename} + input_text = open(input_template).read() + + first, last = cylc_get_ensemble_first_and_last(case, ensemble) + if ensemble == 1: + overrides.update({"members": "{}".format(first)}) + overrides.update( + {"workflow_description": "case {}".format(case.get_value("CASE"))} + ) + else: + overrides.update({"members": "{}..{}".format(first, last)}) + firstcase = case.get_value("CASE") + intlen = len(str(last)) + lastcase = firstcase[:-intlen] + str(last) + overrides.update( + { + "workflow_description": "ensemble from {} to {}".format( + firstcase, lastcase + ) + } + ) + overrides.update( + {"case_path_string": cylc_get_case_path_string(case, ensemble)} + ) + + for job in jobs: + jobname = job + if job == "case.st_archive": + continue + if job == "case.run": + jobname = "run" + overrides.update(env_batch.get_job_overrides(job, case)) + overrides.update({"job_id": "run." 
+ casename}) + input_text = input_text + cylc_batch_job_template( + job, jobname, case, ensemble + ) + else: + depends_on = env_workflow.get_value("dependency", subgroup=job) + if depends_on.startswith("case."): + depends_on = depends_on[5:] + input_text = input_text.replace( + " => " + depends_on, " => " + depends_on + "<member> => " + job + ) + + overrides.update(env_batch.get_job_overrides(job, case)) + overrides.update({"job_id": job + "." + casename}) + if "total_tasks" in overrides and overrides["total_tasks"] > 1: + input_text = input_text + cylc_batch_job_template( + job, jobname, case, ensemble + ) + else: + input_text = input_text + cylc_script_job_template( + jobname, case, ensemble + ) + + overrides.update( + { + "batchdirectives": env_batch.get_batch_directives( + case, job, overrides=overrides, output_format="cylc" + ) + } + ) + # we need to re-transform for each job to get job size correctly + input_text = transform_vars( + input_text, case=case, subgroup=job, overrides=overrides + ) + + with open("suite.rc", "w") as f: + f.write(case.get_resolved_value(input_text)) + + +if __name__ == "__main__": + _main_func(__doc__) +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/Tools/standard_script_setup.html b/branch/azamat/baselines/update-perf-info/html/_modules/Tools/standard_script_setup.html new file mode 100644 index 00000000000..cf85c88bf73 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/Tools/standard_script_setup.html @@ -0,0 +1,169 @@ + + + + + + Tools.standard_script_setup — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for Tools.standard_script_setup

+"""
+Encapsulate the importing of python utils and logging setup, things
+that every script should do.
+"""
+# pylint: disable=unused-import
+
+import sys, os
+import __main__ as main
+
+
+
+[docs] +def check_minimum_python_version(major, minor): + """ + Check your python version. + + >>> check_minimum_python_version(sys.version_info[0], sys.version_info[1]) + >>> + """ + msg = ( + "Python " + + str(major) + + ", minor version " + + str(minor) + + " is required, you have " + + str(sys.version_info[0]) + + "." + + str(sys.version_info[1]) + ) + assert sys.version_info[0] > major or ( + sys.version_info[0] == major and sys.version_info[1] >= minor + ), msg
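The guard asserts on a simple two-part comparison: either the major version exceeds the minimum, or the major matches and the minor is at least the minimum. The predicate on its own, as a sketch:

```python
import sys

def meets_minimum(version_info, major, minor):
    # The same predicate check_minimum_python_version asserts on.
    return version_info[0] > major or (
        version_info[0] == major and version_info[1] >= minor
    )

# The running interpreter always satisfies its own version.
assert meets_minimum(sys.version_info, sys.version_info[0], sys.version_info[1])
```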
+ + + +check_minimum_python_version(3, 6) + +real_file_dir = os.path.dirname(os.path.realpath(__file__)) +cimeroot = os.path.abspath(os.path.join(real_file_dir, "..", "..")) +sys.path.insert(0, cimeroot) + +# Important: Allows external tools to link up with CIME +os.environ["CIMEROOT"] = cimeroot + +import CIME.utils + +CIME.utils.stop_buffering_output() +import logging, argparse +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/Tools/testreporter.html b/branch/azamat/baselines/update-perf-info/html/_modules/Tools/testreporter.html new file mode 100644 index 00000000000..b3982666495 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/Tools/testreporter.html @@ -0,0 +1,386 @@ + + + + + + Tools.testreporter — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+ +
+
+
+
+ +

Source code for Tools.testreporter

+#!/usr/bin/env python3
+
+"""
+Simple script to populate the CESM test database with test results.
+"""
+import os
+import sys
+
+sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..")))
+
+from CIME.Tools.standard_script_setup import *
+
+from CIME.XML.env_build import EnvBuild
+from CIME.XML.env_case import EnvCase
+from CIME.XML.env_test import EnvTest
+from CIME.XML.test_reporter import TestReporter
+from CIME.utils import expect
+from CIME.XML.generic_xml import GenericXML
+
+import glob
+
+###############################################################################
+
+
+[docs]
+def parse_command_line(args):
+    ###############################################################################
+    parser = argparse.ArgumentParser()
+
+    CIME.utils.setup_standard_logging_options(parser)
+
+    # Parse command line options
+
+    # parser = argparse.ArgumentParser(description='Arguments for testreporter')
+    parser.add_argument("--tagname", help="Name of the tag being tested.")
+    parser.add_argument("--testid", help="Test id, e.g. c2_0_a6g_ing,c2_0_b6g_gnu.")
+    parser.add_argument(
+        "--testroot", help="Root directory for tests to populate the database."
+    )
+    parser.add_argument("--testtype", help="Type of test, prealpha or prebeta.")
+    parser.add_argument(
+        "--dryrun",
+        action="store_true",
+        help="Do a dry run, database will not be populated.",
+    )
+    parser.add_argument(
+        "--dumpxml", action="store_true", help="Dump XML test results to screen."
+    )
+    args = parser.parse_args()
+    CIME.utils.parse_args_and_handle_standard_logging_options(args)
+
+    return (
+        args.testroot,
+        args.testid,
+        args.tagname,
+        args.testtype,
+        args.dryrun,
+        args.dumpxml,
+    )
+ + + +############################################################################### +
+[docs] +def get_testreporter_xml(testroot, testid, tagname, testtype): + ############################################################################### + os.chdir(testroot) + + # + # Retrieve compiler name and mpi library + # + xml_file = glob.glob("*" + testid + "/env_build.xml") + expect( + len(xml_file) > 0, + "Tests not found. It's possible your testid, {} is wrong.".format(testid), + ) + envxml = EnvBuild(".", infile=xml_file[0]) + compiler = envxml.get_value("COMPILER") + mpilib = envxml.get_value("MPILIB") + + # + # Retrieve machine name + # + xml_file = glob.glob("*" + testid + "/env_case.xml") + envxml = EnvCase(".", infile=xml_file[0]) + machine = envxml.get_value("MACH") + + # + # Retrieve baseline tag to compare to + # + xml_file = glob.glob("*" + testid + "/env_test.xml") + envxml = EnvTest(".", infile=xml_file[0]) + baseline = envxml.get_value("BASELINE_NAME_CMP") + + # + # Create XML header + # + + testxml = TestReporter() + testxml.setup_header( + tagname, machine, compiler, mpilib, testroot, testtype, baseline + ) + + # + # Create lists on tests based on the testid in the testroot directory. + # + test_names = glob.glob("*" + testid) + # + # Loop over all tests and parse the test results + # + test_status = {} + for test_name in test_names: + if not os.path.isfile(test_name + "/TestStatus"): + continue + test_status["COMMENT"] = "" + test_status["BASELINE"] = "----" + test_status["MEMCOMP"] = "----" + test_status["MEMLEAK"] = "----" + test_status["NLCOMP"] = "----" + test_status["STATUS"] = "----" + test_status["TPUTCOMP"] = "----" + # + # Check to see if TestStatus is present, if not then continue + # I might want to set the status to fail + # + try: + lines = [line.rstrip("\n") for line in open(test_name + "/TestStatus")] + except (IOError, OSError): + test_status["STATUS"] = "FAIL" + test_status["COMMENT"] = "TestStatus missing. " + continue + # + # Loop over each line of TestStatus, and check for different types of failures. 
+ # + for line in lines: + if "NLCOMP" in line: + test_status["NLCOMP"] = line[0:4] + if "MEMLEAK" in line: + test_status["MEMLEAK"] = line[0:4] + if "MEMCOMP" in line: + test_status["MEMCOMP"] = line[0:4] + if "BASELINE" in line: + test_status["BASELINE"] = line[0:4] + if "TPUTCOMP" in line: + test_status["TPUTCOMP"] = line[0:4] + if "FAIL PFS" in line: + test_status["STATUS"] = "FAIL" + if "INIT" in line: + test_status["INIT"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["STATUS"] = "SFAIL" + test_status["COMMENT"] += "INIT fail! " + break + if "CREATE_NEWCASE" in line: + test_status["CREATE_NEWCASE"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["STATUS"] = "SFAIL" + test_status["COMMENT"] += "CREATE_NEWCASE fail! " + break + if "XML" in line: + test_status["XML"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["STATUS"] = "SFAIL" + test_status["COMMENT"] += "XML fail! " + break + if "SETUP" in line: + test_status["SETUP"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["STATUS"] = "SFAIL" + test_status["COMMENT"] += "SETUP fail! " + break + if "SHAREDLIB_BUILD" in line: + test_status["SHAREDLIB_BUILD"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["STATUS"] = "CFAIL" + test_status["COMMENT"] += "SHAREDLIB_BUILD fail! " + break + if "MODEL_BUILD" in line: + test_status["MODEL_BUILD"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["STATUS"] = "CFAIL" + test_status["COMMENT"] += "MODEL_BUILD fail! " + break + if "SUBMIT" in line: + test_status["STATUS"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["COMMENT"] += "SUBMIT fail! " + break + if "RUN" in line: + test_status["STATUS"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["COMMENT"] += "RUN fail! " + break + if "COMPARE_base_rest" in line: + test_status["STATUS"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["COMMENT"] += "Restart fail! 
" + break + if "COMPARE_base_hybrid" in line: + test_status["STATUS"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["COMMENT"] += "Hybrid fail! " + break + if "COMPARE_base_multiinst" in line: + test_status["STATUS"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["COMMENT"] += "Multi instance fail! " + break + if "COMPARE_base_test" in line: + test_status["STATUS"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["COMMENT"] += "Base test fail! " + break + if "COMPARE_base_single_thread" in line: + test_status["STATUS"] = line[0:4] + if line[0:4] in ("FAIL", "PEND"): + test_status["COMMENT"] += "Thread test fail! " + break + + # + # Do not include time comments. Just a preference to have cleaner comments in the test database + # + try: + if "time=" not in line and "GENERATE" not in line: + if "BASELINE" not in line: + test_status["COMMENT"] += line.split(" ", 3)[3] + " " + else: + test_status["COMMENT"] += line.split(" ", 4)[4] + " " + except Exception: # Probably want to be more specific here + pass + + # + # Fill in the xml with the test results + # + testxml.add_result(test_name, test_status) + + return testxml
+ + + +############################################################################## +def _main_func(): + ############################################################################### + + testroot, testid, tagname, testtype, dryrun, dumpxml = parse_command_line(sys.argv) + + testxml = get_testreporter_xml(testroot, testid, tagname, testtype) + + # + # Dump xml to a file. + # + if dumpxml: + GenericXML.write(testxml, outfile="TestRecord.xml") + + # + # Prompt for username and password, then post the XML string to the test database website + # + if not dryrun: + testxml.push2testdb() + + +############################################################################### + +if __name__ == "__main__": + _main_func() +
+ +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_modules/index.html b/branch/azamat/baselines/update-perf-info/html/_modules/index.html new file mode 100644 index 00000000000..8bd419c5a52 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_modules/index.html @@ -0,0 +1,297 @@ + + + + + + Overview: module code — CIME master documentation + + + + + + + + + + + + + + + + + + + +
+ + +
+ +
+
+
+
    +
  • + +
  • +
  • +
+
+
+
+
+ +

All modules for which code is available

+ + +
+
+ +
+
+
+
+ + + + \ No newline at end of file diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.BuildTools.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.BuildTools.rst.txt new file mode 100644 index 00000000000..4da79947b78 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.BuildTools.rst.txt @@ -0,0 +1,21 @@ +CIME.BuildTools package +======================= + +Submodules +---------- + +CIME.BuildTools.configure module +-------------------------------- + +.. automodule:: CIME.BuildTools.configure + :members: + :undoc-members: + :show-inheritance: + +Module contents +--------------- + +.. automodule:: CIME.BuildTools + :members: + :undoc-members: + :show-inheritance: diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.Servers.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.Servers.rst.txt new file mode 100644 index 00000000000..92d83549769 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.Servers.rst.txt @@ -0,0 +1,53 @@ +CIME.Servers package +==================== + +Submodules +---------- + +CIME.Servers.ftp module +----------------------- + +.. automodule:: CIME.Servers.ftp + :members: + :undoc-members: + :show-inheritance: + +CIME.Servers.generic\_server module +----------------------------------- + +.. automodule:: CIME.Servers.generic_server + :members: + :undoc-members: + :show-inheritance: + +CIME.Servers.gftp module +------------------------ + +.. automodule:: CIME.Servers.gftp + :members: + :undoc-members: + :show-inheritance: + +CIME.Servers.svn module +----------------------- + +.. automodule:: CIME.Servers.svn + :members: + :undoc-members: + :show-inheritance: + +CIME.Servers.wget module +------------------------ + +.. automodule:: CIME.Servers.wget + :members: + :undoc-members: + :show-inheritance: + +Module contents +--------------- + +.. 
automodule:: CIME.Servers + :members: + :undoc-members: + :show-inheritance: diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.SystemTests.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.SystemTests.rst.txt new file mode 100644 index 00000000000..4886908720d --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.SystemTests.rst.txt @@ -0,0 +1,309 @@ +CIME.SystemTests package +======================== + +Subpackages +----------- + +.. toctree:: + :maxdepth: 4 + + CIME.SystemTests.test_utils + +Submodules +---------- + +CIME.SystemTests.dae module +--------------------------- + +.. automodule:: CIME.SystemTests.dae + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.eri module +--------------------------- + +.. automodule:: CIME.SystemTests.eri + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.erio module +---------------------------- + +.. automodule:: CIME.SystemTests.erio + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.erp module +--------------------------- + +.. automodule:: CIME.SystemTests.erp + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.err module +--------------------------- + +.. automodule:: CIME.SystemTests.err + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.erri module +---------------------------- + +.. automodule:: CIME.SystemTests.erri + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.ers module +--------------------------- + +.. automodule:: CIME.SystemTests.ers + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.ers2 module +---------------------------- + +.. automodule:: CIME.SystemTests.ers2 + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.ert module +--------------------------- + +.. 
automodule:: CIME.SystemTests.ert + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.funit module +----------------------------- + +.. automodule:: CIME.SystemTests.funit + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.homme module +----------------------------- + +.. automodule:: CIME.SystemTests.homme + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.hommebaseclass module +-------------------------------------- + +.. automodule:: CIME.SystemTests.hommebaseclass + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.hommebfb module +-------------------------------- + +.. automodule:: CIME.SystemTests.hommebfb + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.icp module +--------------------------- + +.. automodule:: CIME.SystemTests.icp + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.irt module +--------------------------- + +.. automodule:: CIME.SystemTests.irt + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.ldsta module +----------------------------- + +.. automodule:: CIME.SystemTests.ldsta + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.mcc module +--------------------------- + +.. automodule:: CIME.SystemTests.mcc + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.mvk module +--------------------------- + +.. automodule:: CIME.SystemTests.mvk + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.nck module +--------------------------- + +.. automodule:: CIME.SystemTests.nck + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.ncr module +--------------------------- + +.. automodule:: CIME.SystemTests.ncr + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.nodefail module +-------------------------------- + +.. 
automodule:: CIME.SystemTests.nodefail + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.pea module +--------------------------- + +.. automodule:: CIME.SystemTests.pea + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.pem module +--------------------------- + +.. automodule:: CIME.SystemTests.pem + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.pet module +--------------------------- + +.. automodule:: CIME.SystemTests.pet + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.pfs module +--------------------------- + +.. automodule:: CIME.SystemTests.pfs + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.pgn module +--------------------------- + +.. automodule:: CIME.SystemTests.pgn + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.pre module +--------------------------- + +.. automodule:: CIME.SystemTests.pre + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.rep module +--------------------------- + +.. automodule:: CIME.SystemTests.rep + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.restart\_tests module +-------------------------------------- + +.. automodule:: CIME.SystemTests.restart_tests + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.reuseinitfiles module +-------------------------------------- + +.. automodule:: CIME.SystemTests.reuseinitfiles + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.seq module +--------------------------- + +.. automodule:: CIME.SystemTests.seq + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.sms module +--------------------------- + +.. automodule:: CIME.SystemTests.sms + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.system\_tests\_common module +--------------------------------------------- + +.. 
automodule:: CIME.SystemTests.system_tests_common + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.system\_tests\_compare\_n module +------------------------------------------------- + +.. automodule:: CIME.SystemTests.system_tests_compare_n + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.system\_tests\_compare\_two module +--------------------------------------------------- + +.. automodule:: CIME.SystemTests.system_tests_compare_two + :members: + :undoc-members: + :show-inheritance: + +CIME.SystemTests.tsc module +--------------------------- + +.. automodule:: CIME.SystemTests.tsc + :members: + :undoc-members: + :show-inheritance: + +Module contents +--------------- + +.. automodule:: CIME.SystemTests + :members: + :undoc-members: + :show-inheritance: diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.SystemTests.test_utils.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.SystemTests.test_utils.rst.txt new file mode 100644 index 00000000000..433003260b0 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.SystemTests.test_utils.rst.txt @@ -0,0 +1,21 @@ +CIME.SystemTests.test\_utils package +==================================== + +Submodules +---------- + +CIME.SystemTests.test\_utils.user\_nl\_utils module +--------------------------------------------------- + +.. automodule:: CIME.SystemTests.test_utils.user_nl_utils + :members: + :undoc-members: + :show-inheritance: + +Module contents +--------------- + +.. 
automodule:: CIME.SystemTests.test_utils + :members: + :undoc-members: + :show-inheritance: diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.Tools.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.Tools.rst.txt new file mode 100644 index 00000000000..3f32969300c --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.Tools.rst.txt @@ -0,0 +1,37 @@ +CIME.Tools package +================== + +Submodules +---------- + +CIME.Tools.generate\_cylc\_workflow module +------------------------------------------ + +.. automodule:: CIME.Tools.generate_cylc_workflow + :members: + :undoc-members: + :show-inheritance: + +CIME.Tools.standard\_script\_setup module +----------------------------------------- + +.. automodule:: CIME.Tools.standard_script_setup + :members: + :undoc-members: + :show-inheritance: + +CIME.Tools.testreporter module +------------------------------ + +.. automodule:: CIME.Tools.testreporter + :members: + :undoc-members: + :show-inheritance: + +Module contents +--------------- + +.. automodule:: CIME.Tools + :members: + :undoc-members: + :show-inheritance: diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.XML.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.XML.rst.txt new file mode 100644 index 00000000000..2a96f42aab7 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.XML.rst.txt @@ -0,0 +1,277 @@ +CIME.XML package +================ + +Submodules +---------- + +CIME.XML.archive module +----------------------- + +.. automodule:: CIME.XML.archive + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.archive\_base module +----------------------------- + +.. automodule:: CIME.XML.archive_base + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.batch module +--------------------- + +.. 
automodule:: CIME.XML.batch + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.component module +------------------------- + +.. automodule:: CIME.XML.component + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.compsets module +------------------------ + +.. automodule:: CIME.XML.compsets + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.entry\_id module +------------------------- + +.. automodule:: CIME.XML.entry_id + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.env\_archive module +---------------------------- + +.. automodule:: CIME.XML.env_archive + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.env\_base module +------------------------- + +.. automodule:: CIME.XML.env_base + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.env\_batch module +-------------------------- + +.. automodule:: CIME.XML.env_batch + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.env\_build module +-------------------------- + +.. automodule:: CIME.XML.env_build + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.env\_case module +------------------------- + +.. automodule:: CIME.XML.env_case + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.env\_mach\_pes module +------------------------------ + +.. automodule:: CIME.XML.env_mach_pes + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.env\_mach\_specific module +----------------------------------- + +.. automodule:: CIME.XML.env_mach_specific + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.env\_run module +------------------------ + +.. automodule:: CIME.XML.env_run + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.env\_test module +------------------------- + +.. automodule:: CIME.XML.env_test + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.env\_workflow module +----------------------------- + +.. 
automodule:: CIME.XML.env_workflow + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.expected\_fails\_file module +------------------------------------- + +.. automodule:: CIME.XML.expected_fails_file + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.files module +--------------------- + +.. automodule:: CIME.XML.files + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.generic\_xml module +---------------------------- + +.. automodule:: CIME.XML.generic_xml + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.grids module +--------------------- + +.. automodule:: CIME.XML.grids + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.headers module +----------------------- + +.. automodule:: CIME.XML.headers + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.inputdata module +------------------------- + +.. automodule:: CIME.XML.inputdata + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.machines module +------------------------ + +.. automodule:: CIME.XML.machines + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.namelist\_definition module +------------------------------------ + +.. automodule:: CIME.XML.namelist_definition + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.pes module +------------------- + +.. automodule:: CIME.XML.pes + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.pio module +------------------- + +.. automodule:: CIME.XML.pio + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.standard\_module\_setup module +--------------------------------------- + +.. automodule:: CIME.XML.standard_module_setup + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.stream module +---------------------- + +.. automodule:: CIME.XML.stream + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.test\_reporter module +------------------------------ + +.. 
automodule:: CIME.XML.test_reporter + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.testlist module +------------------------ + +.. automodule:: CIME.XML.testlist + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.tests module +--------------------- + +.. automodule:: CIME.XML.tests + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.testspec module +------------------------ + +.. automodule:: CIME.XML.testspec + :members: + :undoc-members: + :show-inheritance: + +CIME.XML.workflow module +------------------------ + +.. automodule:: CIME.XML.workflow + :members: + :undoc-members: + :show-inheritance: + +Module contents +--------------- + +.. automodule:: CIME.XML + :members: + :undoc-members: + :show-inheritance: diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.baselines.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.baselines.rst.txt new file mode 100644 index 00000000000..4c5f75cdc43 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.baselines.rst.txt @@ -0,0 +1,21 @@ +CIME.baselines package +====================== + +Submodules +---------- + +CIME.baselines.performance module +--------------------------------- + +.. automodule:: CIME.baselines.performance + :members: + :undoc-members: + :show-inheritance: + +Module contents +--------------- + +.. automodule:: CIME.baselines + :members: + :undoc-members: + :show-inheritance: diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.build_scripts.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.build_scripts.rst.txt new file mode 100644 index 00000000000..a792d8946b7 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.build_scripts.rst.txt @@ -0,0 +1,10 @@ +CIME.build\_scripts package +=========================== + +Module contents +--------------- + +.. 
automodule:: CIME.build_scripts + :members: + :undoc-members: + :show-inheritance: diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.case.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.case.rst.txt new file mode 100644 index 00000000000..68500a8c4ec --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.case.rst.txt @@ -0,0 +1,101 @@ +CIME.case package +================= + +Submodules +---------- + +CIME.case.case module +--------------------- + +.. automodule:: CIME.case.case + :members: + :undoc-members: + :show-inheritance: + +CIME.case.case\_clone module +---------------------------- + +.. automodule:: CIME.case.case_clone + :members: + :undoc-members: + :show-inheritance: + +CIME.case.case\_cmpgen\_namelists module +---------------------------------------- + +.. automodule:: CIME.case.case_cmpgen_namelists + :members: + :undoc-members: + :show-inheritance: + +CIME.case.case\_run module +-------------------------- + +.. automodule:: CIME.case.case_run + :members: + :undoc-members: + :show-inheritance: + +CIME.case.case\_setup module +---------------------------- + +.. automodule:: CIME.case.case_setup + :members: + :undoc-members: + :show-inheritance: + +CIME.case.case\_st\_archive module +---------------------------------- + +.. automodule:: CIME.case.case_st_archive + :members: + :undoc-members: + :show-inheritance: + +CIME.case.case\_submit module +----------------------------- + +.. automodule:: CIME.case.case_submit + :members: + :undoc-members: + :show-inheritance: + +CIME.case.case\_test module +--------------------------- + +.. automodule:: CIME.case.case_test + :members: + :undoc-members: + :show-inheritance: + +CIME.case.check\_input\_data module +----------------------------------- + +.. 
automodule:: CIME.case.check_input_data + :members: + :undoc-members: + :show-inheritance: + +CIME.case.check\_lockedfiles module +----------------------------------- + +.. automodule:: CIME.case.check_lockedfiles + :members: + :undoc-members: + :show-inheritance: + +CIME.case.preview\_namelists module +----------------------------------- + +.. automodule:: CIME.case.preview_namelists + :members: + :undoc-members: + :show-inheritance: + +Module contents +--------------- + +.. automodule:: CIME.case + :members: + :undoc-members: + :show-inheritance: diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.data.config.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.data.config.rst.txt new file mode 100644 index 00000000000..677ad072de7 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.data.config.rst.txt @@ -0,0 +1,10 @@ +CIME.data.config package +======================== + +Module contents +--------------- + +.. automodule:: CIME.data.config + :members: + :undoc-members: + :show-inheritance: diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.data.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.data.rst.txt new file mode 100644 index 00000000000..7cde8b2e321 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.data.rst.txt @@ -0,0 +1,19 @@ +CIME.data package +================= + +Subpackages +----------- + +.. toctree:: + :maxdepth: 4 + + CIME.data.config + CIME.data.templates + +Module contents +--------------- + +.. 
automodule:: CIME.data + :members: + :undoc-members: + :show-inheritance: diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.data.templates.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.data.templates.rst.txt new file mode 100644 index 00000000000..4a05478bf1d --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.data.templates.rst.txt @@ -0,0 +1,10 @@ +CIME.data.templates package +=========================== + +Module contents +--------------- + +.. automodule:: CIME.data.templates + :members: + :undoc-members: + :show-inheritance: diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.rst.txt new file mode 100644 index 00000000000..4c6e8292b7f --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.rst.txt @@ -0,0 +1,255 @@ +CIME package +============ + +Subpackages +----------- + +.. toctree:: + :maxdepth: 4 + + CIME.BuildTools + CIME.Servers + CIME.SystemTests + CIME.Tools + CIME.XML + CIME.baselines + CIME.build_scripts + CIME.case + CIME.data + CIME.scripts + CIME.tests + +Submodules +---------- + +CIME.aprun module +----------------- + +.. automodule:: CIME.aprun + :members: + :undoc-members: + :show-inheritance: + +CIME.bless\_test\_results module +-------------------------------- + +.. automodule:: CIME.bless_test_results + :members: + :undoc-members: + :show-inheritance: + +CIME.build module +----------------- + +.. automodule:: CIME.build + :members: + :undoc-members: + :show-inheritance: + +CIME.buildlib module +-------------------- + +.. automodule:: CIME.buildlib + :members: + :undoc-members: + :show-inheritance: + +CIME.buildnml module +-------------------- + +.. automodule:: CIME.buildnml + :members: + :undoc-members: + :show-inheritance: + +CIME.code\_checker module +------------------------- + +.. 
automodule:: CIME.code_checker + :members: + :undoc-members: + :show-inheritance: + +CIME.compare\_namelists module +------------------------------ + +.. automodule:: CIME.compare_namelists + :members: + :undoc-members: + :show-inheritance: + +CIME.compare\_test\_results module +---------------------------------- + +.. automodule:: CIME.compare_test_results + :members: + :undoc-members: + :show-inheritance: + +CIME.config module +------------------ + +.. automodule:: CIME.config + :members: + :undoc-members: + :show-inheritance: + +CIME.cs\_status module +---------------------- + +.. automodule:: CIME.cs_status + :members: + :undoc-members: + :show-inheritance: + +CIME.cs\_status\_creator module +------------------------------- + +.. automodule:: CIME.cs_status_creator + :members: + :undoc-members: + :show-inheritance: + +CIME.date module +---------------- + +.. automodule:: CIME.date + :members: + :undoc-members: + :show-inheritance: + +CIME.expected\_fails module +--------------------------- + +.. automodule:: CIME.expected_fails + :members: + :undoc-members: + :show-inheritance: + +CIME.get\_tests module +---------------------- + +.. automodule:: CIME.get_tests + :members: + :undoc-members: + :show-inheritance: + +CIME.get\_timing module +----------------------- + +.. automodule:: CIME.get_timing + :members: + :undoc-members: + :show-inheritance: + +CIME.hist\_utils module +----------------------- + +.. automodule:: CIME.hist_utils + :members: + :undoc-members: + :show-inheritance: + +CIME.jenkins\_generic\_job module +--------------------------------- + +.. automodule:: CIME.jenkins_generic_job + :members: + :undoc-members: + :show-inheritance: + +CIME.locked\_files module +------------------------- + +.. automodule:: CIME.locked_files + :members: + :undoc-members: + :show-inheritance: + +CIME.namelist module +-------------------- + +.. automodule:: CIME.namelist + :members: + :undoc-members: + :show-inheritance: + +CIME.nmlgen module +------------------ + +.. 
automodule:: CIME.nmlgen + :members: + :undoc-members: + :show-inheritance: + +CIME.provenance module +---------------------- + +.. automodule:: CIME.provenance + :members: + :undoc-members: + :show-inheritance: + +CIME.simple\_compare module +--------------------------- + +.. automodule:: CIME.simple_compare + :members: + :undoc-members: + :show-inheritance: + +CIME.test\_scheduler module +--------------------------- + +.. automodule:: CIME.test_scheduler + :members: + :undoc-members: + :show-inheritance: + +CIME.test\_status module +------------------------ + +.. automodule:: CIME.test_status + :members: + :undoc-members: + :show-inheritance: + +CIME.test\_utils module +----------------------- + +.. automodule:: CIME.test_utils + :members: + :undoc-members: + :show-inheritance: + +CIME.user\_mod\_support module +------------------------------ + +.. automodule:: CIME.user_mod_support + :members: + :undoc-members: + :show-inheritance: + +CIME.utils module +----------------- + +.. automodule:: CIME.utils + :members: + :undoc-members: + :show-inheritance: + +CIME.wait\_for\_tests module +---------------------------- + +.. automodule:: CIME.wait_for_tests + :members: + :undoc-members: + :show-inheritance: + +Module contents +--------------- + +.. automodule:: CIME + :members: + :undoc-members: + :show-inheritance: diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.scripts.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.scripts.rst.txt new file mode 100644 index 00000000000..65764cc9f3b --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.scripts.rst.txt @@ -0,0 +1,53 @@ +CIME.scripts package +==================== + +Submodules +---------- + +CIME.scripts.create\_clone module +--------------------------------- + +.. 
automodule:: CIME.scripts.create_clone + :members: + :undoc-members: + :show-inheritance: + +CIME.scripts.create\_newcase module +----------------------------------- + +.. automodule:: CIME.scripts.create_newcase + :members: + :undoc-members: + :show-inheritance: + +CIME.scripts.create\_test module +-------------------------------- + +.. automodule:: CIME.scripts.create_test + :members: + :undoc-members: + :show-inheritance: + +CIME.scripts.query\_config module +--------------------------------- + +.. automodule:: CIME.scripts.query_config + :members: + :undoc-members: + :show-inheritance: + +CIME.scripts.query\_testlists module +------------------------------------ + +.. automodule:: CIME.scripts.query_testlists + :members: + :undoc-members: + :show-inheritance: + +Module contents +--------------- + +.. automodule:: CIME.scripts + :members: + :undoc-members: + :show-inheritance: diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.tests.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.tests.rst.txt new file mode 100644 index 00000000000..bf8a8f5c3c5 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/CIME.tests.rst.txt @@ -0,0 +1,421 @@ +CIME.tests package +================== + +Submodules +---------- + +CIME.tests.base module +---------------------- + +.. automodule:: CIME.tests.base + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.case\_fake module +---------------------------- + +.. automodule:: CIME.tests.case_fake + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.custom\_assertions\_test\_status module +-------------------------------------------------- + +.. automodule:: CIME.tests.custom_assertions_test_status + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.scripts\_regression\_tests module +-------------------------------------------- + +.. 
automodule:: CIME.tests.scripts_regression_tests + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_sys\_bless\_tests\_results module +-------------------------------------------------- + +.. automodule:: CIME.tests.test_sys_bless_tests_results + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_sys\_build\_system module +------------------------------------------ + +.. automodule:: CIME.tests.test_sys_build_system + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_sys\_cime\_case module +--------------------------------------- + +.. automodule:: CIME.tests.test_sys_cime_case + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_sys\_cime\_performance module +---------------------------------------------- + +.. automodule:: CIME.tests.test_sys_cime_performance + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_sys\_create\_newcase module +-------------------------------------------- + +.. automodule:: CIME.tests.test_sys_create_newcase + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_sys\_full\_system module +----------------------------------------- + +.. automodule:: CIME.tests.test_sys_full_system + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_sys\_grid\_generation module +--------------------------------------------- + +.. automodule:: CIME.tests.test_sys_grid_generation + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_sys\_jenkins\_generic\_job module +-------------------------------------------------- + +.. automodule:: CIME.tests.test_sys_jenkins_generic_job + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_sys\_manage\_and\_query module +----------------------------------------------- + +.. 
automodule:: CIME.tests.test_sys_manage_and_query + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_sys\_query\_config module +------------------------------------------ + +.. automodule:: CIME.tests.test_sys_query_config + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_sys\_run\_restart module +----------------------------------------- + +.. automodule:: CIME.tests.test_sys_run_restart + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_sys\_save\_timings module +------------------------------------------ + +.. automodule:: CIME.tests.test_sys_save_timings + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_sys\_single\_submit module +------------------------------------------- + +.. automodule:: CIME.tests.test_sys_single_submit + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_sys\_test\_scheduler module +-------------------------------------------- + +.. automodule:: CIME.tests.test_sys_test_scheduler + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_sys\_unittest module +------------------------------------- + +.. automodule:: CIME.tests.test_sys_unittest + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_sys\_user\_concurrent\_mods module +--------------------------------------------------- + +.. automodule:: CIME.tests.test_sys_user_concurrent_mods + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_sys\_wait\_for\_tests module +--------------------------------------------- + +.. automodule:: CIME.tests.test_sys_wait_for_tests + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_aprun module +----------------------------------- + +.. automodule:: CIME.tests.test_unit_aprun + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_baselines\_performance module +---------------------------------------------------- + +.. 
automodule:: CIME.tests.test_unit_baselines_performance + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_bless\_test\_results module +-------------------------------------------------- + +.. automodule:: CIME.tests.test_unit_bless_test_results + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_case module +---------------------------------- + +.. automodule:: CIME.tests.test_unit_case + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_case\_fake module +---------------------------------------- + +.. automodule:: CIME.tests.test_unit_case_fake + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_case\_setup module +----------------------------------------- + +.. automodule:: CIME.tests.test_unit_case_setup + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_compare\_test\_results module +---------------------------------------------------- + +.. automodule:: CIME.tests.test_unit_compare_test_results + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_compare\_two module +------------------------------------------ + +.. automodule:: CIME.tests.test_unit_compare_two + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_config module +------------------------------------ + +.. automodule:: CIME.tests.test_unit_config + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_cs\_status module +---------------------------------------- + +.. automodule:: CIME.tests.test_unit_cs_status + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_custom\_assertions\_test\_status module +-------------------------------------------------------------- + +.. automodule:: CIME.tests.test_unit_custom_assertions_test_status + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_doctest module +------------------------------------- + +.. 
automodule:: CIME.tests.test_unit_doctest + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_expected\_fails\_file module +--------------------------------------------------- + +.. automodule:: CIME.tests.test_unit_expected_fails_file + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_grids module +----------------------------------- + +.. automodule:: CIME.tests.test_unit_grids + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_hist\_utils module +----------------------------------------- + +.. automodule:: CIME.tests.test_unit_hist_utils + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_nmlgen module +------------------------------------ + +.. automodule:: CIME.tests.test_unit_nmlgen + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_paramgen module +-------------------------------------- + +.. automodule:: CIME.tests.test_unit_paramgen + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_system\_tests module +------------------------------------------- + +.. automodule:: CIME.tests.test_unit_system_tests + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_test\_status module +------------------------------------------ + +.. automodule:: CIME.tests.test_unit_test_status + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_two\_link\_to\_case2\_output module +---------------------------------------------------------- + +.. automodule:: CIME.tests.test_unit_two_link_to_case2_output + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_user\_mod\_support module +------------------------------------------------ + +.. automodule:: CIME.tests.test_unit_user_mod_support + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_user\_nl\_utils module +--------------------------------------------- + +.. 
automodule:: CIME.tests.test_unit_user_nl_utils + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_utils module +----------------------------------- + +.. automodule:: CIME.tests.test_unit_utils + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_xml\_archive\_base module +------------------------------------------------ + +.. automodule:: CIME.tests.test_unit_xml_archive_base + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_xml\_env\_batch module +--------------------------------------------- + +.. automodule:: CIME.tests.test_unit_xml_env_batch + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_xml\_env\_mach\_specific module +------------------------------------------------------ + +.. automodule:: CIME.tests.test_unit_xml_env_mach_specific + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_xml\_machines module +------------------------------------------- + +.. automodule:: CIME.tests.test_unit_xml_machines + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_xml\_namelist\_definition module +------------------------------------------------------- + +.. automodule:: CIME.tests.test_unit_xml_namelist_definition + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.test\_unit\_xml\_tests module +---------------------------------------- + +.. automodule:: CIME.tests.test_unit_xml_tests + :members: + :undoc-members: + :show-inheritance: + +CIME.tests.utils module +----------------------- + +.. automodule:: CIME.tests.utils + :members: + :undoc-members: + :show-inheritance: + +Module contents +--------------- + +.. 
automodule:: CIME.tests + :members: + :undoc-members: + :show-inheritance: diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/modules.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/modules.rst.txt new file mode 100644 index 00000000000..f98c2603acb --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/CIME_api/modules.rst.txt @@ -0,0 +1,7 @@ +CIME +==== + +.. toctree:: + :maxdepth: 4 + + CIME diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_api/Tools.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_api/Tools.rst.txt new file mode 100644 index 00000000000..ada0dae541d --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_api/Tools.rst.txt @@ -0,0 +1,37 @@ +Tools package +============= + +Submodules +---------- + +Tools.generate\_cylc\_workflow module +------------------------------------- + +.. automodule:: Tools.generate_cylc_workflow + :members: + :undoc-members: + :show-inheritance: + +Tools.standard\_script\_setup module +------------------------------------ + +.. automodule:: Tools.standard_script_setup + :members: + :undoc-members: + :show-inheritance: + +Tools.testreporter module +------------------------- + +.. automodule:: Tools.testreporter + :members: + :undoc-members: + :show-inheritance: + +Module contents +--------------- + +.. automodule:: Tools + :members: + :undoc-members: + :show-inheritance: diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_api/modules.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_api/modules.rst.txt new file mode 100644 index 00000000000..788189a3760 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_api/modules.rst.txt @@ -0,0 +1,7 @@ +Tools +===== + +.. 
toctree:: + :maxdepth: 4 + + Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/advanced-py-prof.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/advanced-py-prof.rst.txt new file mode 100644 index 00000000000..87c2fc571ac --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/advanced-py-prof.rst.txt @@ -0,0 +1,14 @@ + +.. _advanced-py-prof: + +#################################################### +advanced-py-prof +#################################################### + +**advanced-py-prof** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./advanced-py-prof --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/archive_metadata.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/archive_metadata.rst.txt new file mode 100644 index 00000000000..e7b36bcac6b --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/archive_metadata.rst.txt @@ -0,0 +1,14 @@ + +.. _archive_metadata: + +#################################################### +archive_metadata +#################################################### + +**archive_metadata** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./archive_metadata --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/bld_diff.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/bld_diff.rst.txt new file mode 100644 index 00000000000..55d2314ea0d --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/bld_diff.rst.txt @@ -0,0 +1,14 @@ + +.. _bld_diff: + +#################################################### +bld_diff +#################################################### + +**bld_diff** is a script in CIMEROOT/CIME/Tools. + +.. 
toctree:: + :maxdepth: 1 + +.. command-output:: ./bld_diff --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/bless_test_results.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/bless_test_results.rst.txt new file mode 100644 index 00000000000..37963bda9ed --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/bless_test_results.rst.txt @@ -0,0 +1,14 @@ + +.. _bless_test_results: + +#################################################### +bless_test_results +#################################################### + +**bless_test_results** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./bless_test_results --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/case.build.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/case.build.rst.txt new file mode 100644 index 00000000000..3439450dea1 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/case.build.rst.txt @@ -0,0 +1,14 @@ + +.. _case.build: + +#################################################### +case.build +#################################################### + +**case.build** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./case.build --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/case.cmpgen_namelists.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/case.cmpgen_namelists.rst.txt new file mode 100644 index 00000000000..2897c6ac1c5 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/case.cmpgen_namelists.rst.txt @@ -0,0 +1,14 @@ + +.. 
_case.cmpgen_namelists: + +#################################################### +case.cmpgen_namelists +#################################################### + +**case.cmpgen_namelists** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./case.cmpgen_namelists --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/case.qstatus.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/case.qstatus.rst.txt new file mode 100644 index 00000000000..6d33eb0d876 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/case.qstatus.rst.txt @@ -0,0 +1,14 @@ + +.. _case.qstatus: + +#################################################### +case.qstatus +#################################################### + +**case.qstatus** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./case.qstatus --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/case.setup.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/case.setup.rst.txt new file mode 100644 index 00000000000..09d77d7acd0 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/case.setup.rst.txt @@ -0,0 +1,14 @@ + +.. _case.setup: + +#################################################### +case.setup +#################################################### + +**case.setup** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. 
command-output:: ./case.setup --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/case.submit.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/case.submit.rst.txt new file mode 100644 index 00000000000..d1d7e69b710 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/case.submit.rst.txt @@ -0,0 +1,14 @@ + +.. _case.submit: + +#################################################### +case.submit +#################################################### + +**case.submit** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./case.submit --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/case_diff.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/case_diff.rst.txt new file mode 100644 index 00000000000..c532e8dc491 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/case_diff.rst.txt @@ -0,0 +1,14 @@ + +.. _case_diff: + +#################################################### +case_diff +#################################################### + +**case_diff** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./case_diff --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/check_case.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/check_case.rst.txt new file mode 100644 index 00000000000..93167f0d7b8 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/check_case.rst.txt @@ -0,0 +1,14 @@ + +.. _check_case: + +#################################################### +check_case +#################################################### + +**check_case** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. 
command-output:: ./check_case --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/check_input_data.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/check_input_data.rst.txt new file mode 100644 index 00000000000..d45d180e3e0 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/check_input_data.rst.txt @@ -0,0 +1,14 @@ + +.. _check_input_data: + +#################################################### +check_input_data +#################################################### + +**check_input_data** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./check_input_data --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/check_lockedfiles.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/check_lockedfiles.rst.txt new file mode 100644 index 00000000000..2f269da57b8 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/check_lockedfiles.rst.txt @@ -0,0 +1,14 @@ + +.. _check_lockedfiles: + +#################################################### +check_lockedfiles +#################################################### + +**check_lockedfiles** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./check_lockedfiles --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/cime_bisect.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/cime_bisect.rst.txt new file mode 100644 index 00000000000..18f17fe5151 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/cime_bisect.rst.txt @@ -0,0 +1,14 @@ + +.. 
_cime_bisect: + +#################################################### +cime_bisect +#################################################### + +**cime_bisect** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./cime_bisect --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/code_checker.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/code_checker.rst.txt new file mode 100644 index 00000000000..7a3125f54c9 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/code_checker.rst.txt @@ -0,0 +1,14 @@ + +.. _code_checker: + +#################################################### +code_checker +#################################################### + +**code_checker** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./code_checker --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/compare_namelists.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/compare_namelists.rst.txt new file mode 100644 index 00000000000..c7ea60e899b --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/compare_namelists.rst.txt @@ -0,0 +1,14 @@ + +.. _compare_namelists: + +#################################################### +compare_namelists +#################################################### + +**compare_namelists** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. 
command-output:: ./compare_namelists --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/compare_test_results.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/compare_test_results.rst.txt new file mode 100644 index 00000000000..c2db982550b --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/compare_test_results.rst.txt @@ -0,0 +1,14 @@ + +.. _compare_test_results: + +#################################################### +compare_test_results +#################################################### + +**compare_test_results** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./compare_test_results --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/component_compare_baseline.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/component_compare_baseline.rst.txt new file mode 100644 index 00000000000..4454c4543ae --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/component_compare_baseline.rst.txt @@ -0,0 +1,14 @@ + +.. _component_compare_baseline: + +#################################################### +component_compare_baseline +#################################################### + +**component_compare_baseline** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./component_compare_baseline --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/component_compare_copy.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/component_compare_copy.rst.txt new file mode 100644 index 00000000000..ca94d75e960 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/component_compare_copy.rst.txt @@ -0,0 +1,14 @@ + +.. 
_component_compare_copy: + +#################################################### +component_compare_copy +#################################################### + +**component_compare_copy** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./component_compare_copy --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/component_compare_test.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/component_compare_test.rst.txt new file mode 100644 index 00000000000..1ee6637caf0 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/component_compare_test.rst.txt @@ -0,0 +1,14 @@ + +.. _component_compare_test: + +#################################################### +component_compare_test +#################################################### + +**component_compare_test** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./component_compare_test --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/component_generate_baseline.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/component_generate_baseline.rst.txt new file mode 100644 index 00000000000..44168b00538 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/component_generate_baseline.rst.txt @@ -0,0 +1,14 @@ + +.. _component_generate_baseline: + +#################################################### +component_generate_baseline +#################################################### + +**component_generate_baseline** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. 
command-output:: ./component_generate_baseline --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/concat_daily_hist.csh.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/concat_daily_hist.csh.rst.txt new file mode 100644 index 00000000000..073513931ff --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/concat_daily_hist.csh.rst.txt @@ -0,0 +1,14 @@ + +.. _concat_daily_hist.csh: + +#################################################### +concat_daily_hist.csh +#################################################### + +**concat_daily_hist.csh** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./concat_daily_hist.csh --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/create_clone.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/create_clone.rst.txt new file mode 100644 index 00000000000..dfcef31b85f --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/create_clone.rst.txt @@ -0,0 +1,14 @@ + +.. _create_clone: + +#################################################### +create_clone +#################################################### + +**create_clone** is a script in CIMEROOT/scripts. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./create_clone --help + :cwd: ../../../scripts diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/create_newcase.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/create_newcase.rst.txt new file mode 100644 index 00000000000..3f58e9d9c05 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/create_newcase.rst.txt @@ -0,0 +1,14 @@ + +.. 
_create_newcase: + +#################################################### +create_newcase +#################################################### + +**create_newcase** is a script in CIMEROOT/scripts. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./create_newcase --help + :cwd: ../../../scripts diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/create_test.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/create_test.rst.txt new file mode 100644 index 00000000000..220a9751dbc --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/create_test.rst.txt @@ -0,0 +1,14 @@ + +.. _create_test: + +#################################################### +create_test +#################################################### + +**create_test** is a script in CIMEROOT/scripts. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./create_test --help + :cwd: ../../../scripts diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/cs.status.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/cs.status.rst.txt new file mode 100644 index 00000000000..2d79ae9de47 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/cs.status.rst.txt @@ -0,0 +1,14 @@ + +.. _cs.status: + +#################################################### +cs.status +#################################################### + +**cs.status** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./cs.status --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/e3sm_check_env.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/e3sm_check_env.rst.txt new file mode 100644 index 00000000000..62ace75c10b --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/e3sm_check_env.rst.txt @@ -0,0 +1,14 @@ + +.. 
_e3sm_check_env: + +#################################################### +e3sm_check_env +#################################################### + +**e3sm_check_env** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./e3sm_check_env --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/generate_cylc_workflow.py.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/generate_cylc_workflow.py.rst.txt new file mode 100644 index 00000000000..5e08cd7f9af --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/generate_cylc_workflow.py.rst.txt @@ -0,0 +1,14 @@ + +.. _generate_cylc_workflow.py: + +#################################################### +generate_cylc_workflow.py +#################################################### + +**generate_cylc_workflow.py** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./generate_cylc_workflow.py --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/getTiming.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/getTiming.rst.txt new file mode 100644 index 00000000000..b0e1fe297ad --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/getTiming.rst.txt @@ -0,0 +1,14 @@ + +.. _getTiming: + +#################################################### +getTiming +#################################################### + +**getTiming** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. 
command-output:: ./getTiming --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/get_case_env.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/get_case_env.rst.txt new file mode 100644 index 00000000000..b149880456c --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/get_case_env.rst.txt @@ -0,0 +1,14 @@ + +.. _get_case_env: + +#################################################### +get_case_env +#################################################### + +**get_case_env** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./get_case_env --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/get_standard_makefile_args.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/get_standard_makefile_args.rst.txt new file mode 100644 index 00000000000..7ce9adb169a --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/get_standard_makefile_args.rst.txt @@ -0,0 +1,14 @@ + +.. _get_standard_makefile_args: + +#################################################### +get_standard_makefile_args +#################################################### + +**get_standard_makefile_args** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./get_standard_makefile_args --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/index.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/index.rst.txt new file mode 100644 index 00000000000..452b7dd85a5 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/index.rst.txt @@ -0,0 +1,63 @@ +.. _Tools_user: + +########## +User Tools +########## + +CIME includes a number of user scripts. 
Some of these scripts are copied into +the CASEROOT as part of **create_newcase**, **create_test**, **create_clone**, +and **case.setup**. + +.. toctree:: + :maxdepth: 1 + + advanced-py-prof + archive_metadata + bld_diff + bless_test_results + case.build + case.cmpgen_namelists + case.qstatus + case.setup + case.submit + case_diff + check_case + check_input_data + check_lockedfiles + cime_bisect + code_checker + compare_namelists + compare_test_results + component_compare_baseline + component_compare_copy + component_compare_test + component_generate_baseline + concat_daily_hist.csh + create_clone + create_newcase + create_test + cs.status + e3sm_check_env + generate_cylc_workflow.py + getTiming + get_case_env + get_standard_makefile_args + jenkins_generic_job + list_e3sm_tests + mkDepends + mkSrcfiles + mvsource + normalize_cases + pelayout + preview_namelists + preview_run + query_config + query_testlists + save_provenance + simple-py-prof + simple_compare + testreporter.py + wait_for_tests + xmlchange + xmlquery + xmltestentry diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/jenkins_generic_job.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/jenkins_generic_job.rst.txt new file mode 100644 index 00000000000..947abd2e390 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/jenkins_generic_job.rst.txt @@ -0,0 +1,14 @@ + +.. _jenkins_generic_job: + +#################################################### +jenkins_generic_job +#################################################### + +**jenkins_generic_job** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. 
command-output:: ./jenkins_generic_job --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/list_e3sm_tests.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/list_e3sm_tests.rst.txt new file mode 100644 index 00000000000..c3e7463151e --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/list_e3sm_tests.rst.txt @@ -0,0 +1,14 @@ + +.. _list_e3sm_tests: + +#################################################### +list_e3sm_tests +#################################################### + +**list_e3sm_tests** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./list_e3sm_tests --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/mkDepends.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/mkDepends.rst.txt new file mode 100644 index 00000000000..366508f0173 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/mkDepends.rst.txt @@ -0,0 +1,14 @@ + +.. _mkDepends: + +#################################################### +mkDepends +#################################################### + +**mkDepends** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./mkDepends --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/mkSrcfiles.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/mkSrcfiles.rst.txt new file mode 100644 index 00000000000..ecdeaf45274 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/mkSrcfiles.rst.txt @@ -0,0 +1,14 @@ + +.. _mkSrcfiles: + +#################################################### +mkSrcfiles +#################################################### + +**mkSrcfiles** is a script in CIMEROOT/CIME/Tools. + +.. 
toctree:: + :maxdepth: 1 + +.. command-output:: ./mkSrcfiles --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/mvsource.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/mvsource.rst.txt new file mode 100644 index 00000000000..ca56a6bb47a --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/mvsource.rst.txt @@ -0,0 +1,14 @@ + +.. _mvsource: + +#################################################### +mvsource +#################################################### + +**mvsource** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./mvsource --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/normalize_cases.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/normalize_cases.rst.txt new file mode 100644 index 00000000000..e59c668a8a2 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/normalize_cases.rst.txt @@ -0,0 +1,14 @@ + +.. _normalize_cases: + +#################################################### +normalize_cases +#################################################### + +**normalize_cases** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./normalize_cases --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/pelayout.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/pelayout.rst.txt new file mode 100644 index 00000000000..2e2be45b00b --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/pelayout.rst.txt @@ -0,0 +1,14 @@ + +.. _pelayout: + +#################################################### +pelayout +#################################################### + +**pelayout** is a script in CIMEROOT/CIME/Tools. + +.. 
toctree:: + :maxdepth: 1 + +.. command-output:: ./pelayout --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/preview_namelists.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/preview_namelists.rst.txt new file mode 100644 index 00000000000..d2518864f19 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/preview_namelists.rst.txt @@ -0,0 +1,14 @@ + +.. _preview_namelists: + +#################################################### +preview_namelists +#################################################### + +**preview_namelists** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./preview_namelists --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/preview_run.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/preview_run.rst.txt new file mode 100644 index 00000000000..b3a41c5a744 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/preview_run.rst.txt @@ -0,0 +1,14 @@ + +.. _preview_run: + +#################################################### +preview_run +#################################################### + +**preview_run** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./preview_run --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/query_config.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/query_config.rst.txt new file mode 100644 index 00000000000..1060269312f --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/query_config.rst.txt @@ -0,0 +1,14 @@ + +.. 
_query_config: + +#################################################### +query_config +#################################################### + +**query_config** is a script in CIMEROOT/scripts. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./query_config --help + :cwd: ../../../scripts diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/query_testlists.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/query_testlists.rst.txt new file mode 100644 index 00000000000..e3eef5c74d2 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/query_testlists.rst.txt @@ -0,0 +1,14 @@ + +.. _query_testlists: + +#################################################### +query_testlists +#################################################### + +**query_testlists** is a script in CIMEROOT/scripts. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./query_testlists --help + :cwd: ../../../scripts diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/save_provenance.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/save_provenance.rst.txt new file mode 100644 index 00000000000..c239d5ef619 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/save_provenance.rst.txt @@ -0,0 +1,14 @@ + +.. _save_provenance: + +#################################################### +save_provenance +#################################################### + +**save_provenance** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. 
command-output:: ./save_provenance --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/simple-py-prof.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/simple-py-prof.rst.txt new file mode 100644 index 00000000000..fc07e9f958c --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/simple-py-prof.rst.txt @@ -0,0 +1,14 @@ + +.. _simple-py-prof: + +#################################################### +simple-py-prof +#################################################### + +**simple-py-prof** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./simple-py-prof --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/simple_compare.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/simple_compare.rst.txt new file mode 100644 index 00000000000..8fe1a72f683 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/simple_compare.rst.txt @@ -0,0 +1,14 @@ + +.. _simple_compare: + +#################################################### +simple_compare +#################################################### + +**simple_compare** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./simple_compare --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/testreporter.py.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/testreporter.py.rst.txt new file mode 100644 index 00000000000..0256b0750ad --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/testreporter.py.rst.txt @@ -0,0 +1,14 @@ + +.. 
_testreporter.py: + +#################################################### +testreporter.py +#################################################### + +**testreporter.py** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./testreporter.py --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/wait_for_tests.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/wait_for_tests.rst.txt new file mode 100644 index 00000000000..53ecaf85f32 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/wait_for_tests.rst.txt @@ -0,0 +1,14 @@ + +.. _wait_for_tests: + +#################################################### +wait_for_tests +#################################################### + +**wait_for_tests** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./wait_for_tests --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/xmlchange.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/xmlchange.rst.txt new file mode 100644 index 00000000000..86c76d3c60f --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/xmlchange.rst.txt @@ -0,0 +1,14 @@ + +.. _xmlchange: + +#################################################### +xmlchange +#################################################### + +**xmlchange** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. 
command-output:: ./xmlchange --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/xmlquery.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/xmlquery.rst.txt new file mode 100644 index 00000000000..365f07891e1 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/xmlquery.rst.txt @@ -0,0 +1,14 @@ + +.. _xmlquery: + +#################################################### +xmlquery +#################################################### + +**xmlquery** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./xmlquery --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/xmltestentry.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/xmltestentry.rst.txt new file mode 100644 index 00000000000..e6d68dd3edb --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/Tools_user/xmltestentry.rst.txt @@ -0,0 +1,14 @@ + +.. _xmltestentry: + +#################################################### +xmltestentry +#################################################### + +**xmltestentry** is a script in CIMEROOT/CIME/Tools. + +.. toctree:: + :maxdepth: 1 + +.. command-output:: ./xmltestentry --help + :cwd: ../../../CIME/Tools diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/build_cpl/adding-components.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/build_cpl/adding-components.rst.txt new file mode 100644 index 00000000000..c9fcf234dd1 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/build_cpl/adding-components.rst.txt @@ -0,0 +1,36 @@ +.. _adding-components: + +=================== +Adding components +=================== + +Here are the steps to add prognostic components to CIME models. 
+ +There are two aspects of a component interface to CIME: the +scripts interface, which controls setting up component inputs and +building the component, and the run interface, which controls connecting +the component to the coupler and, through the coupler, to the other +components of the CIME-based model. + +The component should have a subdirectory **cime_config**, and this +subdirectory should have two files, **buildnml** and **buildlib**. The +**buildnml** script is used to build the component's runtime inputs. +These have traditionally been in the form of Fortran +namelists but may also follow other formats. **buildnml** may +either be called from the command line or as a Python subroutine. If +buildnml is called from the command line, it will be passed the +caseroot directory on the command line. If it is called as a +subroutine, the subroutine name must be buildnml and it will take +three arguments: a Case object, a caseroot directory, and a component +name. The **buildlib** script will always be called from the command +line; it is called in the case.build step and is expected to build the +component library. The buildlib script will be called with three +arguments; in order, they are caseroot, libroot (the location of the +installed library, typically EXEROOT/lib), and bldroot, the location of the component +build directory. Look at the CIME internal components such as datm +for an example. + +The coupler interface depends on which coupler is used. For the MCT coupler in CIME, +the component model must provide NNN_INIT_MCT, NNN_RUN_MCT, and NNN_FINAL_MCT, where NNN is the +component type of the particular component (e.g. ATM for an atmosphere, LND for a land model); +these subroutines are expected to be in the component library. 
diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/build_cpl/index.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/build_cpl/index.rst.txt new file mode 100644 index 00000000000..a492a431548 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/build_cpl/index.rst.txt @@ -0,0 +1,24 @@ +.. _build-cpl: + +.. on documentation master file, created by + sphinx-quickstart on Tue Jan 31 19:46:36 2017. + You can adapt this file completely to your liking, but it should at least + contain the root `toctree` directive. + +####################################################################### +Building a Coupled Model with CIME +####################################################################### + +.. toctree:: + :maxdepth: 3 + :numbered: + + introduction.rst + adding-components.rst + +Indices and tables +================== + +* :ref:`genindex` +* :ref:`modindex` +* :ref:`search` diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/build_cpl/introduction.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/build_cpl/introduction.rst.txt new file mode 100644 index 00000000000..0352da61bf7 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/build_cpl/introduction.rst.txt @@ -0,0 +1,14 @@ +Introduction +============ + +Content to go here: + +How to add a new component model to CIME. + +How to replace an existing CIME model with another one. + +How to integrate your model into the CIME build/configure system and coupler. + +How to work with the CIME-supplied models. + +What to do if you want to add another component to the long name. 
diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/change.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/change.rst.txt
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/glossary/index.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/glossary/index.rst.txt
new file mode 100644
index 00000000000..6ab8f033fa9
--- /dev/null
+++ b/branch/azamat/baselines/update-perf-info/html/_sources/glossary/index.rst.txt
@@ -0,0 +1,171 @@
+.. _glossary:
+
+########
+Glossary
+########
+
+.. toctree::
+   :maxdepth: 1
+   :numbered:
+
+*********
+General
+*********
+
+.. glossary::
+
+   active or prognostic component
+      Solves a complex set of equations to describe a sub-model’s behavior.
+
+   case (CASE)
+      An instance of a global climate model simulation. A case is defined by a component set, a model grid,
+      a machine, a compiler, and any other additional customizations.
+
+   component
+      A sub-model coupled with other components to constitute a global climate modeling system.
+      Example components: atmosphere, ocean, land, etc.
+
+   component set (COMPSET)
+      A complete set of components to be linked together into a climate model to
+      run a specific case.
+
+   data component
+      Replacement for an active component. Sends and receives the same variables to and from other
+      models (but ignores the variables received).
+
+   grid (GRID)
+      A set of numerical grids of a case. Each active component operates on its own numerical grid.
+
+   resolution
+      Used to refer to a set of grids. Each grid within a set may have a different resolution.
+
+   stub component
+      Simply occupies the required place in the climate execution sequence and does not send or receive
+      any data.
+
+*********
+Coupling
+*********
+
+.. glossary::
+
+   coupler
+      A component of the CIME infrastructure that is run from within the driver.
It can be run on a
+      subset of the total processors, and carries out mapping (interpolation), merging, diagnostics, and other
+      calculations.
+
+   driver
+      The hub that connects all components. The CIME driver runs on all hardware processors, runs the top
+      level instructions, and executes the driver time loop.
+
+   forcing
+      An imposed perturbation of Earth's energy balance.
+
+   Model Coupling Toolkit or MCT
+      A library used by CIME for all data rearranging and mapping (interpolation).
+
+   mask
+      Determines land/ocean boundaries in the model.
+
+   mapping
+      Interpolation of fields between components.
+
+*********************
+Files and Directories
+*********************
+
+.. glossary::
+
+   archive directory (DOUT_S_ROOT)
+      If short-term archiving is activated (DOUT_S = TRUE), the restart files and run output files
+      are copied to the archive directory location (DOUT_S_ROOT).
+
+   build directory (EXEROOT)
+      Location where the case is built.
+
+   case root (CASEROOT)
+      The directory where the case is created. Includes namelist files, xml files, and scripts to set up,
+      build, and run the case. Also includes logs and timing output files.
+
+   CIME root (CIMEROOT)
+      The directory where the CIME source code resides.
+
+   history files
+      NetCDF files that contain fields associated with the state of the model at a given time slice.
+
+   initial files
+      Files required to start a run.
+
+   input data stream (DIN_LOC_ROOT)
+      A time-series of input data files where all the fields in the stream are located in the
+      same data file and all share the same spatial and temporal coordinates.
+
+   namelist files
+      Each namelist file includes input parameters for a specific component.
+
+   run directory (RUNDIR)
+      Where the case is run.
+
+   restart files
+      Written and read by each component in the RUNDIR to stop and subsequently restart in a bit-for-bit fashion.
+
+   rpointer files
+      Text files written by the coupler in the RUNDIR with a list of the files required for model restart.
+
+   XML files
+      Elements and attributes in these files configure a case (building, running, batch, etc.). These files
+      include env_archive.xml, env_batch.xml, env_build.xml, env_case.xml, env_mach_pes.xml, env_mach_specific.xml, env_run.xml
+      in CASEROOT and can be queried and modified using the xmlquery and xmlchange tools.
+
+***********
+Development
+***********
+
+.. glossary::
+
+   sandbox (SRCROOT)
+      A checked-out tag on a local or a remote machine. It may be edited to create a new tag, or it may
+      just be used for running cases.
+
+   source modifications (CASEROOT/SourceMods)
+      One or more source files that are modified by the user. Before building a case, CIME replaces
+      the original source files with these files.
+
+   tag
+      A snapshot of the source code. Each consecutive tag introduces one or more answer-changing
+      modifications to the source code of a component.
+
+   user namelist files (CASEROOT/user_nl_*)
+      User modifications for a given case can be specified in these files.
+
+********
+Testing
+********
+
+.. glossary::
+
+   baseline
+      A set of test cases that is run using a tag which is complete, tested, and has no modifications
+      in the source code. Used to assess the performance/accuracy of a case that is run using a sandbox.
+
+   baseline failure
+      A test that fails in its comparison with a baseline.
+
+   blessing
+      Part of the unit testing framework used by CIME scripts regression tests.
+
+   regression test
+      A test that compares with baseline results to determine if any new errors have been introduced
+      into the code base.
+
+   unit testing
+      A fast, self-verifying test of a small piece of code.
+
+*************
+Miscellaneous
+*************
+
+..
glossary:: + + ESP + External System Processing: handles data assimilation diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/index.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/index.rst.txt new file mode 100644 index 00000000000..89cc7155218 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/index.rst.txt @@ -0,0 +1,46 @@ +.. on documentation master file, created by + sphinx-quickstart on Tue Jan 31 19:46:36 2017. + You can adapt this file completely to your liking, but it should at least + contain the root `toctree` directive. + +CIME documentation +================== + +The Common Infrastructure for Modeling the Earth (CIME - pronounced +"SEAM") provides a Case Control System for configuring, compiling and executing +Earth system models, data and stub model components, a driver and associated tools +and libraries. + +Table of contents +----------------- +.. toctree:: + :maxdepth: 2 + + what_cime/index.rst + users_guide/index.rst + build_cpl/index.rst + misc_tools/index.rst + +Appendices +---------- +.. toctree:: + :maxdepth: 2 + + glossary/index.rst + Tools_user/index.rst + xml_files/index.rst + CIME_api/modules.rst + Tools_api/modules.rst + +Python Module Indices and Search +--------------------------------- + +* :ref:`genindex` +* :ref:`modindex` +* :ref:`search` + + + +CIME is developed by the +`E3SM `_ and +`CESM `_ projects. diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/misc_tools/ect.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/misc_tools/ect.rst.txt new file mode 100644 index 00000000000..5a2af9737de --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/misc_tools/ect.rst.txt @@ -0,0 +1,118 @@ +.. 
_ensemble-consistency-test:
+
+==========================================
+CESM-ECT (CESM Ensemble Consistency Test):
+==========================================
+
+CESM-ECT is a suite of tests to determine whether a new
+simulation setup (new machine, compiler, etc.) is statistically
+distinguishable from an accepted ensemble. The verification tools in
+the CESM-ECT suite are:
+
+CAM-ECT - detects issues in CAM and CLM (12-month runs)
+UF-CAM-ECT - detects issues in CAM and CLM (9-time-step runs)
+POP-ECT - detects issues in POP and CICE (12-month runs)
+
+The ECT process involves comparing runs generated with
+the new scenario (3 for CAM-ECT and UF-CAM-ECT, and 1 for POP-ECT)
+to an ensemble built on a trusted machine (currently
+cheyenne). The Python ECT tools are located in the pyCECT
+subdirectory or at https://github.com/NCAR/PyCECT/releases.
+
+-OR-
+
+We now provide a web server for CAM-ECT and UF-CAM-ECT, where
+you can upload the (3) generated runs for comparison to our ensemble.
+Please see the webpage at http://www.cesm.ucar.edu/models/cesm2/verification/
+for further instructions.
+
+-------------------------------------
+Creating or obtaining a summary file:
+-------------------------------------
+
+Before the test can be run, a summary file of the ensemble
+runs to which the comparison will be made is needed. Ensemble summary files
+(NetCDF) for existing tags for CAM-ECT, UF-CAM-ECT, and POP-ECT that
+were created by CSEG are located (respectively) in the CESM input data
+directories:
+
+$CESMDATAROOT/inputdata/validation/ensembles
+$CESMDATAROOT/inputdata/validation/uf_ensembles
+$CESMDATAROOT/inputdata/validation/pop_ensembles
+
+If none of our ensembles are suitable for your needs, then you may create
+your own ensemble (and summary file) using the following instructions:
+
+(1) To create a new ensemble, use the ensemble.py script in this directory.
+
+This script creates and compiles a case, then creates clones of the
+original case, where the initial temperature perturbation is slightly modified
+for each ensemble member. At this time, CIME includes functionality
+to create ensembles for CAM-ECT, UF-CAM-ECT, and POP-ECT.
+
+(2) Use --ect to specify whether the ensemble is for CAM or POP.
+(See 'python ensemble.py -h' for additional details.)
+
+(3) Use --ensemble to specify the ensemble size.
+Recommended ensemble sizes:
+CAM-ECT: 151
+UF-CAM-ECT: 350
+POP-ECT: 40
+
+(4) Examples:
+
+CAM-ECT:
+
+python ensemble.py --case /glade/scratch/cesm_user/cesm_tag/ensemble/ensemble.cesm_tag.000 --mach cheyenne --ensemble 151 --ect cam --project P99999999
+
+
+UF-CAM-ECT:
+
+python ensemble.py --case /glade/scratch/cesm_user/cesm_tag/uf_ensemble/ensemble.cesm_tag.uf.000 --mach cheyenne --ensemble 350 --uf --ect cam --project P99999999
+
+POP-ECT:
+
+python ensemble.py --case /glade/scratch/cesm_user/cesm_tag/uf_ensemble/ensemble.cesm_tag.000 --mach cheyenne --ensemble 40 --ect pop --project P99999999
+
+Notes:
+    (a) ensemble.py accepts (most of) the arguments of create_newcase
+
+    (b) the case name must end in ".000" and include the full path
+
+    (c) the ensemble size must be specified, and suggested defaults are listed
+        above. Note that for CAM-ECT and UF-CAM-ECT, the ensemble size
+        needs to be larger than the number of variables that ECT will evaluate.
+
+
+(5) Once all ensemble simulations have run successfully, copy every CAM history
+file (*.cam.h0.*) for CAM-ECT and UF-CAM-ECT, or monthly POP history file
+(*.pop.h.*) for POP-ECT, from each ensemble run directory into a separate directory.
+Next create the ensemble summary using the pyCECT tool pyEnsSum.py (for CAM-ECT and
+UF-CAM-ECT) or pyEnsSumPop.py (for POP-ECT). For details see README_pyEnsSum.rst
+and README_pyEnsSumPop.rst with the pyCECT tools.
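
Step (5) above amounts to collecting history files from every ensemble member's run directory into one place. A minimal Python sketch, assuming a hypothetical layout in which each member case has a ``run/`` subdirectory under a common root (the real directory layout depends on your machine configuration):

```python
import glob
import os
import shutil

def gather_history_files(ensemble_root, dest_dir, pattern="*.cam.h0.*"):
    # Copy matching history files from each member's run directory into
    # dest_dir, as required before running pyEnsSum.py / pyEnsSumPop.py.
    os.makedirs(dest_dir, exist_ok=True)
    copied = []
    for run_dir in sorted(glob.glob(os.path.join(ensemble_root, "*", "run"))):
        for hist in sorted(glob.glob(os.path.join(run_dir, pattern))):
            shutil.copy(hist, dest_dir)
            copied.append(os.path.basename(hist))
    return copied
```

For POP-ECT, the same helper would be called with ``pattern="*.pop.h.*"``.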
+
+-------------------
+Creating test runs:
+-------------------
+
+(1) Once an ensemble summary file has been created or chosen from
+$CESMDATAROOT/inputdata/validation, the simulation
+run(s) to be verified by ECT must be created via the script ensemble.py.
+
+NOTE: It is important that the **same** resolution and compset be used in the
+individual runs as in the ensemble. The NetCDF ensemble summary file global
+attributes give this information.
+
+(2) For example, for CAM-ECT:
+
+python ensemble.py --case /glade/scratch/cesm_user/cesm_tag/camcase.cesm_tag.000 --ect cam --mach cheyenne --project P99999999 --compset F2000climo --res f19_f19
+
+For example, for UF-CAM-ECT:
+
+python ensemble.py --case /glade/scratch/cesm_user/cesm_tag/uf.camcase.cesm_tag.000 --ect cam --uf --mach cheyenne --project P99999999 --compset F2000climo --res f19_f19
+
+For example, for POP-ECT:
+
+python ensemble.py --case /glade/scratch/cesm_user/cesm_tag/popcase.cesm_tag.000 --ect pop --mach cheyenne --project P99999999 --compset G --res T62_g17
+
+(3) Next verify the new simulation(s) with the pyCECT tool pyCECT.py (see
+README_pyCECT.rst with the pyCECT tools).
diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/misc_tools/index.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/misc_tools/index.rst.txt
new file mode 100644
index 00000000000..9130145901a
--- /dev/null
+++ b/branch/azamat/baselines/update-perf-info/html/_sources/misc_tools/index.rst.txt
@@ -0,0 +1,30 @@
+.. _misc-tools:
+
+.. on documentation master file, created by
+   sphinx-quickstart on Tue Jan 31 19:46:36 2017.
+   You can adapt this file completely to your liking, but it should at least
+   contain the root `toctree` directive.
+
+#####################################
+ Miscellaneous Tools
+#####################################
+
+In addition to basic infrastructure for a coupled model, CIME contains in its distribution several stand-alone
+tools that are necessary and/or useful when building a climate model. Guides for using them will be here.
+
+.. toctree::
+   :maxdepth: 3
+   :numbered:
+
+
+   ect.rst
+   mapping-tools.rst
+   cprnc.rst
+   load-balancing-tool.rst
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/misc_tools/load-balancing-tool.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/misc_tools/load-balancing-tool.rst.txt
new file mode 100644
index 00000000000..2701b9e1b6f
--- /dev/null
+++ b/branch/azamat/baselines/update-perf-info/html/_sources/misc_tools/load-balancing-tool.rst.txt
@@ -0,0 +1,379 @@
+.. _load_balancing_tool:
+
+
+=========================
+ CIME Load Balancing Tool
+=========================
+
+ Originally developed by Sheri Mickelson mickelso@ucar.edu
+ and Yuri Alekseev (ALCF/Argonne National Laboratory)
+
+ Updated 2017 Jason Sarich sarich@mcs.anl.gov (Argonne National Laboratory)
+
+
+This Load Balancing tool performs several operations intended to find
+a reasonable PE layout for CIME simulations. These operations involve two
+steps::
+
+  1. load_balancing_submit.py
+     Run a series of simulations in order to obtain timing data
+
+  2. load_balancing_solve.py
+     Using the data provided by the previous program, solve a mixed integer
+     linear program to optimize the model throughput. Requires installation
+     of PuLP and uses the included COIN-CBC solver. (https://pythonhosted.org/PuLP)
+
+Also in this documentation are::
+
+  3. More about the algorithm used
+
+  4. Extending the solver for other models
+
+  5. Testing information for developers
+
+
+*For the impatient*
+
+
+1. 
set PYTHONPATH to include $CIME_DIR/scripts:$CIME_DIR/tools/load_balancing_tool
+
+2. create a PE XML file to describe the PE layouts for the timing runs
+
+3. $ ./load_balancing_submit.py --res <resolution> --compset <compset> --pesfile <pesfile>
+
+4. ... wait for jobs to run ...
+
+5. $ ./load_balancing_solve.py --total-tasks <N> --blocksize 8
+
+
+
+******************************************************************
+Running simulations using load_balancing_submit.py
+******************************************************************
+
+Simulations can be run on a given system by executing the load_balancing_submit.py
+script, located in cime/tools/load_balancing_tool/load_balancing_submit.py.
+This creates timing files in the case directory which will be used to solve
+a mixed integer linear program optimizing the layout. If timing
+information is already available, this step can be skipped.
+
+As with the create_newcase and create_test scripts, command line options
+are used to tailor the simulations for a given model. These values will be
+forwarded directly to the create_test script::
+
+  --compiler
+  --project
+  --compset (required)
+  --res (required)
+  --machine
+
+Other options include::
+
+  --pesfile (required)
+     This file is used to designate the PE layouts that
+     are used to create the timing data. The format is the same used
+     by CIME pes_files, but note that the 'pesize' tag will be used
+     to generate the casename. Also, this file will not be directly
+     passed through to CIME, but rather it will trigger xmlchange
+     commands to execute based on the values in the file.
+
+  --test-id
+     By default, the load balancing tool will use casenames:
+        PFS_I0.res.compset.lbt
+        PFS_I1.res.compset.lbt
+        ...
+        PFS_IN.res.compset.lbt
+     for each simulation requested. These casenames will be forwarded to
+     the create_test script.
+
+     Using this option will instead direct the tool to use:
+        PFS_I0.res.compset.test-id
+        PFS_I1.res.compset.test-id
+        ...
+        PFS_IN.res.compset.test-id
+
+  --force-purge
+     Force the tool to remove any existing case directories if they
+     exist. Removes PFS_I*.res.compset.test-id
+
+  --extra-options-file
+     Add extra xml options to the timing runs from a user file;
+     these options will be set after create_newcase and before
+     case.setup.
+     This text file should have one variable per line in
+     the format VARIABLE=VALUE. Example:
+
+        STOP_OPTION=ndays
+        STOP_N=7
+        DOUT_S=FALSE
+
+
+******************************************************************
+Optimizing the layout using load_balancing_solve.py
+******************************************************************
+
+Reads timing data created with load_balancing_submit.py (or otherwise,
+see the --timing-files option) and solves a mixed integer optimization problem
+using these timings. The default layout (IceLndAtmOcn) minimizes the cost per
+model day assuming the layout::
+
+      ____________________
+     | ICE  | LND  |      |
+     |______|______|      |
+     |             | OCN  |
+     |     ATM     |      |
+     |_____________|______|
+
+
+An IceLndWavAtmOcn layout is also available. It is possible to extend
+this tool to solve for other layouts (see the section Extending the Load
+Balancing Tool).
+
+Note -- threading is not considered part of this optimization; it is assumed that
+all timing data have the same threading structure (i.e. all ATM runs use two threads per PE).
+
+Options recognized by the solver::
+
+  --layout
+     Name of the class used to solve the layout problem. The only built-in
+     class at this time is the default IceLndAtmOcn, but this can be extended.
+     See the section Extending the Load Balancing Tool.
+
+  --total-tasks N (required)
+     The total number of PEs that can be assigned
+
+  --timing-dir
+     Optional, read in all files from this directory as timing data
+
+  --test-id
+     The test-id used when submitting the timing jobs. This option can also
+     be used to set a single directory where ALL of the timing data is.
+     The solver will extract data from timing files that match either pattern:
+        .test-id/timing/timing..test-id
+        .test-id/timing/timing..test-id
+
+  --blocksize N
+     The blocksize is the granularity of processors that will be grouped
+     together, useful when PEs need to be multiples of 8, 16, etc.
+
+  --blocksize-XXX N
+     Components do not all have to have the same blocksize. The default
+     blocksize given by --blocksize can be overridden for a given component
+     using this option, where XXX can be ATM, ICE, GLC, etc.
+     Example:
+        --blocksize 8 --blocksize-GLC 1
+     will set the GLC blocksize to 1 and all other blocksizes to 8
+
+  --milp-output
+     After extracting data from timing files and before solving, write the
+     data to a .json file where it can be analyzed or manually edited.
+
+  --milp-input
+     Read in the problem from the given .json file instead of extracting from
+     timing files.
+
+  --pe-output
+     Write the solution PE layout to a potential pe xml file.
+
+
+***************************
+More about the algorithm
+***************************
+
+Before solving the mixed-integer linear program, a model of the cost vs. ntasks
+function is constructed for each component.
+
+Given a component data set of costs (C1,C2,..,Cn) and nblocks (N1,N2,..,Nn),
+a piecewise set of n+1 linear constraints is created using the idea:
+
+If N < N1 (which means that N1 cannot be 1), then assume that there is
+perfect scalability from N to N1. Thus the cost is on the line
+defined by the points (1, C1*N1) - (N1, C1).
+
+If N is between N_i and N_{i+1}, then the cost is on the line defined by the
+points (N_i, C_i) and (N_{i+1}, C_{i+1}).
+
+If N > Nn, then we want to extrapolate the cost at N=total_tasks
+(we define N{n+1} = total_tasks, C{n+1} = estimated cost using all nodes).
+Assuming perfect scalability is problematic at this level, so we instead
+assume that the parallel efficiency drops at the same factor as it does
+
+   from N=N{n-1} to N=Nn
+
+   First solve for the efficiency E:
+      C{n-1} - Cn = E * (C{n-1} * N{n-1} / Nn)
+
+   Then use E to find C{n+1} (the cost Ct at ntasks Nt = N{n+1}):
+      Cn - Ct = E * (Cn * Nn / Nt)
+
+   Now the cost is on the line defined by (Nn,Cn) - (Nt,Ct)
+
+Assuming that this piecewise linear function describes a convex function, we do
+not have to explicitly construct this piecewise function and can instead use
+each of the cost functions on the entire domain.
+
+These piecewise linear models give us the following linear constraints, where
+the model time cost C as a function of N (ntasks) for each component
+is constrained by::
+
+   C >= Ci - Ni * (C{i+1}-Ci) / (N{i+1}-Ni)
+          + N * (C{i+1}-Ci) / (N{i+1}-Ni)     for i=0..n
+
+
+These constraints should be in effect for any extensions of the solver (the
+components involved may be different).
+
+There are options available in load_balancing_solve.py to inspect these
+piecewise linear models::
+
+  --graph-models (requires matplotlib)
+  --print-models (debugging mode; writes the models to the log)
+
+
+Now that these constraints are defined, the mixed integer linear program (MILP)
+follows from the layout::
+
+   NOTES: variable N[c] is the number of tasks assigned to component c
+          variable NB[c] is the number of blocks assigned to component c
+          constant C[c]_i is the cost contributed by component c from
+                   timing data set i
+          constant N[c]_i is the ntasks assigned to component c from
+                   timing data set i
+
+         ____________________
+        | ICE  | LND  |      |
+   T1   |______|______|      |
+        |             | OCN  |
+        |     ATM     |      |
+   T    |_____________|______|
+
+   Min T
+   s.t. 
Tice <= T1
+        Tlnd <= T1
+        T1 + Tatm <= T
+        Tocn <= T
+
+        NB[c] >= 1 for c in [ice,lnd,ocn,atm]
+        N[ice] + N[lnd] <= N[atm]
+        N[atm] + N[ocn] <= TotalTasks
+        N[c] = blocksize * NB[c], for c in [ice,lnd,ocn,atm]
+
+
+        T[c] >= C[c]_{i} - N[c]_{i} *
+                  (C[c]_{i+1} - C[c]_{i}) / (N[c]_{i+1} - N[c]_{i})
+                  + N[c] * (C[c]_{i+1} - C[c]_{i})
+                  / (N[c]_{i+1} - N[c]_{i}),
+                  for i=0..#data points (original + extrapolated),
+                  c in [ice,lnd,ocn,atm]
+        all T vars >= 0
+        all N,NB vars integer
+
+This MILP is solved using the PuLP Python interface to the COIN-CBC solver:
+https://pythonhosted.org/PuLP/
+https://www.coin-or.org/Cbc/
+
+
+************************************
+Extending the Load Balancing Tool
+************************************
+The file $CIME_DIR/tools/load_balancing_tool/optimize_model.py
+contains a base class OptimizeModel as well as an implementation class
+IceLndAtmOcn. Any layout solver will look similar to IceLndAtmOcn
+except for the components involved and the layout-specific constraints.
+
+Example class and inherited methods that should be overridden:
+
+file my_new_layout.py::
+
+    import optimize_model
+
+    class MyNewLayout(optimize_model.OptimizeModel):
+        def get_required_components(self):
+            """
+            Should be overridden by derived class. Return a list of required
+            components (capitalized) used in the layout.
+            Example: return ['ATM', 'LND', 'ICE']
+            """
+
+        def optimize(self):
+            """
+            Run the optimization.
+            Must set self.state using an LpStatus object:
+               LpStatusOptimal    -> STATE_SOLVED_OK
+               LpStatusNotSolved  -> STATE_UNSOLVED
+               LpStatusInfeasible -> STATE_SOLVED_BAD
+               LpStatusUnbounded  -> STATE_SOLVED_BAD
+               LpStatusUndefined  -> STATE_UNDEFINED
+               -- use self.set_state(lpstatus) --
+            Returns state.
+
+            If solved, then the solution will be stored in the self.X dictionary,
+            indexed by variable name. Suggested convention:
+               'Tice', 'Tlnd', ... for cost per component
+               'Nice', 'Nlnd', ... for ntasks per component
+               'NBice', 'NBlnd', ... 
for number of blocks per component + + The default implementation of get_solution() returns a dictionary + of these variable keys and their values. + """ + + def get_solution(self): + """ + Return a dictionary of the solution variables, can be overridden. + Default implementation returns values in self.X + """ + + +To use this new layout: + 1. save the class MyNewLayout in file my_new_layout.py + 2. make sure that my_new_layout.py is in PYTHONPATH + 3. Use those names in your execution command line argument to --layout + :: + + $ ./load_balancing_solve.py ... --layout my_new_layout.MyNewLayout + +To permanently add to CIME: + + 1. add MyNewLayout class to layouts.py + 2. run using '--layout MyNewLayout' + 3. add test in tests/load_balance_test.py that uses that name in command + line argument (see test for atm_lnd) + 4. make pull request + + +******* +Testing +******* + +To run the provided test suite: + + 1. set PYTHONPATH to include CIME libraries:: + + $ export CIME_DIR=/path/to/cime + $ export PYTHONPATH=$CIME_DIR/scripts:$CIME_DIR/tools/load_balancing_tool + + 2. To run an example:: + + $ cd $CIME_DIR/tools/load_balancing_tool + $ ./load_balancing_solve.py --json-input tests/example.json --blocksize 8 + Solving Mixed Integer Linear Program using PuLP interface to COIN-CBC + PuLP solver status: Solved + COST_ATM = 22.567587 + COST_ICE = 1.375768 + COST_LND = 1.316000 + COST_OCN = 15.745000 + COST_TOTAL = 23.943355 + NBLOCKS_ATM = 124 + NBLOCKS_ICE = 109 + NBLOCKS_LND = 15 + NBLOCKS_OCN = 4 + NTASKS_ATM = 992 + NTASKS_ICE = 872 + NTASKS_LND = 120 + NTASKS_OCN = 32 + NTASKS_TOTAL = 1024 + + 3. 
To run the test suite::
+
+      $ cd $CIME_DIR/tools/load_balancing_tool
+      $ ./tests/load_balancing_test.py
diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/building-a-case.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/building-a-case.rst.txt
new file mode 100644
index 00000000000..3f820171bc5
--- /dev/null
+++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/building-a-case.rst.txt
@@ -0,0 +1,149 @@
+.. _building-a-case:
+
+******************
+Building a Case
+******************
+
+Once the case has been created and set up, it's time to build the executable.
+Several directories of source code must all be built with the same compiler and flags.
+**case.build** performs all build operations (setting dependencies, invoking Make,
+and creating the executable).
+
+.. _building-the-model:
+
+========================
+Calling **case.build**
+========================
+
+After calling `case.setup <../Tools_user/case.setup.html>`_ , run `case.build <../Tools_user/case.build.html>`_ to build the model executable. Running this will:
+
+1. Create the component namelists in ``$RUNDIR`` and ``$CASEROOT/CaseDocs``.
+2. Create the necessary compiled libraries used by the coupler and component models: ``mct``, ``pio``, ``gptl`` and ``csm_share``.
+   The libraries will be placed in a path below ``$SHAREDLIBROOT``.
+3. Create the necessary compiled libraries for each component model. These are placed in ``$EXEROOT/bld/lib``.
+4. Create the model executable (``$MODEL.exe``), which is placed in ``$EXEROOT``.
+
+You do not need to change the default build settings to create the executable, but it is useful to become familiar with them in order to make optimal use of the system. The CIME scripts provide you with a great deal of flexibility in customizing the build process.
+
+The **env_build.xml** variables control various aspects of building the executable.
Most of the variables should not be modified, but users can modify these:
+
+- ``$BUILD_THREADED`` : if TRUE, the model will be built with OpenMP.
+
+- ``$DEBUG`` : if TRUE, the model is compiled with debugging instead of optimization flags.
+
+- ``$GMAKE_J`` : how many threads GNU Make should use while building.
+
+The best way to see what xml variables are in your ``$CASEROOT`` directory is to use the `xmlquery <../Tools_user/xmlquery.html>`_ command. For usage information, run:
+::
+
+   > ./xmlquery --help
+
+To build the model, change to your ``$CASEROOT`` directory and execute **case.build**.
+::
+
+   > cd $CASEROOT
+   > ./case.build
+
+Diagnostic comments appear as the build proceeds.
+
+The `case.build <../Tools_user/case.build.html>`_ command generates the utility and component libraries and the model executable, and it generates build logs for each component.
+Each log file is named in the form **$component.bldlog.$datestamp**. The logs are located in ``$BLDDIR``. If they are compressed (as indicated by a .gz file extension), the build ran successfully.
+
+Invoking `case.build <../Tools_user/case.build.html>`_ creates the following directory structure in ``$EXEROOT`` if the Intel compiler is used:
+::
+
+   atm/, cpl/, esp/, glc/, ice/, intel/, lib/, lnd/, ocn/, rof/, wav/
+
+Except for **intel/** and **lib/**, each directory contains an **obj/** subdirectory for the target model component's compiled object files.
+
+The *mct*, *pio*, *gptl* and *csm_share* libraries are placed in a directory tree that reflects their dependencies. See the **bldlog** for a given component to locate the library.
+
+Special **include** modules are placed in **lib/include**. The model executable (**cesm.exe** or **e3sm.exe**, for example) is placed directly in ``$EXEROOT``.
+
+Component namelists, component logs, output data sets, and restart files are placed in ``$RUNDIR``.
+It is important to note that ``$RUNDIR`` and ``$EXEROOT`` are independent variables that are set in the **$CASEROOT/env_run.xml** file. + +.. _rebuilding-the-model: + +======================== +Rebuilding the model +======================== + +Rebuild the model under the following circumstances: + +If either **env_build.xml** or **Macros.make** has been modified, and/or if code is added to **SourceMods/src.**, it's safest to clean the build and rebuild from scratch as shown here: +:: + + > cd $CASEROOT + > ./case.build --clean-all + +If you have ONLY modified the PE layout in **env_mach_pes.xml**, a clean may not be required. +:: + + > cd $CASEROOT + > ./case.build + +If the threading has been changed (turned on or off) in any component since the previous build, the build script should fail with the following error and suggestion that the model be rebuilt from scratch: +:: + + ERROR SMP STATUS HAS CHANGED + SMP_BUILD = a0l0i0o0g0c0 + SMP_VALUE = a1l0i0o0g0c0 + A manual clean of your obj directories is strongly recommended. + You should execute the following: + ./case.build --clean + ./case.build + + ---- OR ---- + + You can override this error message at your own risk by executing: + ./xmlchange SMP_BUILD=0 + Then rerun the build script interactively. + +If there is any doubt, rebuild. + +Run this to clean all of the model components (except for support libraries such as *mct* and *gptl*): + :: + + > case.build --clean + +Run this to clean everything associated with the build: + :: + + > case.build --clean-all + +You can also clean an individual component as shown here, where "compname" is the name of the component you want to clean (for example, atm, clm, pio and so on). + :: + + > case.build --clean compname + +Review the **help** text for more information. + +.. _inputdata: + +========== +Input data +========== + +All active components and data components use input data sets. 
In order to run CIME and the CIME-compliant active components, a local disk needs the directory tree that is specified by the xml variable ``$DIN_LOC_ROOT`` to be populated with input data. + +Input data is provided as part of the CIME release via data from a subversion input data server. It is downloaded from the server on an as-needed basis determined by the case. Data may already exist in the default local file system's input data area as specified by ``$DIN_LOC_ROOT``. + +Input data can occupy significant space on a system, so users should share a common ``$DIN_LOC_ROOT`` directory on each system if possible. + +The build process handles input data as follows: + +- The **buildnml** scripts in the various component ``cime_config`` directories create listings of required component input data sets in the ``Buildconf/$component.input_data_list`` files. + +- `check_input_data <../Tools_user/check_input_data.html>`_ , which is called by `case.build <../Tools_user/case.build.html>`_ , checks for the presence of the required input data files in the root directory ``$DIN_LOC_ROOT``. + +- If all required data sets are found on the local disk, the build can proceed. + +- If any of the required input data sets are not found locally, the + files that are missing are listed. At this point, you must obtain + the required data from the input data server with `check_input_data + <../Tools_user/check_input_data.html>`_ as shown here: :: + + check_input_data --download + +The **env_run.xml** variables ``$DIN_LOC_ROOT`` and ``$DIN_LOC_ROOT_CLMFORC`` determine where you should expect input data to reside on a local disk. 
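
The local presence check described above can be pictured with a short sketch: scan a ``Buildconf/$component.input_data_list`` file and report which entries are missing under ``$DIN_LOC_ROOT``. The ``name = relative/path`` line format here is an assumption for illustration; the real **check_input_data** tool handles more cases and can also download the missing files:

```python
import os

def missing_input_data(input_data_list, din_loc_root):
    """Return the relative paths listed in input_data_list that are not
    present under din_loc_root (hypothetical 'name = path' file format)."""
    missing = []
    with open(input_data_list) as f:
        for line in f:
            line = line.strip()
            if not line or "=" not in line:
                continue  # skip blanks and anything not a key = value pair
            _, relpath = line.split("=", 1)
            relpath = relpath.strip()
            if not os.path.exists(os.path.join(din_loc_root, relpath)):
                missing.append(relpath)
    return missing
```

If the returned list is empty, the build can proceed; otherwise the listed files would need to be fetched first.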
diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/cime-change-namelist.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/cime-change-namelist.rst.txt new file mode 100644 index 00000000000..54497898d2b --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/cime-change-namelist.rst.txt @@ -0,0 +1,367 @@
+.. _namelist-gen:
+
+Customizing your input variables
+================================
+
+CIME and CIME-compliant components primarily use Fortran namelists to control runtime options. Some components use
+other text-based files for runtime options.
+
+All CIME-compliant components generate their input variable files using a **buildnml** script typically located in the
+component's **cime_config** directory (or other location as set in **config_files.xml**).
+**buildnml** may call other scripts to complete construction of the input file.
+
+For example, the CIME data atmosphere model (DATM) generates namelists using the script **$CIMEROOT/components/data_comps/datm/cime_config/buildnml**.
+
+You can customize a model's namelists in one of two ways:
+
+1. by editing the **$CASEROOT/user_nl_xxx** files
+
+   These files should be modified via keyword-value pairs that correspond to new namelist or input data settings. They use the
+   syntax of Fortran namelists.
+
+2. by calling `xmlchange <../Tools_user/xmlchange.html>`_ to modify xml variables in your ``$CASEROOT``.
+
+   Many of these variables are converted to Fortran namelist values for input by the models. Variables that have
+   to be coordinated between models in a coupled system (such as how many steps to run for) are usually in a CIME xml file.
+
+You can generate the component namelists by running `preview_namelists <../Tools_user/preview_namelists.html>`_ from ``$CASEROOT``.
+
+This results in the creation of component namelists (for example, atm_in, lnd_in, and so on) in ``$CASEROOT/CaseDocs/``.
+
+.. 
warning:: The namelist files in ``CaseDocs`` are there only for user reference and **SHOULD NOT BE EDITED** since they are overwritten every time `preview_namelists <../Tools_user/preview_namelists.html>`_ and `case.submit <../Tools_user/case.submit.html>`_ are called and the files read at runtime are not the ones in ``CaseDocs``.
+
+.. _use-cases-modifying-driver-namelists:
+
+Customizing driver input variables
+-------------------------------------------
+
+The driver input namelists/variables are contained in the files **drv_in**, **drv_flds_in** and **seq_maps.rc**. Note that **seq_maps.rc** has a different file format than the other two input files.
+
+All driver namelist variables are defined in the file **$CIMEROOT/src/drivers/mct/cime_config/namelist_definition_drv.xml**.
+
+The variables that can be changed only by modifying xml variables appear with the *entry* attribute ``modify_via_xml="xml_variable_name"``.
+
+All other driver namelist variables can be modified by adding a keyword-value pair at the end of ``user_nl_cpl``.
+
+For example, to change the driver namelist value of ``eps_frac`` to ``1.0e-15``, add the following line to the end of ``user_nl_cpl``:
+
+::
+
+   eps_frac = 1.0e-15
+
+On the other hand, to change the driver namelist value of the starting year/month/day ``start_ymd`` to ``18500901``, use the command:
+
+::
+
+   ./xmlchange RUN_STARTDATE=1850-09-01
+
+To see the result of the change, call `preview_namelists <../Tools_user/preview_namelists.html>`_ and verify that the new value appears in **CaseDocs/drv_in**.
+
+.. _basic_example:
+
+Setting up a multi-year run
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This shows all of the steps necessary to do a multi-year simulation starting from a "cold start" for all components. The
+compset and resolution in this example are for a CESM fully-coupled case but the steps are similar for other models and cases.
+
+1. Create a new case named EXAMPLE_CASE in your **$HOME** directory. 
+
+   ::
+
+      > cd $CIMEROOT/scripts
+      > ./create_newcase --case ~/EXAMPLE_CASE --compset B1850 --res f09_g17
+
+2. Check the pe-layout by running **./pelayout**. Make sure it is suitable for your machine.
+   If it is not, use `xmlchange <../Tools_user/xmlchange.html>`_ or `pelayout <../Tools_user/pelayout.html>`_ to modify your pe-layout.
+   Then set up your case and build your executable.
+
+   ::
+
+      > cd ~/EXAMPLE_CASE
+      > ./case.setup
+      > ./case.build
+
+   .. warning:: The case.build script can be compute intensive and may not be suitable to run on a login node. As an alternative, you can submit this job to an interactive queue.
+      For example, on the NCAR cheyenne platform, you would use **qcmd -- ./case.build** to do this.
+
+3. In your case directory, set the job to run 12 model months, set the wallclock time, and submit the job.
+
+   ::
+
+      > ./xmlchange STOP_OPTION=nmonths
+      > ./xmlchange STOP_N=12
+      > ./xmlchange JOB_WALLCLOCK_TIME=06:00 --subgroup case.run
+      > ./case.submit
+
+4. Make sure the run succeeded.
+
+   You should see the following line or similar at the end of the **cpl.log** file in your run directory or your short term archiving directory, set by ``$DOUT_S_ROOT``.
+
+   ::
+
+      (seq_mct_drv): =============== SUCCESSFUL TERMINATION OF CPL7-cesm ===============
+
+5. In the same case directory, set the case to resubmit itself 10 times so it will run a total of 11 years (including the initial year), and resubmit the case. (Note that a resubmit will automatically change the run to be a continuation run.)
+
+   ::
+
+      > ./xmlchange RESUBMIT=10
+      > ./case.submit
+
+   By default resubmitted runs are not submitted until the previous run is completed. For 10 1-year runs as configured in this
+   example, CIME will first submit a job for one year, then when that job completes it will submit a job for another year. There will be
+   only one job in the queue at a time. 
+   To change this behavior, and submit all jobs at once (with batch dependencies such that only one job is run at a time), use the command:
+
+   ::
+
+      > ./case.submit --resubmit-immediate
+
+Setting up a branch or hybrid run
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+A branch or hybrid run uses initialization data from a previous run. Here is an example in which a valid load-balanced scenario is assumed.
+
+1. The first step in setting up a branch or hybrid run is to create a new case. A CESM compset and resolution are assumed below.
+
+   ::
+
+      > cd $CIMEROOT/scripts
+      > create_newcase --case ~/NEW_CASE --compset B1850 --res f09_g17
+      > cd ~/NEW_CASE
+
+
+2. For a branch run, use the following `xmlchange <../Tools_user/xmlchange.html>`_ commands to make **NEW_CASE** be a branch off of **EXAMPLE_CASE** at year 0001-02-01.
+
+   ::
+
+      > ./xmlchange RUN_TYPE=branch
+      > ./xmlchange RUN_REFCASE=EXAMPLE_CASE
+      > ./xmlchange RUN_REFDATE=0001-02-01
+
+3. For a hybrid run, use the following `xmlchange <../Tools_user/xmlchange.html>`_ command to start **NEW_CASE** from **EXAMPLE_CASE** at year 0001-02-01.
+
+   ::
+
+      > ./xmlchange RUN_TYPE=hybrid
+      > ./xmlchange RUN_REFCASE=EXAMPLE_CASE
+      > ./xmlchange RUN_REFDATE=0001-02-01
+
+   For a branch run, your **env_run.xml** file for **NEW_CASE** should be identical to the file for **EXAMPLE_CASE** except for the ``$RUN_TYPE`` setting.
+
+   Also, modifications introduced into **user_nl_** files in **EXAMPLE_CASE** should be reintroduced in **NEW_CASE**.
+
+4. Next, set up and build your case executable.
+
+   ::
+
+      > ./case.setup
+      > ./case.build
+
+5. Pre-stage the necessary restart/initial data in ``$RUNDIR``. Assume for this example that it was created in the **/rest/0001-02-01-00000** directory shown here:
+
+   ::
+
+      > cd $RUNDIR
+      > cp /user/archive/EXAMPLE_CASE/rest/0001-02-01-00000/* .
+
+   Go back to the case directory, set the job to run 12 model months, and submit the job.
+
+   ::
+
+      > cd ~/NEW_CASE
+      > ./xmlchange STOP_OPTION=nmonths
+      > ./xmlchange STOP_N=12
+      > ./xmlchange JOB_WALLCLOCK_TIME=06:00
+      > ./case.submit
+
+6. Make sure the run succeeded (see above directions) and then change
+   the run to a continuation run. Set it to resubmit itself 10 times
+   so it will run a total of 11 years (including the initial year),
+   then resubmit the case.
+
+   ::
+
+      > ./xmlchange CONTINUE_RUN=TRUE
+      > ./xmlchange RESUBMIT=10
+      > ./case.submit
+
+.. _changing-data-model-namelists:
+
+Customizing data model input variable and stream files
+------------------------------------------------------
+
+Each data model can be runtime-configured with its own namelist.
+
+Data Atmosphere (DATM)
+~~~~~~~~~~~~~~~~~~~~~~
+
+DATM is discussed in detail in :ref:`data atmosphere overview ` (**link currently broken**).
+DATM can be user-customized by changing either its *namelist input files* or its *stream files*.
+The namelist file for DATM is **datm_in** (or **datm_in_NNN** for multiple instances).
+
+- To modify **datm_in** or **datm_in_NNN**, add the appropriate keyword/value pair(s) for the namelist changes that you want at the end of the **user_nl_datm** file or the **user_nl_datm_NNN** file in ``$CASEROOT``.
+
+- To modify the contents of a DATM stream file, first run `preview_namelists <../Tools_user/preview_namelists.html>`_ to list the *streams.txt* files in the **CaseDocs/** directory. Then, in the same directory:
+
+  1. Make a *copy* of the file with the string *"user_"* prepended.
+     ``> cp datm.streams.txt.[extension] user_datm.streams.txt.[extension]``
+  2. **Change the permissions of the file to be writeable.** (chmod 644)
+     ``chmod 644 user_datm.streams.txt.[extension]``
+  3. Edit the **user_datm.streams.txt.*** file. 
+
+**Example**
+
+If the stream text file is **datm.streams.txt.CORE2_NYF.GISS**, the modified copy should be **user_datm.streams.txt.CORE2_NYF.GISS**.
+After calling `preview_namelists <../Tools_user/preview_namelists.html>`_ again, your edits should appear in **CaseDocs/datm.streams.txt.CORE2_NYF.GISS**.
+
+Data Ocean (DOCN)
+~~~~~~~~~~~~~~~~~~~~~~
+
+DOCN is discussed in detail in :ref:`data ocean overview ` (**link currently broken**).
+DOCN can be user-customized by changing either its namelist input or its stream files.
+The namelist file for DOCN is **docn_in** (or **docn_in_NNN** for multiple instances).
+
+- To modify **docn_in** or **docn_in_NNN**, add the appropriate keyword/value pair(s) for the namelist changes that you want at the end of the file in ``$CASEROOT``.
+
+- To modify the contents of a DOCN stream file, first run `preview_namelists <../Tools_user/preview_namelists.html>`_ to list the *streams.txt* files in the **CaseDocs/** directory. Then, in the same directory:
+
+  1. Make a *copy* of the file with the string *"user_"* prepended.
+     ``> cp docn.streams.txt.[extension] user_docn.streams.txt.[extension]``
+  2. **Change the permissions of the file to be writeable.** (chmod 644)
+     ``chmod 644 user_docn.streams.txt.[extension]``
+  3. Edit the **user_docn.streams.txt.*** file.
+
+**Example**
+
+As an example, if the stream text file is **docn.streams.txt.prescribed**, the modified copy should be **user_docn.streams.txt.prescribed**.
+After changing this file and calling `preview_namelists <../Tools_user/preview_namelists.html>`_ again, your edits should appear in **CaseDocs/docn.streams.txt.prescribed**.
+
+Data Sea-ice (DICE)
+~~~~~~~~~~~~~~~~~~~~~~
+
+DICE is discussed in detail in :ref:`data sea-ice overview ` (**link currently broken**).
+DICE can be user-customized by changing either its namelist input or its stream files. 
+The namelist file for DICE is ``dice_in`` (or ``dice_in_NNN`` for multiple instances) and its values can be changed by editing the ``$CASEROOT`` file ``user_nl_dice`` (or ``user_nl_dice_NNN`` for multiple instances).
+
+- To modify **dice_in** or **dice_in_NNN**, add the appropriate keyword/value pair(s) for the namelist changes that you want at the end of the file in ``$CASEROOT``.
+
+- To modify the contents of a DICE stream file, first run `preview_namelists <../Tools_user/preview_namelists.html>`_ to list the *streams.txt* files in the **CaseDocs/** directory. Then, in the same directory:
+
+  1. Make a *copy* of the file with the string *"user_"* prepended.
+     ``> cp dice.streams.txt.[extension] user_dice.streams.txt.[extension]``
+  2. **Change the permissions of the file to be writeable.** (chmod 644)
+     ``chmod 644 user_dice.streams.txt.[extension]``
+  3. Edit the **user_dice.streams.txt.*** file.
+
+Data Land (DLND)
+~~~~~~~~~~~~~~~~~~~~~~
+
+DLND is discussed in detail in :ref:`data land overview ` (**link currently broken**).
+DLND can be user-customized by changing either its namelist input or its stream files.
+The namelist file for DLND is ``dlnd_in`` (or ``dlnd_in_NNN`` for multiple instances) and its values can be changed by editing the ``$CASEROOT`` file ``user_nl_dlnd`` (or ``user_nl_dlnd_NNN`` for multiple instances).
+
+- To modify **dlnd_in** or **dlnd_in_NNN**, add the appropriate keyword/value pair(s) for the namelist changes that you want at the end of the file in ``$CASEROOT``.
+
+- To modify the contents of a DLND stream file, first run `preview_namelists <../Tools_user/preview_namelists.html>`_ to list the *streams.txt* files in the **CaseDocs/** directory. Then, in the same directory:
+
+  1. Make a *copy* of the file with the string *"user_"* prepended.
+     ``> cp dlnd.streams.txt.[extension] user_dlnd.streams.txt.[extension]``
+  2. 
**Change the permissions of the file to be writeable.** (chmod 644)
+     ``chmod 644 user_dlnd.streams.txt.[extension]``
+  3. Edit the **user_dlnd.streams.txt.*** file.
+
+Data River (DROF)
+~~~~~~~~~~~~~~~~~~~~~~
+
+DROF is discussed in detail in :ref:`data river overview ` (**link currently broken**).
+DROF can be user-customized by changing either its namelist input or its stream files.
+The namelist file for DROF is ``drof_in`` (or ``drof_in_NNN`` for multiple instances) and its values can be changed by editing the ``$CASEROOT`` file ``user_nl_drof`` (or ``user_nl_drof_NNN`` for multiple instances).
+
+- To modify **drof_in** or **drof_in_NNN**, add the appropriate keyword/value pair(s) for the namelist changes that you want at the end of the file in ``$CASEROOT``.
+
+- To modify the contents of a DROF stream file, first run `preview_namelists <../Tools_user/preview_namelists.html>`_ to list the *streams.txt* files in the **CaseDocs/** directory. Then, in the same directory:
+
+  1. Make a *copy* of the file with the string *"user_"* prepended.
+     ``> cp drof.streams.txt.[extension] user_drof.streams.txt.[extension]``
+  2. **Change the permissions of the file to be writeable.** (chmod 644)
+     ``chmod 644 user_drof.streams.txt.[extension]``
+  3. Edit the **user_drof.streams.txt.*** file.
+
+
+Customizing CESM active component-specific namelist settings
+------------------------------------------------------------
+
+CAM
+~~~
+
+CIME calls **$SRCROOT/components/cam/cime_config/buildnml** to generate CAM's namelist variables.
+
+CAM-specific CIME xml variables are set in **$SRCROOT/components/cam/cime_config/config_component.xml** and are used by CAM's **buildnml** script to generate the namelist.
+
+For complete documentation of namelist settings, see `CAM namelist variables `_.
+
+To modify CAM namelist settings, add the appropriate keyword/value pair at the end of the **$CASEROOT/user_nl_cam** file. (See the documentation for each file at the top of that file.) 
+
+For example, to change the solar constant to 1363.27, modify the **user_nl_cam** file to contain the following line at the end:
+::
+
+   solar_const=1363.27
+
+To see the result, call `preview_namelists <../Tools_user/preview_namelists.html>`_ and verify that the new value appears in **CaseDocs/atm_in**.
+
+CLM
+~~~
+
+CIME calls **$SRCROOT/components/clm/cime_config/buildnml** to generate the CLM namelist variables.
+
+CLM-specific CIME xml variables are set in **$SRCROOT/components/clm/cime_config/config_component.xml** and are used by CLM's **buildnml** script to generate the namelist.
+
+For complete documentation of namelist settings, see `CLM namelist variables `_.
+
+To modify CLM namelist settings, add the appropriate keyword/value pair at the end of the **$CASEROOT/user_nl_clm** file.
+
+To see the result, call `preview_namelists <../Tools_user/preview_namelists.html>`_ and verify that the changes appear correctly in **CaseDocs/lnd_in**.
+
+MOSART
+~~~~~~
+
+CIME calls **$SRCROOT/components/mosart/cime_config/buildnml** to generate the MOSART namelist variables.
+
+To modify MOSART namelist settings, add the appropriate keyword/value pair at the end of the **$CASEROOT/user_nl_rtm** file.
+
+To see the result of your change, call `preview_namelists <../Tools_user/preview_namelists.html>`_ and verify that the changes appear correctly in **CaseDocs/rof_in**.
+
+CICE
+~~~~
+
+CIME calls **$SRCROOT/components/cice/cime_config/buildnml** to generate the CICE namelist variables.
+
+For complete documentation of namelist settings, see `CICE namelist variables `_.
+
+To modify CICE namelist settings, add the appropriate keyword/value pair at the end of the **$CASEROOT/user_nl_cice** file.
+(See the documentation for each file at the top of that file.)
+To see the result of your change, call `preview_namelists <../Tools_user/preview_namelists.html>`_ and verify that the changes appear correctly in **CaseDocs/ice_in**. 
+
+In addition, `case.setup <../Tools_user/case.setup.html>`_ creates CICE's compile time `block decomposition variables `_ in **env_build.xml**.
+
+POP2
+~~~~
+
+CIME calls **$SRCROOT/components/pop2/cime_config/buildnml** to generate the POP2 namelist variables.
+
+For complete documentation of namelist settings, see `POP2 namelist variables `_.
+
+To modify POP2 namelist settings, add the appropriate keyword/value pair at the end of the **$CASEROOT/user_nl_pop2** file.
+(See the documentation for each file at the top of that file.)
+To see the result of your change, call `preview_namelists <../Tools_user/preview_namelists.html>`_ and verify that the changes appear correctly in **CaseDocs/ocn_in**.
+
+CISM
+~~~~
+
+See `CISM namelist variables `_ for a complete description of the CISM runtime namelist variables. This includes variables that appear both in **cism_in** and in **cism.config**.
+
+To modify any of these settings, add the appropriate keyword/value pair at the end of the **user_nl_cism** file. (See the documentation for each file at the top of that file.)
+Note that there is no distinction between variables that will appear in **cism_in** and those that will appear in **cism.config**: simply add a new variable setting in **user_nl_cism**, and it will be added to the appropriate place in **cism_in** or **cism.config**.
+To see the result of your change, call `preview_namelists <../Tools_user/preview_namelists.html>`_ and verify that the changes appear correctly in **CaseDocs/cism_in** and **CaseDocs/cism.config**.
+
+Some CISM runtime settings are set via **env_run.xml**, as documented in `CISM runtime variables `_. 
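Every component section above follows the same recipe: append a keyword/value pair to the end of ``user_nl_<component>``, then run `preview_namelists` and check the generated file in ``CaseDocs/``. The helper below is a hypothetical convenience (not part of CIME) that makes such edits idempotent, replacing an existing assignment instead of appending a duplicate line.

```shell
# Hypothetical helper (not part of CIME): set "key = value" in a user_nl
# file, replacing any existing assignment rather than appending twice.
set_nl() {
  f=$1; k=$2; v=$3
  if grep -q "^[[:space:]]*$k[[:space:]]*=" "$f" 2>/dev/null; then
    # An assignment already exists: rewrite it in place.
    sed -i "s|^[[:space:]]*$k[[:space:]]*=.*|$k = $v|" "$f"
  else
    # First assignment: append it to the end of the file.
    echo "$k = $v" >> "$f"
  fi
}

NL=$(mktemp)                      # stands in for a user_nl_cam file
set_nl "$NL" solar_const 1363.27
set_nl "$NL" solar_const 1361.0   # replaces the old value, no duplicate
cat "$NL"                         # -> solar_const = 1361.0
```

Because the generated namelists take the last value assigned, a duplicate-free ``user_nl`` file keeps the `preview_namelists` output unambiguous.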
diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/cime-config.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/cime-config.rst.txt new file mode 100644 index 00000000000..f2953f0e9e0 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/cime-config.rst.txt @@ -0,0 +1,108 @@
+.. _customizing-cime:
+
+===============================
+CIME user config directory
+===============================
+
+CIME recognizes a user-created custom configuration directory, ``$HOME/.cime``. The contents of this directory may include any of the following files:
+
+* ``config``
+
+  This file must have a format which follows the python config format. See `Python Config Parser Examples `_
+
+  In the [main] block you can set the following variables:
+
+  * ``CIME_MODEL=[e3sm, cesm]``
+
+  * ``PROJECT=``
+
+    Used to specify a project id for compute accounting and directory permissions when on a batch system.
+
+  * ``CHARGE_ACCOUNT=``
+
+    Used to override only the accounting aspect of ``PROJECT``.
+
+  * ``MAIL_USER=``
+
+    Used to request a non-default email for batch summary output.
+
+  * ``MAIL_TYPE=[never,all,begin,fail,end]``
+
+    Any or all of the above valid values can be listed to select the batch events for which email is sent.
+
+  * **create_test** input arguments
+
+    Any argument to the **create_test** script can have its default changed by listing it here with the new default.
+
+  * The following is an example ``config`` file:
+
+    ::
+
+      [main]
+      CIME_MODEL=cesm
+      SRCROOT=$CIMEROOT/..
+      MAIL_TYPE=end
+      [create_test]
+      MAIL_TYPE=fail
+
+* ``config_machines.xml``
+
+  This file must have the same format as **$CIMEROOT/config/$model/machines/config_machines.xml** with the appropriate definitions for your machine.
+
+  If you have a customized version of this file in the directory ``$HOME/.cime``, its contents will be **appended** to the file in ``$CIMEROOT/config/$model/machines/config_machines.xml``. 
+
+  For an example of a **config_machines.xml** file for a linux cluster, look at **$CIMEROOT/config/xml_schemas/config_machines_template.xml**.
+
+* ``cmake_macros``
+
+  This subdirectory contains a hierarchy of cmake macro files which
+  are used to generate the flags to be used in the compilation of a
+  case. The cmake macro files are examined in the following order, with later files taking precedence over earlier ones.
+
+  * universal.cmake
+  * *COMPILER*.cmake
+  * *OS*.cmake
+  * *MACHINE*.cmake
+  * *COMPILER*_*OS*.cmake
+  * *COMPILER*_*MACHINE*.cmake
+
+* ``config_compilers.xml`` **DEPRECATED use cmake_macros**
+
+  This file permits you to customize compiler settings for your machine and is appended to the file **$CIMEROOT/config/$model/machines/config_compilers.xml**.
+
+  The following is an example of what would be needed for customizing IBM compiler flags on a BlueGeneQ machine.
+
+  ::
+
+
+
+
+      -g -qfullpath -qmaxmem=-1 -qspillsize=2500 -qextname=flush
+      -O3 -qstrict -qinline=auto
+      -qsmp=omp
+      -qsmp=omp:noopt
+      -DLINUX
+      --build=powerpc-bgp-linux --host=powerpc64-suse-linux
+      -Wl,--relax -Wl,--allow-multiple-definition
+
+
+
+* ``config_batch.xml``
+
+  This file permits you to customize batch settings for your machine and is appended to the file **$CIMEROOT/config/$model/machines/config_batch.xml**.
+
+  The following is an example of what would be needed to add batch settings for pbs on the machine brutus.
+
+  ::
+
+
+
+
+
+      -S {{ shell }}
+
+
+      batch
+
+
 diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/cime-customize.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/cime-customize.rst.txt new file mode 100644 index 00000000000..6431f5c388a --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/cime-customize.rst.txt @@ -0,0 +1,76 @@
+.. 
_customizing-cime:
+
+===========================
+CIME config and hooks
+===========================
+
+CIME provides the ability to define model-specific config and hooks.
+
+The config values alter CIME's runtime behavior, and the hooks are triggered when their corresponding events occur.
+
+-----------------------------------
+How does CIME load customizations?
+-----------------------------------
+
+CIME will search ``cime_config/customize`` and load any python found under this directory or its children.
+
+Any variables, functions or classes loaded are available from the ``CIME.customize`` module.
+
+---------------------------
+CIME config
+---------------------------
+
+Available config variables and their descriptions.
+
+================================= ======================= ===== ========================================================================================================================
+Variable                          Default                 Type  Description
+================================= ======================= ===== ========================================================================================================================
+additional_archive_components     ('drv', 'dart')         tuple Additional components to archive.
+allow_unsupported                 True                    bool  If set to `True` then unsupported compsets and resolutions are allowed.
+baseline_store_teststatus         True                    bool  If set to `True` and GENERATE_BASELINE is set then a teststatus.log is created in the case's baseline.
+build_cime_component_lib          True                    bool  If set to `True` then `Filepath`, `CIME_cppdefs` and `CCSM_cppdefs` directories are copied from CASEBUILD directory to BUILDROOT in order to build CIME's internal components.
+build_model_use_cmake             False                   bool  If set to `True` the model is built using CMake, otherwise Make is used.
+calculate_mode_build_cost         False                   bool  If set to `True` then the TestScheduler will set the number of processors for building the model to min(16, (($GMAKE_J * 2) / 3) + 1) otherwise it's set to 4.
+case_setup_generate_namelist      False                   bool  If set to `True` and case is a test then namelists are created during `case.setup`.
+check_invalid_args                True                    bool  If set to `True` then script arguments are checked for being valid.
+check_machine_name_from_test_name True                    bool  If set to `True` then the TestScheduler will use testlists to parse for a list of tests.
+common_sharedlibroot              True                    bool  If set to `True` then SHAREDLIBROOT is set for the case and SystemTests will only build the shared libs once.
+copy_cesm_tools                   True                    bool  If set to `True` then CESM specific tools are copied into the case directory.
+copy_cism_source_mods             True                    bool  If set to `True` then `$CASEROOT/SourceMods/src.cism/source_cism` is created and a README is written to the directory.
+copy_e3sm_tools                   False                   bool  If set to `True` then E3SM specific tools are copied into the case directory.
+create_bless_log                  False                   bool  If set to `True` and comparing test to baselines the most recent bless is added to comments.
+create_test_flag_mode             cesm                    str   Sets the flag mode for the `create_test` script. When set to `cesm`, the `-c` flag will compare baselines against a given directory.
+default_short_term_archiving      True                    bool  If set to `True` and the case is not a test then DOUT_S is set to True and TIMER_LEVEL is set to 4.
+driver_choices                    ('mct', 'nuopc')        tuple Sets the available driver choices for the model.
+driver_default                    nuopc                   str   Sets the default driver for the model.
+enable_smp                        True                    bool  If set to `True` then `SMP=` is added to model compile command.
+make_case_run_batch_script        False                   bool  If set to `True` and case is not a test then `case.run.sh` is created in case directory from `$MACHDIR/template.case.run.sh`.
+mct_path                          {srcroot}/libraries/mct str   Sets the path to the mct library.
+serialize_sharedlib_builds        True                    bool  If set to `True` then the TestScheduler will use `proc_pool + 1` processors to build shared libraries otherwise a single processor is used.
+set_comp_root_dir_cpl             True                    bool  If set to `True` then COMP_ROOT_DIR_CPL is set for the case.
+share_exes                        False                   bool  If set to `True` then the TestScheduler will share exes between tests.
+shared_clm_component              True                    bool  If set to `True` then the `clm` land component is built as a shared lib.
+sort_tests                        False                   bool  If set to `True` then the TestScheduler will sort tests by runtime.
+test_custom_project_machine       melvin                  str   Sets the machine name to use when testing a machine with no PROJECT.
+test_mode                         cesm                    str   Sets the testing mode; this changes various configuration for CIME's unit and system tests.
+ufs_alternative_config            False                   bool  If set to `True` and UFS_DRIVER is set to `nems` then model config dir is set to `$CIMEROOT/../src/model/NEMS/cime/cime_config`.
+use_kokkos                        False                   bool  If set to `True` and CAM_TARGET is `preqx_kokkos`, `theta-l` or `theta-l_kokkos` then kokkos is built with the shared libs.
+use_nems_comp_root_dir            False                   bool  If set to `True` then COMP_ROOT_DIR_CPL is set using UFS_DRIVER if defined.
+use_testreporter_template         True                    bool  If set to `True` then the TestScheduler will create `testreporter` in $CIME_OUTPUT_ROOT.
+verbose_run_phase                 False                   bool  If set to `True` then after a SystemTests successful run phase the elapsed time is recorded to BASELINE_ROOT; on a failure the test is checked against the previous run and potential breaking merges are listed in the testlog.
+xml_component_key                 COMP_ROOT_DIR_{}        str   The string template used as the key to query the XML system to find a component's root directory e.g. the template `COMP_ROOT_DIR_{}` and component `LND` becomes `COMP_ROOT_DIR_LND`.
+================================= ======================= ===== ========================================================================================================================
+
+---------------------------
+CIME hooks
+---------------------------
+
+Available hooks and descriptions.
+
+======================================= =================================
+Function                                Description
+======================================= =================================
+``save_build_provenance(case, lid)``    Called after the model is built.
+``save_prerun_provenance(case, lid)``   Called before the model is run.
+``save_postrun_provenance(case, lid)``  Called after the model is run.
+======================================= =================================
 diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/cime-dir.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/cime-dir.rst.txt new file mode 100644 index 00000000000..15643de5f05 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/cime-dir.rst.txt @@ -0,0 +1,49 @@
+.. _cime-dir:
+
+******************
+Directory content
+******************
+
+If you use CIME as part of a climate model or standalone, the content of the **cime** directory is the same.
+
+If you are using it as part of a climate model, **cime** is usually one of the first subdirectories under the main directory.
+
+.. table:: **CIME directory in a climate model**
+
+   ====================== ===================================
+   Directory or Filename  Description
+   ====================== ===================================
+   README, etc.           typical top-level directory content
+   components/            source code for active models
+   cime/                  All of CIME code
+   ====================== ===================================
+
+CIME's content is split into several subdirectories. 
Users should start in the **scripts/** subdirectory.
+
+.. table:: **CIME directory content**
+
+   ========================== ==================================================================
+   Directory or Filename      Description
+   ========================== ==================================================================
+   CMakeLists.txt             For building with CMake
+   ChangeLog                  Developer-maintained record of changes to CIME
+   ChangeLog_template         Template for an entry in ChangeLog
+   LICENSE.TXT                The CIME license
+   README                     Brief intro to CIME
+   README.md                  README in markdown language
+   README.unit_testing        Instructions for running unit tests with CIME
+   **config/**                **Shared and model-specific configuration files**
+   config/cesm/               CESM-specific configuration options
+   config/e3sm/               E3SM-specific configuration options
+   **scripts/**               **The CIME user interface**
+   scripts/lib/               Infrastructure source code for CIME scripts and functions
+   scripts/Tools/             Auxiliary tools; scripts and functions
+   **src/**                   **Model source code provided by CIME**
+   src/components/            CIME-provided components including data and stub models
+   src/drivers/               CIME-provided main driver for a climate model
+   src/externals/             Software provided with CIME for building a climate model
+   src/share/                 Model source code provided by CIME and used by multiple components
+   **tests/**                 **Tests**
+   **tools/**                 **Standalone climate modeling tools**
+   utils/                     Some Perl source code needed by some prognostic components
+   ========================== ==================================================================
 diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/cime-internals.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/cime-internals.rst.txt new file mode 100644 index 00000000000..3f31dd7cac6 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/cime-internals.rst.txt @@ -0,0 +1,50 @@
+.. 
_cime-internals: + +======================== +Main Configuration File +======================== + +The file **$CIMEROOT/config/[cesm,e3sm]/config_files.xml** contains all model-specific information that CIME uses to determine compsets, compset component settings, model grids, machines, batch queue settings, and compiler settings. It contains the following xml nodes, which are discussed below or in subsequent sections of this guide. +:: + + compset definitions: + + + component specific compset settings: + + + + + + + + + + + + pe-settings: + + + grid definitions: + + + machine specific definitions: + + + + + + testing: + + + + + + archiving: + + + CIME component namelist definitions: + + + user-mods directories: + diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/cloning-a-case.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/cloning-a-case.rst.txt new file mode 100644 index 00000000000..948628464e4 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/cloning-a-case.rst.txt @@ -0,0 +1,66 @@ +.. _cloning-a-case: + +************************** +Cloning a Case +************************** + +If you have access to a run that you want to clone, the +`create_clone <../Tools_user/create_clone.html>`_ command will create a new case and run `case.setup <../Tools_user/case.setup.html>`_ +while preserving local modifications to the case. + +Here is a simple example: +:: + + > cd $CIMEROOT/scripts + > create_clone --case $CASEROOT --clone $CLONEROOT + > cd $CASEROOT + > case.build + > case.submit + +The `create_clone <../Tools_user/create_clone.html>`_ script preserves any local namelist modifications +made in the **user_nl_xxxx** files as well as any source code +modifications in the **SourceMods/** directory tree. Otherwise, your **$CASEROOT** directory +will appear as if `create_newcase <../Tools_user/create_newcase.html>`_ had just been run. 
+ +**Important**: Do not change anything in the **env_case.xml** file. + +See the **help** text for more usage information. + +:: + + > create_clone --help + +`create_clone <../Tools_user/create_clone.html>`_ has several useful optional arguments. Use the ``--keepexe`` argument to point to +the executable of the original case you are cloning from: + +:: + + > create_clone --case $CASEROOT --clone $CLONEROOT --keepexe + > cd $CASEROOT + > case.submit + +If the ``--keepexe`` optional argument is used, then no SourceMods +will be permitted in the cloned directory. A link will be made when +the cloned case is created pointing the cloned SourceMods/ directory +to the original case SourceMods directory. + +.. warning:: No changes should be made to ``env_build.xml`` or ``env_mach_pes.xml`` in the cloned directory. + +`create_clone <../Tools_user/create_clone.html>`_ also permits you to apply the ``shell_commands`` + and ``user_nl_xxx`` files in a user_mods directory by calling: + +:: + + > create_clone --case $CASEROOT --clone $CLONEROOT --user-mods-dir USER_MODS_DIR [--keepexe] + +Note that an optional ``--keepexe`` flag can also be used in this case. + +.. warning:: If there is a ``shell_commands`` file, it should not have any changes to xml variables in either ``env_build.xml`` or ``env_mach_pes.xml``. + +Another approach to duplicating a case is to use the information in +the case's **README.case** and **CaseStatus** files to create a new +case and duplicate the relevant `xmlchange <../Tools_user/xmlchange.html>`_ commands that were +issued in the original case. This alternative will *not* preserve any +local modifications that were made to the original case, such as +source-code or build-script revisions; you will need to import those +changes manually. 
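The **CaseStatus** approach above can be partially automated by collecting the recorded ``xmlchange`` commands for replay. The sketch below is illustrative only: it assumes each recorded command appears on its own line containing the string ``xmlchange``, which may not match the exact CaseStatus layout of your CIME version.

```python
# Sketch: collect xmlchange commands from CaseStatus-style text so they can be
# replayed in a new case. The line format here is an assumption, not CIME's
# documented CaseStatus schema.
def extract_xmlchange_commands(casestatus_text):
    commands = []
    for line in casestatus_text.splitlines():
        if "xmlchange" in line:
            # Keep everything from "xmlchange" onward, dropping any timestamp prefix.
            commands.append(line[line.index("xmlchange"):].strip())
    return commands

sample = """2019-01-01: create_newcase success
2019-01-02: ./xmlchange STOP_N=6
2019-01-03: case.setup success
"""
print(extract_xmlchange_commands(sample))  # ['xmlchange STOP_N=6']
```

Each recovered command can then be run by hand in the new ``$CASEROOT``; source-code changes still need to be copied manually.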
diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/compsets.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/compsets.rst.txt new file mode 100644 index 00000000000..f57a3d9b65c --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/compsets.rst.txt @@ -0,0 +1,194 @@ +.. _compsets: + +=============== +Component sets +=============== + +In CIME, multiple components can define compsets that are targeted to their model development needs. + +Each component supports a set of compset longnames that are used in testing and supported in out of the box configurations. + +To determine whether the compset name passed to `create_newcase <../Tools_user/create_newcase.html>`_ is supported, CIME looks in the **config_files.xml** file and parses the +xml element ``COMPSETS_SPEC_FILE`` in order to determine which component is defining the compset. + +In the case of CESM, this xml element has the contents shown here, where ``$SRCROOT`` is the root of your CESM sandbox and contains ``$CIMEROOT`` as a subdirectory: + +:: + + + char + unset + + $SRCROOT/cime_config/config_compsets.xml + $CIMEROOT/src/drivers/mct/cime_config/config_compsets.xml + $SRCROOT/components/cam/cime_config/config_compsets.xml + $SRCROOT/components/cism/cime_config/config_compsets.xml + $SRCROOT/components/clm/cime_config/config_compsets.xml + $SRCROOT/components/cice/cime_config/config_compsets.xml + $SRCROOT/components/pop/cime_config/config_compsets.xml + + case_last + env_case.xml + file containing specification of all compsets for primary component (for documentation only - DO NOT EDIT) + $CIMEROOT/config/xml_schemas/config_compsets.xsd + + + +Every file listed in COMPSETS_SPEC_FILE will be searched for the compset specified in the call to create_newcase. 
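This search can be sketched in a few lines of Python. The mapping below is toy data standing in for the per-component ``config_compsets.xml`` files (the compset aliases shown are only examples); it is not CIME's real parser.

```python
# Toy sketch of the COMPSETS_SPEC_FILE search: each component's
# config_compsets.xml lists the compsets it defines, and the component whose
# file contains the requested compset becomes the "primary component".
compsets_spec = {
    "allactive": ["B1850"],          # all-prognostic compsets
    "cam":       ["F2000climo"],     # prognostic atmosphere compsets
    "clm":       ["I1850Clm50Sp"],   # prognostic land compsets
}

def find_primary_component(compset_name):
    for component, compsets in compsets_spec.items():
        if compset_name in compsets:
            return component
    raise ValueError("compset not defined by any component: " + compset_name)

print(find_primary_component("F2000climo"))  # cam
```

The real files are XML and also map each alias to its longname, but the lookup logic is essentially this linear search over the listed files.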
+ +CIME will note which component's config_compsets.xml had the matching compset name and that component will be treated as +the **primary component**. As an example, the primary component for a compset that has a prognostic atmosphere, +land and cice (in prescribed mode) and a data ocean is the atmosphere component (for cesm this is CAM) because the compset +is defined, using the above example, in ``$SRCROOT/components/cam/cime_config/config_compsets.xml``. +In a compset where all components are prognostic, the primary component will be **allactive**. + +.. _defining-compsets: + +Compset longname +------------------- + +Each config_compsets.xml file has a list of allowed component sets in the form of a longname and an alias. + +A compset longname has this form:: + + TIME_ATM[%phys]_LND[%phys]_ICE[%phys]_OCN[%phys]_ROF[%phys]_GLC[%phys]_WAV[%phys]_ESP[_BGC%phys] + +Supported values for each element of the longname:: + + TIME = model time period (e.g. 1850, 2000, 20TR, SSP585...) + + CIME supports the following values for ATM, LND, ICE, OCN, ROF, GLC, WAV and ESP. + ATM = [DATM, SATM, XATM] + LND = [DLND, SLND, XLND] + ICE = [DICE, SICE, XICE] + OCN = [DOCN, SOCN, XOCN] + ROF = [DROF, SROF, XROF] + GLC = [SGLC, XGLC] + WAV = [SWAV, XWAV] + ESP = [SESP] + +A CIME-driven model may have other options available. Use `query_config <../Tools_user/query_config.html>`_ to determine the available options. + +The OPTIONAL %phys attributes specify sub-modes of the given system. +For example, DOCN%DOM is the DOCN data ocean (rather than slab-ocean) mode. +**All** the possible %phys choices for each component are listed by calling `query_config --compsets <../Tools_user/query_config.html>`_. +**All** data models have a %phys option that corresponds to the data model mode. + +.. 
_defining-component-specific-compset-settings: + +Component specific settings in a compset +----------------------------------------- + +Every model component also contains a **config_component.xml** file that has two functions: + +1. Specifying the component-specific definitions of what can appear after the ``%`` in the compset longname (for example, ``DOM`` in ``DOCN%DOM``). + +2. Specifying the compset-specific ``$CASEROOT`` xml variables. + +CIME first parses the following nodes to identify appropriate **config_component.xml** files for the driver. There are two such files; one is model-independent and the other is model-specific. +:: + + + ... + $CIMEROOT/driver_cpl/cime_config/config_component.xml + .. + + + + $CIMEROOT/driver_cpl/cime_config/config_component_$MODEL.xml + + +CIME then parses each of the nodes listed below, using the value of the *component* attribute to determine which xml files to use for the requested compset longname. +:: + + + + + + + + + + +As an example, the possible atmosphere components for CESM have the following associated xml files. +:: + + + char + unset + + $SRCROOT/components/cam/cime_config/config_component.xml + $CIMEROOT/components/data_comps/datm/cime_config/config_component.xml + $CIMEROOT/components/stub_comps/satm/cime_config/config_component.xml + $CIMEROOT/components/xcpl_comps/xatm/cime_config/config_component.xml + + case_last + env_case.xml + file containing specification of component specific definitions and values (for documentation only - DO NOT EDIT) + $CIMEROOT/cime_config/xml_schemas/entry_id.xsd + + +If the compset's atm component attribute is ``datm``, the file ``$CIMEROOT/components/data_comps/datm/cime_config/config_component.xml`` specifies all possible component settings for ``DATM``. + +The schema for every **config_component.xml** file has a ```` node that specifies all possible values that can follow the ``%`` character in the compset name. 
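The compset longname format described earlier can be split mechanically into its time period and per-component entries. The sketch below is illustrative only; CIME's own parser also validates each field and handles the optional BGC and ESP pieces.

```python
# Sketch: split a compset longname of the form
# TIME_ATM[%phys]_LND[%phys]_ICE[%phys]_OCN[%phys]_... into its parts,
# separating the optional %phys modifier for each component.
def parse_compset_longname(longname):
    fields = longname.split("_")
    time_period = fields[0]
    components = {}
    for field in fields[1:]:
        name, _, phys = field.partition("%")
        components[name] = phys or None   # None when no %phys modifier
    return time_period, components

time_period, comps = parse_compset_longname(
    "2000_DATM%NYF_SLND_DICE%SSMI_DOCN%DOM_DROF%NYF_SGLC_SWAV")
print(time_period)    # 2000
print(comps["DOCN"])  # DOM
```

The example longname is the one produced by the ``--compset A`` example later in this guide, so you can check the output against that case.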
+ +To list the possible values, use the `query_config --component datm <../Tools_user/query_config.html>`_ command. + +.. _creating-new-compsets: + +Creating New Compsets +----------------------- + +A description of how CIME interprets a compset name is given in the section :ref:`defining-compsets`. + +To create a new compset, you will at a minimum have to: + +1. edit the appropriate ``config_components.xml`` file(s) to add your new requirements +2. edit the associated ``namelist_definitions_xxx.xml`` file(s) in the corresponding ``cime_config`` directories. + (e.g. if a change is made to the ``config_components.xml`` for ``DOCN``, then the ``namelist_definitions_docn.xml`` file will also need to be modified). + +It is important to point out that you will need expertise in the target component(s) you are trying to modify in order to add new compset functionality for that particular component. +We provide a few examples below that outline this process for a few simple cases. + + +Say you want to add a new mode, ``FOO``, to the data ocean model, ``DOCN``. +This implies that, when parsing the compset longname, CIME must be able to recognize the string ``_DOCN%FOO_``. +To enable this, you will need to do the following: + +1. edit ``$CIMEROOT/src/components/data_comps/docn/cime_config/config_component.xml`` (see the ``FOO`` additions below). + + * add an entry to the ```` block as shown below :: + + + DOCN + ... + new mode + .... + + + * add an entry to the ```` block as shown below:: + + + .... + + .... + prescribed + ... + + + * modify any of the other xml entries that need a new dependence on ``FOO`` + +2. edit ``$CIMEROOT/src/components/data_comps/docn/cime_config/namelist_definition_docn.xml`` (see the ``FOO`` additions below). + + * add an entry to the ``datamode`` block as shown below. :: + + + .... + ...FOO + ... + + + * add additional changes to ``namelist_definition_docn.xml`` for the new mode + + +.. 
todo:: Add additional examples for creating a case diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/create-a-case.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/create-a-case.rst.txt new file mode 100644 index 00000000000..c9257da19a6 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/create-a-case.rst.txt @@ -0,0 +1,223 @@ +.. _creating-a-case: + +********************************* +Creating a Case +********************************* + +This and the following sections provide more detail about the basic commands of the CIME Case Control System: **create_newcase**, +**case.setup**, **case.build** and **case.submit**. On a supported system, you can configure, build and run many complex +climate model configurations with only these four commands. + +To see if your machine is supported, try:: + + > query_config --machines + +If you are not on an out-of-the-box CIME-supported platform, you will need to :ref:`port ` CIME to your system before proceeding. + +=================================== +Calling **create_newcase** +=================================== + +The first step in creating a CIME-based experiment is to use `create_newcase <../Tools_user/create_newcase.html>`_. + +See the options for `create_newcase <../Tools_user/create_newcase.html>`_ in the **help** text:: + + > create_newcase --help + +The only required arguments to `create_newcase <../Tools_user/create_newcase.html>`_ are:: + + > create_newcase --case CASENAME --compset COMPSET --res GRID + +Creating a CIME experiment or *case* requires, at a minimum, specifying a compset, a model grid and a case directory. +CIME supports out-of-the-box *component sets*, *model grids* and *hardware platforms* (machines). + +.. warning:: + The ``--case`` argument must be a string and may not contain any of the following special characters + :: + > + * ? 
< > { } [ ] ~ ` @ : + +The ``--case`` argument is used to define the name of your case, a very important piece of +metadata that will be used in filenames, internal metadata and directory paths. The +``CASEROOT`` is a directory create_newcase will create with the same name as the +``CASENAME``. If ``CASENAME`` is simply a name (not a path), ``CASEROOT`` is created in +the directory where you execute create_newcase. If ``CASENAME`` is a relative or absolute +path, ``CASEROOT`` is created there, and the name of the case will be the last component +of the path. + +====================================== +Results of calling **create_newcase** +====================================== + +Suppose **create_newcase** was called as follows. +Here, $CIMEROOT is the full pathname of the root directory of the CIME distribution:: + + > cd $CIMEROOT/scripts + > create_newcase --case ~/cime/example1 --compset A --res f09_g16_rx1 + +In the example, the command creates a ``$CASEROOT`` directory: ``~/cime/example1``. +If that directory already exists, a warning is printed and the command aborts. + +In the argument to ``--case``, the case name is taken from the string after the last slash +--- so here the case name is ``example1``. + +The output from create_newcase includes information such as: + +- The compset longname is ``2000_DATM%NYF_SLND_DICE%SSMI_DOCN%DOM_DROF%NYF_SGLC_SWAV`` +- The model resolution is ``a%0.9x1.25_l%0.9x1.25_oi%gx1v6_r%r05_m%gx1v6_g%null_w%null`` + +`create_newcase <../Tools_user/create_newcase.html>`_ installs files in ``$CASEROOT`` that will build and run the model and optionally archive the case on the target platform. + +Running `create_newcase <../Tools_user/create_newcase.html>`_ creates the following scripts, files and directories in ``$CASEROOT``: + +**User Scripts** + +- `case.build <../Tools_user/case.build.html>`_ + Script to build component and utility libraries and the model executable. 
+ +- `case.setup <../Tools_user/case.setup.html>`_ + Script used to set up the case (create the case.run script, Macros file and user_nl_xxx files). + +- `case.st_archive <../Tools_user/case.st_archive.html>`_ + Script to perform short term archiving to disk for your case output. Note that this script is run automatically by the normal CIME workflow. + +- `case.submit <../Tools_user/case.submit.html>`_ + Script to submit the case to run using the machine's batch queueing system. + +- `case.cmpgen_namelist <../Tools_user/case.submit.html>`_ + Script to perform namelist baseline operations (compare, generate, or both). + +- `xmlchange <../Tools_user/xmlchange.html>`_ + Script to modify values in the xml files. + +- `xmlquery <../Tools_user/xmlquery.html>`_ + Script to query values in the xml files. + +- `preview_namelists <../Tools_user/preview_namelists.html>`_ + Script for users to see their component namelists in ``$CASEROOT/CaseDocs`` before running the model. + +- `preview_run <../Tools_user/preview_run.html>`_ + Script for users to see the batch submit and mpirun commands. + +- `check_input_data <../Tools_user/check_input_data.html>`_ + Script for checking for various input data sets and moving them into place. + +- `check_case <../Tools_user/check_case.html>`_ + Script to verify the case is set up correctly. + +- `pelayout <../Tools_user/pelayout.html>`_ + Script to query and modify the NTASKS, ROOTPE, and NTHRDS for each component model. + This is a convenience script that can be used in place of `xmlchange <../Tools_user/xmlchange.html>`_ and `xmlquery <../Tools_user/xmlquery.html>`_. + +**XML Files** + +- env_archive.xml + Defines patterns of files to be sent to the short-term archive. + You can edit this file at any time. You **CANNOT** use `xmlchange <../Tools_user/xmlchange.html>`_ to modify variables in this file. + +- env_mach_specific.xml + Sets a number of machine-specific environment variables for building and/or running. 
+ You **CANNOT** use `xmlchange <../Tools_user/xmlchange.html>`_ to modify variables in this file. + +- env_build.xml + Sets model build settings. This includes component resolutions and component compile-time configuration options. + You must run the case.build command after changing this file. + +- env_run.xml + Sets runtime settings such as length of run, frequency of restarts, output of coupler diagnostics, and short-term and long-term archiving. + This file can be edited at any time before a job starts. + +- env_mach_pes.xml + Sets component machine-specific processor layout (see changing pe layout ). + The settings in this file are critical to a well-load-balanced simulation (see :ref:`load balancing `). + +- env_batch.xml + Sets batch system settings such as wallclock time and queue name. + +**User Source Mods Directory** + +- SourceMods + Top-level directory containing subdirectories for each compset component where you can place modified source code for that component. + You may also place modified buildnml and buildlib scripts here. + +**Provenance** + +- README.case + File detailing `create_newcase <../Tools_user/create_newcase.html>`_ usage. + This is a good place to keep track of runtime problems and changes. + +- CaseStatus + File containing a list of operations done in the current case. + + +**Non-modifiable work directories** + +- Buildconf/ + Work directory containing scripts to generate component namelists and component and utility libraries (PIO or MCT, for example). You should never have to edit the contents of this directory. + +- LockedFiles/ + Work directory that holds copies of files that should not be changed. Certain xml files are *locked* after their variables have been used and should no longer be changed (see below). + +- Tools/ + Work directory containing support utility scripts. You should never need to edit the contents of this directory. 
+ +=================================== +Locked files in your case directory +=================================== + +The ``$CASEROOT`` xml files are organized so that variables can be +locked at certain points after they have been resolved (used) in other +parts of the scripts system. + +CIME does this by *locking* a file in ``$CASEROOT/LockedFiles`` and +not permitting you to modify that file unless, depending on the file, +you call `case.setup --clean <../Tools_user/case.setup.html>`_ or +`case.build --clean <../Tools_user/case.build.html>`_. + +CIME locks your ``$CASEROOT`` files according to the following rules: + +- Locks variables in **env_case.xml** after `create_newcase <../Tools_user/create_newcase.html>`_. + The **env_case.xml** file can never be unlocked. + +- Locks variables in **env_mach_pes.xml** after `case.setup <../Tools_user/case.setup.html>`_. + To unlock **env_mach_pes.xml**, run `case.setup --clean <../Tools_user/case.setup.html>`_. + +- Locks variables in **env_build.xml** after completion of `case.build <../Tools_user/case.build.html>`_. + To unlock **env_build.xml**, run `case.build --clean <../Tools_user/case.build.html>`_. + +- Variables in **env_run.xml**, **env_batch.xml** and **env_archive.xml** are never locked, and most can be changed at any time. + +- There are some exceptions in the **env_batch.xml** file. + +======================================================== +Adding a --user-mods-dir argument to **create_newcase** +======================================================== + +A user may want to customize a target case with a combination of +``user_nl_xxx`` file modifications and/or ``SourceMods`` for some +components and/or **xmlchange** commands. As an example, the user +might want to carry out a series of experiments based on a common set +of changes to the namelists, source code and/or case xml settings. 
+Rather than make these changes each time a new experimental +``CASEROOT`` is generated, the user can create a directory on local +disk with a set of changes that will be applied to each case. + +As an example, the directory could contain the following files: :: + + > user_nl_cpl + > shell_commands (this would contain ./xmlchange commands) + > SourceMods/src.cam/dyncomp.F90 + +It is important to note that the file containing the **xmlchange** +commands must be named ``shell_commands`` in order for it to be recognised +and run upon case creation. + +The structure of the component directories does not need to be the +same as in the component source code. As an example, should the user +want to modify the ``src/dynamics/eul/dyncomp.F90`` file within the +CAM source code, the modified file should be put into the directory +``SourceMods/src.cam`` directly. There is no need to mimic the source +code structure, such as ``SourceMods/src.cam/dynamics/eul``. + +When the user calls **create_newcase** with the ``--user-mods-dir`` pointing to the +full pathname of the directory containing these changes, then the ``CASEROOT`` will be +created with these changes applied. diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/grids.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/grids.rst.txt new file mode 100644 index 00000000000..1ebe44f8171 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/grids.rst.txt @@ -0,0 +1,214 @@ +.. _grids: + +======================== +Model grids +======================== + +CIME looks at the xml node ``GRIDS_SPEC_FILE`` in the **$CIMEROOT/config/$models/config_files.xml** file to identify supported out-of-the-box model grids for the target model. 
+ +The node has the following contents: +:: + + + char + $CIMEROOT/cime_config/$MODEL/config_grids.xml + case_last + env_case.xml + file containing specification of all supported model grids, domains and mapping files (for documentation only - DO NOT EDIT) + $CIMEROOT/cime_config/xml_schemas/config_grids_v2.xsd + + +Grid longname +------------- + +CIME model grids generally are associated with a specific combination of atmosphere, land, land-ice, river-runoff and ocean/ice grids. The naming convention for these grids uses only atmosphere, land, and ocean/ice grid specifications. + +A model grid longname has the form:: + + a%name_l%name_oi%name_r%name_m%mask_g%name_w%name + +For reference:: + + a% = atmosphere grid + l% = land grid + oi% = ocean/sea-ice grid (must be the same) + r% = river grid + m% = ocean mask grid + g% = internal land-ice grid + w% = wave component grid + +The ocean mask grid determines land/ocean boundaries in the model. +On the ocean grid, a grid cell is assumed to be either all ocean or all land. +The land mask on the land grid is obtained by mapping the ocean mask +(using first-order conservative mapping) from the ocean grid to the land grid. + +From the point of view of model coupling, the glc grid is assumed to +be identical to the land grid. The internal land-ice grid can be different, +however, and is specified by the g% value. + +As an example, examine this actual grid longname:: + + a%ne30np4_l%ne30np4_oi%gx1v7_r%r05_m%gx1v7_g%null_w%null + +It refers to a model grid with a ne30np4 spectral element (approximately 1-degree) atmosphere and land grids, gx1v7 Greenland pole, 1-degree ocean and sea-ice grids, a 1/2 degree river routing grid, null wave and internal cism grids, and a gx1v7 ocean mask. +The alias for this grid is ne30_g17. + +CIME also permits users to introduce their own :ref:`user-defined grids `. 
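The grid longname above splits mechanically on ``_``, with each field carrying one of the prefix codes just listed (``a%``, ``l%``, ``oi%``, ``r%``, ``m%``, ``g%``, ``w%``). A minimal sketch, not CIME's actual parser:

```python
# Sketch: split a model grid longname into its per-component grids using the
# prefix codes described above.
def parse_grid_longname(longname):
    grids = {}
    for field in longname.split("_"):
        prefix, _, name = field.partition("%")
        grids[prefix] = name
    return grids

grids = parse_grid_longname(
    "a%ne30np4_l%ne30np4_oi%gx1v7_r%r05_m%gx1v7_g%null_w%null")
print(grids["oi"])  # gx1v7
print(grids["g"])   # null
```

Running this on the example longname recovers the same per-component grids described in the prose above.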
+ +Component grids are denoted by the following naming convention: + +- "[dlat]x[dlon]" are regular lon/lat finite volume grids where dlat and dlon are the approximate grid spacing. The shorthand convention is "fnn" where nn generally is a pair of numbers indicating the resolution. An example is 1.9x2.5 or f19 for the approximately "2-degree" finite-volume grid. Note that CAM uses an [nlat]x[nlon] naming convention internally for this grid. + +- "Tnn" are spectral lon/lat grids where nn is the spectral truncation value for the resolution. The shorthand name is identical. Example: T85. + +- "ne[X]np[Y]" are cubed sphere resolutions where X and Y are integers. The short name generally is ne[X]. Examples: ne30np4 or ne30. + +- "pt1" is a single grid point. + +- "gx[D]v[n]" is a POP displaced pole grid where D is the approximate resolution in degrees and n is the grid version. The short name generally is g[D][n]. An example is gx1v7 or g17 for a grid of approximately 1-degree resolution. + +- "tx[D]v[n]" is a POP tripole grid where D is the approximate resolution in degrees and n is the grid version. + +- "oRSS[x]to[y]" is an MPAS grid with grid spacing from x to y kilometers. + +- "oEC[x]to[y]" is an MPAS grid with grid spacing from x to y kilometers. + +.. _adding-cases: + +Adding grids +------------- + +.. _adding-a-grid: + +CIME supports numerous out-of-the-box model resolutions. To see the grids that are supported, call `query_config <../Tools_user/query_config.html>`_ as shown below. + :: + + > query_config --grids + +The most common resolutions have the atmosphere and land components on one grid and the ocean and ice on a second grid. The following overview assumes that this is the case. +The naming convention looks like *f19_g17*, where the f19 indicates that the atmosphere and land are on the 1.9x2.5 (finite volume dycore) grid while the g17 means the ocean and ice are on the gx1v7 one-degree displaced pole grid. 
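The "fnn" shorthand used in aliases like *f19_g17* follows the convention described above: drop the decimal point from the latitude spacing. A toy sketch of that convention (not a CIME API, and it only covers the finite-volume case):

```python
# Sketch of the "fnn" shorthand for regular finite-volume grids:
# 1.9x2.5 -> f19, 0.9x1.25 -> f09 (drop the decimal point from dlat).
def fv_shorthand(grid):
    dlat = grid.split("x")[0]           # "1.9" from "1.9x2.5"
    return "f" + dlat.replace(".", "")  # -> "f19"

print(fv_shorthand("1.9x2.5"))   # f19
print(fv_shorthand("0.9x1.25"))  # f09
```

The other families (spectral "Tnn", cubed-sphere "ne[X]", ocean "g[D][n]") each have their own shorthand rules as listed above.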
+ +CIME enables users to add their own component grid combinations. +The steps for adding a new component grid to the model system follow. This process can be simplified if the atmosphere and land are running on the same grid. + +1. The first step is to generate SCRIP grid files for the atmosphere, land, ocean, land-ice, river and wave component grids that will comprise your model grid. + If you are introducing just one new grid, you can leverage SCRIP grid files that are already in place for the other components. + There is no supported functionality for creating the SCRIP format file. + +2. Build the **check_map** utility by following the instructions in **$CIMEROOT/tools/mapping/check_maps/INSTALL**. Also confirm that the ESMF toolkit is installed on your machine. + + When you add new user-defined grid files, you also need to generate a set of mapping files so the coupler can send data from a component on one grid to a component on another grid. + There is an ESMF tool that tests the mapping file by comparing a mapping of a smooth function to its true value on the destination grid. + We have tweaked this utility to test a suite of smooth functions, as well as ensure conservation (when the map is conservative). + Before generating mapping functions it is *highly recommended* that you build this utility. + +3. Generate these mapping files: + :: + + atm <-> ocn + atm <-> wav + lnd <-> rof + lnd <-> glc + ocn <-> wav + rof -> ocn + + Using the SCRIP grid files from Step 1, generate a set of conservative (area-averaged) and non-conservative (patch and bilinear) mapping files. + + You can do this by calling **gen_cesm_maps.sh** in ``$CIMEROOT/tools/mapping/gen_mapping_files/``. + This script generates all the mapping files needed except ``rof -> ocn``, which is discussed below. + This script uses the ESMF offline weight generation utility, which you must build *prior* to running **gen_cesm_maps.sh**. 
+ + The **README** file in the **gen_mapping_files/** directory describes how to run **gen_cesm_maps.sh**. The basic usage is shown here: + :: + + > cd $CIMEROOT/tools/mapping/gen_mapping_files + > ./gen_cesm_maps.sh \ + --fileocn \ + --fileatm \ + --filelnd \ + --filertm \ + --nameocn \ + --nameatm \ + --namelnd \ + --namertm + + This command generates the following mapping files: + :: + + map_atmname_TO_ocnname_aave.yymmdd.nc + map_atmname_TO_ocnname_blin.yymmdd.nc + map_atmname_TO_ocnname_patc.yymmdd.nc + map_ocnname_TO_atmname_aave.yymmdd.nc + map_ocnname_TO_atmname_blin.yymmdd.nc + map_atmname_TO_lndname_aave.yymmdd.nc + map_atmname_TO_lndname_blin.yymmdd.nc + map_lndname_TO_atmname_aave.yymmdd.nc + map_ocnname_TO_lndname_aave.yymmdd.nc + map_lndname_TO_rtmname_aave.yymmdd.nc + map_rtmname_TO_lndname_aave.yymmdd.nc + + .. note:: You do not need to specify all four grids. For example, if you are running with the atmosphere and land on the same grid, then you do not need to specify the land grid (and atm<->rtm maps will be generated). + If you also omit the runoff grid, then only the 5 atm<->ocn maps will be generated. + + .. note:: ESMF_RegridWeightGen runs in parallel, and the ``gen_cesm_maps.sh`` script has been written to run on yellowstone. + To run on any other machine, you may need to add some environment variables to ``$CIMEROOT/tools/mapping/gen_mapping_files/gen_ESMF_mapping_file/create_ESMF_map.sh`` -- search for hostname to see where to edit the file. + +4. Generate atmosphere, land and ocean / ice domain files. + + Using the conservative ocean to land and ocean to atmosphere mapping files created in the previous step, you can create domain files for the atmosphere, land, and ocean; these are basically grid files with consistent masks and fractions. + You make these files by calling **gen_domain** in **$CIMEROOT/tools/mapping/gen_domain_files**. 
+ The **INSTALL** file in the **gen_domain_files/** directory describes how to build the **gen_domain** executable. The **README** file in the same directory explains how to use the tool. The basic usage is: + :: + + > ./gen_domain -m ../gen_mapping_files/map_ocnname_TO_lndname_aave.yymmdd.nc -o ocnname -l lndname + > ./gen_domain -m ../gen_mapping_files/map_ocnname_TO_atmname_aave.yymmdd.nc -o ocnname -l atmname + + These commands generate the following domain files: + :: + + domain.lnd.lndname_ocnname.yymmdd.nc + domain.ocn.lndname_ocnname.yymmdd.nc + domain.lnd.atmname_ocnname.yymmdd.nc + domain.ocn.atmname_ocnname.yymmdd.nc + domain.ocn.ocnname.yymmdd.nc + + .. note:: The input atmosphere grid is assumed to be unmasked (global). Land cells whose fraction is zero will have land mask = 0. + + .. note:: If the ocean and land grids *are identical* then the mapping file will simply be unity and the land fraction will be one minus the ocean fraction. + +5. If you are adding a new ocn or rtm grid, create a new rtm->ocn mapping file. (Otherwise you can skip this step.) + The process for mapping from the runoff grid to the ocean grid is currently undergoing many changes. + At this time, if you are running with a new ocean or runoff grid, please contact Michael Levy (mlevy_AT_ucar_DOT_edu) for assistance. If you are running with standard ocean and runoff grids, the mapping file should already exist and you do not need to generate it. + + +6. CESM specific: If you are adding a new atmosphere grid, this means you are also generating a new land grid, and you will need to create a new CLM surface dataset. (Otherwise you can skip this step). + You need to first generate mapping files for CLM surface dataset (since this is a non-standard grid). + :: + + > cd $CIMEROOT/../components/clm/tools/mkmapdata + > ./mkmapdata.sh --gridfile --res --gridtype global + + These mapping files are then used to generate CLM surface dataset. 
Below is an example for a current day surface dataset (model year 2000). + + :: + + > cd $CIMEROOT/../components/clm/tools/mksurfdata_map + > ./mksurfdata.pl -res usrspec -usr_gname <gridname> -usr_gdate yymmdd -y 2000 + +7. Create grid file needed for create_newcase. + The next step is to add the necessary new entries in the appropriate ``config_grids.xml`` file. + You will need to modify ``$CIMEROOT/config/cesm/config_grids.xml`` or ``$CIMEROOT/config/e3sm/config_grids.xml`` depending on the value of ``$CIME_MODEL``. + You will need to: + + - add a single ``<model_grid>`` entry + - add possibly multiple ``<domain>`` entries for every new component grid that you have added + - add possibly multiple ``<gridmap>`` entries for all the new component combinations that require new mapping files + +8. Test new grid. + + Below we assume that the new grid is an atmosphere grid. + :: + + Test the new grid with all data components. + (write an example) + Test the new grid with CAM(newgrid), CLM(newgrid), DOCN(gx1v6), DICE(gx1v6) + (write an example) diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/index.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/index.rst.txt new file mode 100644 index 00000000000..9df9b084490 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/index.rst.txt @@ -0,0 +1,55 @@ +.. on documentation master file, created by + sphinx-quickstart on Tue Jan 31 19:46:36 2017. + You can adapt this file completely to your liking, but it should at least + contain the root `toctree` directive. + +.. _users-guide1: + +####################################### +Case Control System Part 1: Basic Usage +####################################### + +.. toctree:: + :maxdepth: 2 + :numbered: + + introduction-and-overview.rst + create-a-case.rst + setting-up-a-case.rst + building-a-case.rst + running-a-case.rst + cloning-a-case.rst + cime-change-namelist.rst + cime-config.rst + cime-customize.rst + troubleshooting.rst + +.. 
_users-guide2: + +####################################################################################### +Case Control System Part 2: Configuration, Porting, Testing and Use Cases +####################################################################################### + +.. toctree:: + :maxdepth: 2 + :numbered: + + cime-internals.rst + compsets.rst + grids.rst + machine.rst + pes-threads.rst + porting-cime.rst + timers.rst + testing.rst + unit_testing.rst + multi-instance.rst + workflows.rst + cime-dir.rst + +Indices and tables +================== + +* :ref:`genindex` +* :ref:`modindex` +* :ref:`search` diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/introduction-and-overview.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/introduction-and-overview.rst.txt new file mode 100644 index 00000000000..2371ef0fddc --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/introduction-and-overview.rst.txt @@ -0,0 +1,210 @@ +.. _introduction-and-overview: + +.. role:: red + + +************* +Introduction +************* + +Part 1 of this guide explains the basic commands in the CIME Case Control System +that are needed to get a model running. + +Prerequisites +============= + +Part 1 of this guide assumes that CIME or a CIME-driven model and the necessary input files +have been installed on the computer you are using. If that is not the case, see :ref:`Porting CIME`. + +Other prerequisites: + +- Familiarity with basic climate modeling concepts. + +- Familiarity with UNIX command line terminals and the UNIX development environment. + +- A correct version of the Python interpreter. + +CIME's commands are Python scripts and require a correct version of +the Python interpreter to be installed. The Python version must be +greater than 2.7. Determine which version you have +like this: +:: + + > python --version + +Consult your local documentation if you need to update your python version. 
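Since the Case Control System commands are themselves Python scripts, the same interpreter check can be done programmatically. A minimal sketch of the version requirement stated above (this snippet is illustrative and not part of CIME):

```python
import sys

# CIME's scripts require a Python interpreter newer than 2.7 (see above);
# the tuple below mirrors that floor.
REQUIRED = (2, 7)

version = sys.version_info[:2]
if version < REQUIRED:
    raise RuntimeError("Python >= %d.%d required, found %d.%d" % (REQUIRED + version))
print("Python %d.%d is new enough for CIME" % version)
```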
+ +Key Terms and concepts +====================== + +The following key terms and concepts are ingrained in CIME and used frequently in this documentation. +See the :ref:`glossary` for a more complete list of terms. + +**components** + + In CIME, a coupled earth system model is made up of *components* that interact through a coupler and are all controlled by a driver. + + In the current version of CIME, there are 7 physical components allowed. They are: + + atmosphere, ocean, sea-ice, land surface, river, ice sheet, ocean waves + + Components are also referred to as "models". The choice of 7 is partly historical and partly determined by the physics of the + Earth system: these 7 components + occupy physically distinct domains in the Earth system and/or require different numerical grids for solving. + + +**component types** + + For each of the 7 physical components (models), there can be three different implementations in a CIME-driven coupled model. + + *active*: Solve a complex set of equations to describe the model's behavior. Also called *prognostic* or *full* models. + These can be full General Circulation Models. Multiple active models might be available (for example POP and MPAS-ocean to represent the global ocean) but only one ocean or atmosphere model at a time can be used in a component set. + + *data*: For some climate problems, it is necessary to reduce feedbacks within the system by replacing an active model with a + version that sends and receives the same variables to and from other models, but with the values read from files rather + than computed from the equations. The values received are ignored. These active-model substitutes are called *data models*. + CIME provides data models for each of the possible components. You could add your own data model implementation of a component + but as for active models only one at a time can be used. 
+ + *stub*: For some configurations, no data model is needed, so CIME provides *stub* versions that simply occupy the + required place in the driver and do not send or receive any data. + +**component set** or **compset**: The particular combination of active, data and stub versions of the 7 components is referred to + as a *component set* or *compset*. The Case Control System allows one to define + several possible compsets and configure and run them on supported platforms. See :ref:`Component Sets` for more information. + +**grid** or **model grid**: + Each active model must solve its equations on a numerical grid. CIME allows models within the system to have + different grids. The resulting set of all numerical grids is called the *model grid* or sometimes just the *grid*, where + *grid* is a unique name that denotes a set of numerical grids. Sometimes the *resolution* also refers to a specific set + of grids. + +**machine and compilers**: + The *machine* is the computer you are using to run CIME and build and run the climate model. It could be a workstation + or a national supercomputer. The exact name of *machine* is typically the UNIX hostname but it could be any string. A machine + may have one or more versions of Fortran, C and C++ *compilers* that are needed to compile the model's source code and CIME. + +**case**: + To build and execute a CIME-enabled climate model, you have to make choices of compset, model grid, + machine and compiler. The collection of these choices, and any additional + customizations you may make, is called the *case*. + +**out-of-the-box**: + Any case that can be defined by the coupled model's CIME configuration files and built with only basic commands in the + CIME Case Control System is an "out-of-the-box" case. 
Since CIME and its configuration files are kept with + the model source code and version-controlled together, it's possible to match supported out-of-the-box cases with specific + versions of the model source code, promoting reproducibility and provenance. An out-of-the-box case is also called a *base case*. + +CIME and your environment +========================= + +Before using any CIME commands, set the ``CIME_MODEL`` environment variable. In bash, use **export** as shown and replace +**<model>** with the appropriate text. Current possibilities are "e3sm" or "cesm." +:: + + > export CIME_MODEL=<model> + +There are a number of possible ways to set CIME variables. +For variables that can be set in more than one way, the order of precedence is: + +- variable appears in a command line argument to a CIME command + +- variable is set as an environment variable + +- variable is set in ``$HOME/.cime/config`` as explained further :ref:`here`. + +- variable is set in a ``$CASEROOT`` xml file + +Quick start +================== + +To see an example of how a case is created, configured, built and run with CIME, execute the following commands. (This assumes that CIME has been ported to your current machine.) +:: + + > cd cime/scripts + > ./create_newcase --case mycase --compset X --res f19_g16 + > cd mycase + > ./case.setup + > ./case.build + > ./case.submit + +The output from each command is explained in the following sections. + +After you submit the case, you can follow the progress of your run by monitoring the **CaseStatus** file. + +:: + + > tail CaseStatus + +Repeat the command until you see the message ``case.run success``. + + +Discovering available cases with **query_config** +================================================= + +Your CIME-driven model has many more possible cases besides the simple one in the above Quick Start. 
+ +Use the utility `query_config <../Tools_user/query_config.html>`_ to see which out-of-the-box compsets, components, grids and machines are available for your model. + +If CIME is downloaded in standalone mode, only standalone CIME compsets can be queried. + +If CIME is part of a CIME-driven model, `query_config <../Tools_user/query_config.html>`_ will allow you to query all prognostic component compsets. + +To see lists of available compsets, components, grids and machines, look at the **help** text:: + + > query_config --help + +To see all available component sets, try:: + + > query_config --compsets all + +**Usage examples** + +To run `query_config <../Tools_user/query_config.html>`_ for compset information, follow this example, where **drv** is the component name:: + + > query_config --compsets drv + +The output will be similar to this:: + + -------------------------------------- + Compset Short Name: Compset Long Name + -------------------------------------- + A : 2000_DATM%NYF_SLND_DICE%SSMI_DOCN%DOM_DROF%NYF_SGLC_SWAV + ADWAV : 2000_SATM_SLND_SICE_SOCN_SROF_SGLC_DWAV%CLIMO + S : 2000_SATM_SLND_SICE_SOCN_SROF_SGLC_SWAV_SESP + ADLND : 2000_SATM_DLND%SCPL_SICE_SOCN_SROF_SGLC_SWAV + ADESP_TEST : 2000_DATM%NYF_SLND_DICE%SSMI_DOCN%DOM_DROF%NYF_SGLC_SWAV_DESP%TEST + X : 2000_XATM_XLND_XICE_XOCN_XROF_XGLC_XWAV + ADESP : 2000_DATM%NYF_SLND_DICE%SSMI_DOCN%DOM_DROF%NYF_SGLC_SWAV_DESP + AIAF : 2000_DATM%IAF_SLND_DICE%IAF_DOCN%IAF_DROF%IAF_SGLC_SWAV + +Each model component specifies its own definitions of what can appear after the **%** modifier in the compset longname (for example, **DOM** in **DOCN%DOM**). 
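The longname convention shown above (an initialization-time field followed by underscore-separated component fields, each with an optional ``%`` modifier) can be pulled apart mechanically. A hedged sketch, not a CIME API; it assumes no modifier itself contains an underscore (a name like ``DOCN%SST_AQUAP`` would need special handling):

```python
def split_compset(longname):
    """Split a compset longname into (inittime, [(component, modifier), ...])."""
    fields = longname.split("_")
    inittime, parsed = fields[0], []
    for field in fields[1:]:
        # partition() keeps the component name and any %modifier separate.
        name, _, modifier = field.partition("%")
        parsed.append((name, modifier or None))
    return inittime, parsed

inittime, comps = split_compset(
    "2000_DATM%NYF_SLND_DICE%SSMI_DOCN%DOM_DROF%NYF_SGLC_SWAV")
print(inittime)   # 2000
print(comps[0])   # ('DATM', 'NYF')
print(comps[3])   # ('DOCN', 'DOM')
```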
+ +To see what the supported modifiers are for **DOCN**, run `query_config <../Tools_user/query_config.html>`_ as in this example:: + + > query_config --component docn + +The output will be similar to this:: + + ========================================= + DOCN naming conventions + ========================================= + + _DOCN%AQP1 : docn prescribed aquaplanet sst - option 1 + _DOCN%AQP10 : docn prescribed aquaplanet sst - option 10 + _DOCN%AQP2 : docn prescribed aquaplanet sst - option 2 + _DOCN%AQP3 : docn prescribed aquaplanet sst - option 3 + _DOCN%AQP4 : docn prescribed aquaplanet sst - option 4 + _DOCN%AQP5 : docn prescribed aquaplanet sst - option 5 + _DOCN%AQP6 : docn prescribed aquaplanet sst - option 6 + _DOCN%AQP7 : docn prescribed aquaplanet sst - option 7 + _DOCN%AQP8 : docn prescribed aquaplanet sst - option 8 + _DOCN%AQP9 : docn prescribed aquaplanet sst - option 9 + _DOCN%DOM : docn prescribed ocean mode + _DOCN%IAF : docn interannual mode + _DOCN%NULL : docn null mode + _DOCN%SOM : docn slab ocean mode + _DOCN%SOMAQP : docn aquaplanet slab ocean mode + _DOCN%SST_AQUAP : docn aquaplanet mode + +For more details on how CIME determines the output for query_config, see :ref:`Component Sets`. diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/machine.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/machine.rst.txt new file mode 100644 index 00000000000..b349c61f6b0 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/machine.rst.txt @@ -0,0 +1,213 @@ +.. _machine: + +======================== +Defining the machine +======================== + +CIME looks at the xml node ``MACHINES_SPEC_FILE`` in the **config_files.xml** file to identify supported out-of-the-box machines for the target model. 
The node has the following contents: +:: + + <entry id="MACHINES_SPEC_FILE"> + <type>char</type> + <default_value>$CIMEROOT/cime_config/$MODEL/machines/config_machines.xml</default_value> + <group>case_last</group> + <file>env_case.xml</file> + <desc>file containing machine specifications for target model primary component (for documentation only - DO NOT EDIT)</desc> + <schema>$CIMEROOT/cime_config/xml_schemas/config_machines.xsd</schema> + </entry> + +You can supplement what is in the MACHINES_SPEC_FILE by adding a config_machines.xml file to your CIME config directory. + +.. _machinefile: + +config_machines.xml - machine specific file +-------------------------------------------- + +Each ``<machine>`` tag requires the following input: + +* ``DESC``: a text description of the machine +* ``NODENAME_REGEX``: a regular expression used to identify the machine. It must work on compute nodes as well as login nodes. + | Use the ``--machine`` option for **create_test** or **create_newcase** if this flag is not available. +* ``OS``: the machine's operating system +* ``PROXY``: optional http proxy for access to the internet +* ``COMPILERS``: compilers supported on the machine, in comma-separated list, default first +* ``MPILIBS``: mpilibs supported on the machine, in comma-separated list, default first +* ``PROJECT``: a project or account number used for batch jobs; can be overridden in environment or in **$HOME/.cime/config** +* ``SAVE_TIMING_DIR``: (E3SM only) target directory for archiving timing output +* ``SAVE_TIMING_DIR_PROJECTS``: (E3SM only) projects whose jobs archive timing output +* ``CIME_OUTPUT_ROOT``: Base directory for case output; the **bld** and **run** directories are written below here +* ``DIN_LOC_ROOT``: location of the input data directory +* ``DIN_LOC_ROOT_CLMFORC``: optional input location for clm forcing data +* ``DOUT_S_ROOT``: root directory of short-term archive files +* ``DOUT_L_MSROOT``: root directory on mass store system for long-term archive files +* ``BASELINE_ROOT``: root directory for system test baseline files +* ``CCSM_CPRNC``: location of the cprnc tool, which compares model output in 
testing +* ``GMAKE``: gnu-compatible make tool; default is "gmake" +* ``GMAKE_J``: optional number of threads to pass to the gmake flag +* ``TESTS``: (E3SM only) list of tests to run on the machine +* ``BATCH_SYSTEM``: batch system used on this machine (none is okay) +* ``SUPPORTED_BY``: contact information for support for this system +* ``MAX_TASKS_PER_NODE``: maximum number of threads/tasks per shared memory node on the machine +* ``MAX_MPITASKS_PER_NODE``: number of physical PES per shared node on the machine. In practice the MPI tasks per node will not exceed this value. +* ``PROJECT_REQUIRED``: Does this machine require a project to be specified to the batch system? +* ``mpirun``: The mpi exec to start a job on this machine. + This is itself an element that has sub-elements that must be filled: + + * Must have a required ``<executable>`` element + * May have optional attributes of ``compiler``, ``mpilib`` and/or ``threaded`` + * May have an optional ``<arguments>`` element which in turn contains one or more ``<arg>`` elements. + These specify the arguments to the mpi executable and are dependent on your mpi library implementation. + * May have an optional ``<run_exe>`` element which overrides the ``default_run_exe`` + * May have an optional ``<run_misc_suffix>`` element which overrides the ``default_run_misc_suffix`` + * May have an optional ``<aprun_mode>`` element which controls how CIME generates arguments when ``<executable>`` contains ``aprun``. + + The ``<aprun_mode>`` element can be one of the following. The default value is ``ignore``. + + * ``ignore`` will cause CIME to ignore its aprun module and join the values found in ``<arguments>``. + * ``default`` will use CIME's aprun module to generate arguments. + * ``override`` behaves the same as ``default`` except it will use ``<arguments>`` to mutate the generated arguments. When using this mode a ``position`` attribute can be placed on ``<arg>`` tags to specify how it's used. + + The ``position`` attribute on ``<arg>`` can take one of the following values. The default value is ``per``. 
+ + * ``global`` causes the value of the ``<arg>`` element to be used as a global argument for ``aprun``. + * ``per`` causes the value of the ``<arg>`` element to be appended to each separate binary's arguments. + + Example using ``override``: + :: + + <executable>aprun</executable> + <aprun_mode>override</aprun_mode> + <arguments> + <arg position="global">-e DEBUG=true</arg> + <arg>-j 20</arg> + </arguments> + + Sample command output: + :: + + aprun -e DEBUG=true ... -j 20 e3sm.exe : ... -j 20 e3sm.exe + +* ``module_system``: How and what modules to load on this system. Module systems allow you to easily load multiple compiler environments on a machine. CIME provides support for two types of module tools: ``module`` and ``soft``. If neither of these is available on your machine, simply set ``<module_system type="none"/>``. + +* ``environment_variables``: environment_variables to set on the system + This contains sub-elements ``<env>`` with the ``name`` attribute specifying the environment variable name, and the element value specifying the corresponding environment variable value. If the element value is not set, the corresponding environment variable will be unset in your shell. + + For example, the following sets the environment variable ``OMP_STACKSIZE`` to 256M: + :: + + <env name="OMP_STACKSIZE">256M</env> + + The following unsets this environment variable in the shell: + :: + + <env name="OMP_STACKSIZE"/> + + .. note:: These changes are **ONLY** activated for the CIME build and run environment, **BUT NOT** for your login shell. To activate them for your login shell, source either **$CASEROOT/.env_mach_specific.sh** or **$CASEROOT/.env_mach_specific.csh**, depending on your shell. + + + +Batch system definition +----------------------- + +CIME looks at the xml node ``BATCH_SPEC_FILE`` in the **config_files.xml** file to identify supported out-of-the-box batch system details for the target model. 
The node has the following contents: +:: + + <entry id="BATCH_SPEC_FILE"> + <type>char</type> + <default_value>$CIMEROOT/cime_config/$MODEL/machines/config_batch.xml</default_value> + <group>case_last</group> + <file>env_case.xml</file> + <desc>file containing batch system details for target system (for documentation only - DO NOT EDIT)</desc> + <schema>$CIMEROOT/cime_config/xml_schemas/config_batch.xsd</schema> + </entry> + +.. _batchfile: + +config_batch.xml - batch directives +------------------------------------------------- + +The **config_batch.xml** schema is defined in **$CIMEROOT/config/xml_schemas/config_batch.xsd**. + +CIME supports these batch systems: pbs, cobalt, lsf and slurm. + +The entries in **config_batch.xml** are hierarchical. + +#. General configurations for each system are provided at the top of the file. + +#. Specific modifications for a given machine are provided below. In particular, each machine should define its own queues. + +#. Following is a machine-specific queue section. This section details the parameters for each queue on the target machine. + +#. The last section describes several things: + + - each job that will be submitted to the queue for a CIME workflow, + + - the template file that will be used to generate that job, + + - the prerequisites that must be met before the job is submitted, and + + - the dependencies that must be satisfied before the job is run. + +By default the CIME workflow consists of two jobs (**case.run**, **case.st_archive**). + +In addition, there is a **case.test** job that is used by the CIME system test workflow. + + +.. _defining-compiler-settings: + +Compiler settings +----------------- + +CIME looks at the xml element ``CMAKE_MACROS_DIR`` in the **config_files.xml** file to identify supported out-of-the-box compiler details for the target model. 
The node has the following contents: +:: + + <entry id="CMAKE_MACROS_DIR"> + <type>char</type> + <default_value>$CIMEROOT/config/$MODEL/machines/cmake_macros</default_value> + <group>case_last</group> + <file>env_case.xml</file> + <desc>Directory containing cmake macros (for documentation only - DO NOT EDIT)</desc> + </entry> + +Additional compilers are made available by adding cmake macros files to the directory pointed to by CMAKE_MACROS_DIR or to your $HOME/.cime directory. + +.. _compilerfile: + +config_compilers.xml - compiler paths and options **DEPRECATED use cmake_macros** +--------------------------------------------------------------------------------- + +The **config_compilers.xml** file defines compiler flags for building CIME (and also CESM and E3SM prognostic CIME-driven components). + +#. General compiler flags (e.g., for the gnu compiler) that are machine- and component-independent are listed first. + +#. Compiler flags specific to a particular operating system are listed next. + +#. Compiler flags that are specific to particular machines are listed next. + +#. Compiler flags that are specific to particular CIME-driven components are listed last. + +The order of listing is a convention and not a requirement. + +The possible elements and attributes that can exist in the file are documented in **$CIME/config/xml_schemas/config_compilers_v2.xsd**. + +To clarify several conventions: + +- The ``<append>`` element implies that any previous definition of that element's parent will be appended with the new element value. + As an example, the following entry in **config_compilers.xml** would append the value of ``CPPDEFS`` with ``-D$OS`` where ``$OS`` is the environment value of ``OS``. + + :: + + <compiler> + <CPPDEFS> + <append>-D$OS</append> + </CPPDEFS> + </compiler> + +- The ``<base>`` element overwrites its parent element's value. For example, the following entry would overwrite the ``CONFIG_ARGS`` for machine ``melvin`` with a ``gnu`` compiler to be ``--host=Linux``. 
+ + :: + + <compiler MACH="melvin" COMPILER="gnu"> + <CONFIG_ARGS> + <base>--host=Linux</base> + </CONFIG_ARGS> + </compiler> + + diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/multi-instance.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/multi-instance.rst.txt new file mode 100644 index 00000000000..747e578f533 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/multi-instance.rst.txt @@ -0,0 +1,110 @@ +.. _multi-instance: + +Multi-instance component functionality +====================================== + +The CIME coupling infrastructure is capable of running multiple +component instances (ensembles) under one model executable. There are +two modes of ensemble capability: single driver, in which all component +instances are handled by a single driver/coupler component, or +multi-driver, in which each instance includes a separate driver/coupler +component. In the multi-driver mode the entire model is duplicated +for each instance, while in the single driver mode only active +components need be duplicated. In most cases the multi-driver mode +will give better performance and should be used. + +The primary motivation for this development was to be able to run an +ensemble Kalman-Filter for data assimilation and parameter estimation +(UQ, for example). However, it also provides the ability to run a set +of experiments within a single model executable where each instance +can have a different namelist, and to have all the output go to one +directory. + +An F compset is used in the following example. Using the +multiple-instance code involves the following steps: + +1. Create the case. +:: + + > create_newcase --case Fmulti --compset F2000_DEV --res f19_f19_mg17 + > cd Fmulti + +2. 
Assume this is the out-of-the-box pe-layout: +:: + + Comp NTASKS NTHRDS ROOTPE + CPL : 144/ 1; 0 + ATM : 144/ 1; 0 + LND : 144/ 1; 0 + ICE : 144/ 1; 0 + OCN : 144/ 1; 0 + ROF : 144/ 1; 0 + GLC : 144/ 1; 0 + WAV : 144/ 1; 0 + ESP : 1/ 1; 0 + +The atm, lnd, rof and glc are active components in this compset. The ocn is +a prescribed data component; cice is a mixed prescribed/active +component (ice-coverage is prescribed); and wav and esp are stub +components. + +Let's say we want to run two instances of CAM in this experiment. We +will also have to run two instances of CLM, CICE, RTM and GLC. However, we +can run either one or two instances of DOCN, and we can ignore the +stub components since they do not do anything in this compset. + +To run two instances of CAM, CLM, CICE, RTM, GLC and DOCN, invoke the following :ref:`xmlchange` commands in your **$CASEROOT** directory: +:: + + > ./xmlchange NINST_ATM=2 + > ./xmlchange NINST_LND=2 + > ./xmlchange NINST_ICE=2 + > ./xmlchange NINST_ROF=2 + > ./xmlchange NINST_GLC=2 + > ./xmlchange NINST_OCN=2 + +As a result, you will have two instances of CAM, CLM, CICE (prescribed), RTM, GLC, and DOCN, each running concurrently on 72 MPI tasks and all using the same driver/coupler component. In this single driver/coupler mode the number of tasks for each component instance is NTASKS_COMPONENT/NINST_COMPONENT and the total number of tasks is the same as for the single instance case. + +Now consider the multi-driver mode. +To use this mode, change +:: + + > ./xmlchange MULTI_DRIVER=TRUE + +This configuration will run each component instance on the original 144 tasks but will generate two copies of the model (in the same executable) for a total of 288 tasks. + +3. Set up the case +:: + + > ./case.setup + +A new **user_nl_xxx_NNNN** file is generated for each component instance when case.setup is called (where xxx is the component type and NNNN is the number of the component instance). 
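The single-driver versus multi-driver task accounting described above can be sketched as follows (a hedged illustration of the arithmetic, not CIME's actual implementation):

```python
def tasks_per_instance(ntasks, ninst, multi_driver=False):
    """MPI tasks each component instance runs on."""
    # Single driver: NTASKS is split evenly across the NINST instances.
    # Multi-driver: every instance keeps the full NTASKS.
    return ntasks if multi_driver else ntasks // ninst

def total_tasks(ntasks, ninst, multi_driver=False):
    """Total MPI tasks requested for this component."""
    return ntasks * ninst if multi_driver else ntasks

# The example above: NTASKS=144 per component, NINST=2.
print(tasks_per_instance(144, 2))              # 72 tasks per instance
print(total_tasks(144, 2))                     # 144 tasks (single driver)
print(total_tasks(144, 2, multi_driver=True))  # 288 tasks (multi-driver)
```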
+Given the **env_mach_pes.xml** settings above, calling **case.setup** creates these files in **$CASEROOT**: +:: + + user_nl_cam_0001 user_nl_clm_0001 user_nl_docn_0001 user_nl_cice_0001 + user_nl_cism_0001 user_nl_mosart_0001 + user_nl_cam_0002 user_nl_clm_0002 user_nl_docn_0002 user_nl_cice_0002 + user_nl_cism_0002 user_nl_mosart_0002 + user_nl_cpl + +The namelist for each component instance can be modified by changing the corresponding **user_nl_xxx_NNNN** file. +Modifying **user_nl_cam_0002** will result in your namelist changes being active ONLY for the second instance of CAM. +To change the DOCN stream txt file for instance 0002, copy **docn.streams.txt.prescribed_0002** to your **$CASEROOT** directory with the name **user_docn.streams.txt.prescribed_0002** and modify it accordingly. + +Also keep these important points in mind: + +#. Note that these changes can be made at create_newcase time with the option ``--ninst #``, where # is a positive integer; use the additional option ``--multi-driver`` to invoke the multi-driver mode. + +#. **Multiple component instances can differ ONLY in namelist settings; they ALL use the same model executable.** + +#. Calling **case.setup** with ``--clean`` *DOES NOT* remove the **user_nl_xxx_NN** (where xxx is the component name) files created by **case.setup**. + +#. A special variable NINST_LAYOUT is provided for some experimental compsets; its value should be + 'concurrent' for all but a few special cases and it cannot be used if MULTI_DRIVER=TRUE. + +#. In **create_test** these options can be invoked with testname modifiers _N# for the single driver mode and _C# for the multi-driver mode. These are mutually exclusive options; they cannot be combined. + +#. In create_newcase you may use --ninst # to set the number of instances and --multi-driver for multi-driver mode. + +#. 
In multi-driver mode you will always get one instance of each component for each driver/coupler; if you change a case using ``xmlchange MULTI_DRIVER=TRUE``, you will get a number of driver/couplers equal to the maximum NINST value over all components. diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/pes-threads.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/pes-threads.rst.txt new file mode 100644 index 00000000000..43642840ac0 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/pes-threads.rst.txt @@ -0,0 +1,286 @@ +.. _pesthreads: + +================================== +Controlling processors and threads +================================== + +Once a compset and resolution for a case have been defined, CIME +provides ways to define the processor layout the case will use. + +CIME cases have significant flexibility with respect to the layout of +components across different hardware processors. There are up to eight +unique models (atm, lnd, rof, ocn, ice, glc, wav, cpl) that are +managed independently by the CIME driver, each with a unique MPI +communicator. In addition, the driver runs on the union of all +processors and controls the sequencing and hardware partitioning. + +.. _defining-pes: + +pe-settings for a case +------------------------- + +CIME looks at the xml element ``PES_SPEC_FILE`` in the **$CIMEROOT/config/$model/config_files.xml** file to determine where +to find the supported out-of-the-box model pe-settings for the primary component (See :ref:`Compsets` for definition of primary component.) + +When you run `create_newcase <../Tools_user/create_newcase.html>`_, CIME identifies the primary component and the setting of the ``PES_SPEC_FILE`` in the standard output. + +By default, each primary component has a **config_pes.xml** file in +its **cime_config** directory. That file specifies out-of-the-box +pe-layout for compsets that the primary component defines. 
Currently, +the pe-layout can have dependencies on the compset, the model grid and +the target machine. Finally, there might be more than one +out-of-the-box pe-layout that could be used for a compset/grid/machine +combination: one for a low processor setting and one for a high +processor setting. + +A typical entry in a **config_pes.xml** looks like this: + +:: + + <grid name="..."> + <mach name="..."> + <pes pesize="any" compset="..."> + ....... + </pes> + </mach> + </grid> + +Currently, the pesize can have values of ``[any,S,M,L,X1,X2]``. + +Given the various dependencies, CIME uses an order of precedence to determine the optimal match. This order is as follows: + +1. grid match + + | CIME first searches the grid nodes for a grid match in **config_grids.xml**. + | The search is based on a regular expression match for the grid longname. + | All grid matches are then used in the subsequent search. + | If there is no grid match, all nodes that have ``<grid name="any">`` are used in the subsequent search. + +2. machine match + + | CIME next uses the list of nodes obtained in the grid match to search for the machine name using the ``<mach>`` nodes. + | If there is no machine match, then all nodes with ``<mach name="any">`` are used in the subsequent search. + +3. pesize and compset match + + | CIME next uses the list of nodes obtained in the machine match to search for pesize and compset using the ``<pes>`` nodes. + | If there is no match, the node with ``<pes pesize="any" compset="any">`` is used. + +When `create_newcase <../Tools_user/create_newcase.html>`_ is called, it outputs the matches that are found in determining the best out-of-the-box pe-layout. + +Setting the PE layout +--------------------- + +Optimizing the throughput and efficiency of a CIME experiment often +involves customizing the processor (PE) layout. (See :ref:`load +balancing `.) CIME provides significant +flexibility with respect to the layout of components across different +hardware processors. In general, the CIME components -- atm, lnd, +ocn, and so on -- can run on overlapping or mutually unique +processors. 
While each component is associated with a unique MPI +communicator, the CIME driver runs on the union of all processors and +controls the sequencing and hardware partitioning. + +The pe-layout settings are controlled by the **env_mach_pes.xml** file +in ``$CASEROOT``. Variables in this file determine the number +of MPI tasks and OpenMP threads for each component, the number of +instances of each component and the layout of the components across +the hardware processors. The entries in **env_mach_pes.xml** have the +following meanings: + +.. list-table:: Entries in **env_mach_pes.xml** + :widths: 10 40 + :header-rows: 1 + + * - XML variable + - Description + * - MAX_MPITASKS_PER_NODE + - The maximum number of MPI tasks per node. This is defined in **config_machines.xml** and therefore given a default setting, but can be user modified. + * - MAX_TASKS_PER_NODE + - The total number of (MPI tasks) * (OpenMP threads) allowed on a node. This is defined in **config_machines.xml** and therefore given a default setting, but can be user modified. Some computational platforms use a special software customized for the target hardware called symmetric multi-threading (SMT). This allows for over-subscription of the hardware cores. In cases where this is beneficial to model performance, the variable ``MAX_TASKS_PER_NODE`` will be greater than the hardware cores per node as specified by ``MAX_MPITASKS_PER_NODE``. + * - NTASKS + - Total number of MPI tasks. A negative value indicates nodes rather than tasks, where *MAX_MPITASKS_PER_NODE \* -NTASKS* equals the number of MPI tasks. + * - NTHRDS + - Number of OpenMP threads per MPI task. ``NTHRDS`` must be greater than or equal to 1. If ``NTHRDS`` = 1, this generally means threading parallelization will be off for the given component. + * - ROOTPE + - The global MPI task of the component root task; if negative, indicates nodes rather than tasks. The root processor for each component is set relative to the MPI global communicator. 
+   * - PSTRID
+     - The stride of MPI tasks across the global set of pes (for now set to 1). This variable is currently not used and is a placeholder for future development.
+   * - NINST
+     - The number of component instances, which are spread evenly across NTASKS.
+   * - COST_PER_NODE
+     - The number of cores per node used for accounting purposes. You should not normally need to set this, but it is useful for understanding how you will be charged.
+
+Each CIME component has corresponding entries for ``NTASKS``, ``NTHRDS``, ``ROOTPE`` and ``NINST`` in the **env_mach_pes.xml** file. The layout of components on processors has no impact on the science.
+If all components have identical ``NTASKS``, ``NTHRDS``, and ``ROOTPE`` settings, all components will execute sequentially on the same hardware processors.
+
+.. hint:: To view the current settings, use the `pelayout <../Tools_user/pelayout.html>`_ tool.
+
+The time sequencing is hardwired into the driver. Changing
+processor layouts does not change intrinsic coupling lags or coupling
+sequencing.
+
+The coupler component has its own processor set for doing
+computations such as mapping, merging, diagnostics, and flux
+calculation. This is distinct from the driver, which always
+runs on the union of all processors to manage model concurrency and
+sequencing.
+
+For a **fully active configuration**, the atmosphere component is
+hardwired in the driver to never run concurrently with the land or ice
+component. Performance improvements associated with processor layout
+concurrency are therefore constrained in this case: there is
+never a performance reason not to overlap the atmosphere component
+with the land and ice components. Beyond that constraint, the land,
+ice, coupler and ocean models can run concurrently, and the ocean
+model can also run concurrently with the atmosphere model.
+
+.. note:: If **env_mach_pes.xml** is modified after `case.setup <../Tools_user/case.setup.html>`_ has been called, you must run `case.setup --reset <../Tools_user/case.setup.html>`_ and then call `case.build <../Tools_user/case.build.html>`_. **case.build** will only recompile source code that depends on values in **env_mach_pes.xml**.
+
+Case Resource Allocation
+------------------------
+
+Resources for your case will be allocated according to the following logic.
+
+* ``NTASKS`` * ``NTHRDS`` is the total number of hardware processors allocated to a component.
+
+* The total number of cores that are allocated will be based on the product of (1) and (2) below, where
+
+  1. ``MAX(ROOTPE(comp) + NTASKS(comp))`` across all components
+  2. ``MAX(NTHRDS)`` across all components
+
+In the following example, the atmosphere and ocean will run concurrently. The atmosphere will use 16 MPI tasks, each with 4 threads per task, for a total of 64 cores. The ocean will use 16 MPI tasks with 1 thread per task, but since the atmosphere has 4 threads, the ocean will also occupy 64 cores. The total number of cores will be 128. The atmosphere will run on MPI tasks 0-15 and the ocean will run on MPI tasks 16-31 in the global MPI communicator.
+
+::
+
+   NTASKS_ATM=16  NTHRDS_ATM=4  ROOTPE_ATM=0
+   NTASKS_OCN=16  NTHRDS_OCN=1  ROOTPE_OCN=16
+
+CIME ensures that the batch submission script (`case.submit
+<../Tools_user/case.submit.html>`_) will automatically request 128
+hardware processors, and the first 16 MPI tasks will be laid out on
+the first 64 hardware processors with a stride of 4. The next 16 MPI
+tasks are laid out on the second set of 64 hardware processors in the
+same manner, even though the ocean is not threaded. If you had set
+``ROOTPE_OCN`` to 64 in this example, a total of 320 processors would
+be requested, the atmosphere would be laid out on the first 64
+hardware processors in 16x4 fashion, and the ocean model would be laid
+out on hardware processors 256-319. Hardware processors 64-255 would
+be allocated but completely idle.
+
+We strongly encourage you to use the `preview_run
+<../Tools_user/preview_run.html>`_ script to review the environment
+and job submit commands for your case.
+
+.. _optimizing-processor-layout:
+
+Optimizing processor layout
+----------------------------
+
+Load balancing is the practice of specifying a processor layout for a given model configuration
+(compset, grid, and so on) to maximize simulation speed while minimizing processor idle time.
+For a fixed total number of processors, the goal of this optimization is to achieve maximum throughput.
+For a set of processor counts, the purpose is to find several "sweet spots" where
+the model is minimally idle, cost is relatively low, and the throughput is relatively high.
+
+As with most models, increasing the total number of processors normally increases both throughput
+and cost.
+If models scaled linearly, the cost would remain constant across different processor counts,
+but models generally don't scale linearly, so the cost increases as the processor count increases.
+
+Performing a load-balancing exercise on a proposed case before
+undertaking a long production run is recommended practice. Load
+balancing requires you to consider a number of factors, such as which
+components are run; their absolute and relative resolution; cost,
+scaling and processor count sweet spots for each component; and
+internal load imbalance within a component.
+
+It is often best to load balance a system with all significant
+run-time I/O turned off because it occurs infrequently, typically just
+one timestep per simulated month. It is best treated as a separate cost as it
+can otherwise bias interpretation of the overall balance. 
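Throughput and cost, the two quantities traded off in a load-balancing exercise, can be computed directly from the wallclock seconds per model day reported in the timing output. A minimal sketch (the function and argument names are illustrative, not CIME utilities; the formulas are the usual simulated-years-per-day and core-hours-per-simulated-year definitions):

```python
# Convert a measured model cost (wallclock seconds per simulated day)
# and a core count into the two load-balancing metrics discussed here.
# Function and argument names are illustrative, not part of CIME.

def throughput_sypd(seconds_per_model_day):
    """Simulated years per wallclock day."""
    return 86400.0 / (seconds_per_model_day * 365.0)

def cost_pehrs_per_year(seconds_per_model_day, cores):
    """Core-hours charged per simulated year."""
    return cores * seconds_per_model_day * 365.0 / 3600.0

# Example: 30 wallclock seconds per model day on 1024 cores.
print(round(throughput_sypd(30.0), 2))            # 7.89
print(round(cost_pehrs_per_year(30.0, 1024), 0))  # 3115.0
```

Doubling the core count rarely halves the seconds per model day, which is why cost normally rises with processor count even as throughput improves.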
Also, the
+use of OpenMP threading in some or all of the components is dependent
+on the hardware/OS support as well as whether the system supports
+running all MPI and mixed MPI/OpenMP on overlapping processors for
+different components.
+
+Finally, decide whether components should run sequentially, concurrently, or in some combination.
+
+Typically, a series of short test runs with the desired production
+configuration can establish a reasonable load balance setup for the
+production job. The timing output can be used to compare test runs to
+help determine the optimal load balance.
+
+Changing the pe layout of the model has NO IMPACT on the scientific
+results. The basic order of operations and calling sequence are
+hardwired into the driver and do not change with the pe
+layout. However, both CESM and E3SM do impose some constraints on the
+temporal evolution of the components. For example, the prognostic
+atmosphere model always runs sequentially with the ice and land models
+for scientific reasons. As a result, running the atmosphere
+concurrently with the ice and land will result in idle processors at
+some point in the timestepping sequence.
+
+.. hint:: If you need to load balance a fully coupled case, use the :ref:`Load Balancing Tool`
+
+**One approach to load balancing**
+
+Carry out a :ref:`PFS test `. This test is by default a
+20-day model run with restarts and history output turned off. This
+should help you find the layout that has the best load balance for the
+targeted number of processors. This provides a reasonable performance
+estimate for the production run for most of the runtime.
+
+Seasonal variation and spin-up costs can change performance over time,
+so even after a production run has started, review the timing output
+occasionally to see if any layout changes might improve throughput or
+decrease cost.
+
+In determining an optimal load balance for a specific configuration,
+two pieces of information are useful.
+
+* Which components are most expensive.
+
+* How individual components scale. Do they run faster with all-MPI or
+  mixed MPI/OpenMP decomposition strategies? What are their optimal
+  decompositions at each processor count? If the cost and scaling of
+  the components are unknown, several short tests with arbitrary
+  component pe counts can help establish component scaling and sweet
+  spots.
+
+**Determining an optimal load balance**
+
+* Start with the most expensive component and a fixed optimal processor count and decomposition for that component.
+
+* Vary the concurrency and pe counts of the other components.
+
+* Identify a few potential load balance configurations, then run each a few times to establish run-to-run variability and determine the best layout.
+
+In all cases, review the component run times in the timing output file for both overall throughput and independent component timings. Identify idle processors by considering the component concurrency in conjunction with the component timing.
+
+In general, a few component layout options are most reasonable:
+
+* fully sequential,
+* fully sequential except the ocean running concurrently,
+* fully concurrent except the atmosphere running sequentially with the ice, rof, and land components.
+
+The concurrency is limited in part by hardwired sequencing in the
+driver. The sequencing is set by scientific constraints, although
+there may be some additional flexibility with respect to concurrency
+when running with mixed active and data models.
+
+**Some general rules for finding optimal configurations**
+
+- Make sure you have set a processor layout where each hardware processor is assigned to at least one component. There is rarely a reason to have completely idle processors.
+
+- Make sure your cheapest components keep up with your most expensive components. In other words, a component that runs on 1024 processors should not be waiting on a component running on 16 processors.
+ +- Before running the job, make sure the batch queue settings are set correctly for your run. Review the account numbers, queue names and time limits. The ideal time limit, queue and run length are dependent on each other and on the current model throughput. + +- Take full advantage of the hardware resources. If you are charged by the 32-way node, you might as well target a total processor count that is a multiple of 32. + +- Keep a single component on a single node, if possible, to minimize internal component communication cost. + +- Assume that hardware performance can vary due to contention on the interconnect, file systems, or other areas. If you are unsure of a timing result, run cases multiple times. + +The pe-layout and the associated timings are found in the :ref:`timing files ` generated for your run. diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/porting-cime.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/porting-cime.rst.txt new file mode 100644 index 00000000000..1e9236a646b --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/porting-cime.rst.txt @@ -0,0 +1,198 @@ +.. _porting: + +============================================== +Porting and validating CIME on a new platform +============================================== + +One of the first steps for many users is getting CIME-based models running on their local machine. +This section describes that process. + +Required libraries/packages +--------------------------- + +The machine needs to have: + +- a functioning MPI environment (unless you plan to run on a single core with the CIME mpi-serial library). +- build tools gmake and cmake, +- a netcdf library version 4.3 or newer built with the same compiler you will use for CIME. + +A pnetcdf library is optional. + +If you are using MPI, make sure you can run a basic MPI parallel program on your machine before you attempt a CIME port. 
You can use this :ref:`MPI example <mpi-example>` to check.
+
+.. _mpi-example:
+
+An MPI example
+---------------
+
+It is usually very helpful to ensure that you can run a basic MPI parallel program on your machine prior to attempting a CIME port.
+Understanding how to compile and run the program **fhello_world_mpi.F90** shown here could potentially save many hours of frustration.
+::
+
+   program fhello_world_mpi
+     use mpi
+     implicit none
+     integer ( kind = 4 ) error
+     integer ( kind = 4 ) id
+     integer p
+     character(len=MPI_MAX_PROCESSOR_NAME) :: name
+     integer clen
+     integer, allocatable :: mype(:)
+     real ( kind = 8 ) wtime
+
+     call MPI_Init ( error )
+     call MPI_Comm_size ( MPI_COMM_WORLD, p, error )
+     call MPI_Comm_rank ( MPI_COMM_WORLD, id, error )
+     if ( id == 0 ) then
+        wtime = MPI_Wtime ( )
+
+        write ( *, '(a)' ) ' '
+        write ( *, '(a)' ) 'HELLO_MPI - Master process:'
+        write ( *, '(a)' ) '  FORTRAN90/MPI version'
+        write ( *, '(a)' ) ' '
+        write ( *, '(a)' ) '  An MPI test program.'
+        write ( *, '(a)' ) ' '
+        write ( *, '(a,i8)' ) '  The number of processes is ', p
+        write ( *, '(a)' ) ' '
+     end if
+     call MPI_GET_PROCESSOR_NAME(NAME, CLEN, ERROR)
+     write ( *, '(a)' ) ' '
+     write ( *, '(a,i8,a,a)' ) '  Process ', id, ' says "Hello, world!" ', name(1:clen)
+
+     call MPI_Finalize ( error )
+   end program fhello_world_mpi
+
+As an example, on a Mac with 2 cores that has mpich with GNU Fortran, you would issue the following two commands:
+
+::
+
+   > mpif90 fhello_world_mpi.F90 -o hello_world
+   > mpirun -np 2 ./hello_world
+
+CESM Linux and Mac Support
+---------------------------
+
+The distribution of CESM includes machines called **homebrew** and **centos7-linux** in the file **$CIMEROOT/config/cesm/machines/config_machines.xml**.
+Please see the instructions in the file to create the directory structure and use these generic machine definitions.
+
+Steps for porting
+---------------------------
+
+Porting CIME involves several steps. The first step is to define your machine. You can do this in one of two ways:
+
+1. You can edit **$CIMEROOT/config/$model/machines/config_machines.xml** and add an appropriate section for your machine.
+
+2. You can use your **$HOME/.cime** directory (see :ref:`customizing-cime`).
+   In particular, you can create a **$HOME/.cime/config_machines.xml** file with the definition for your machine.
+   A template to create this definition is provided in **$CIMEROOT/config/xml_schemas/config_machines_template.xml**. More details are provided in the template file.
+   In addition, if you have a batch system, you will also need to add a **config_batch.xml** file to your **$HOME/.cime** directory.
+   All files in **$HOME/.cime/** are appended to the xml objects that are read into memory from **$CIME/config/$model**, where **$model** is either ``e3sm`` or ``cesm``.
+
+   .. note:: If you use method (2), you can download CIME updates without affecting your machine definitions in **$HOME/.cime**.
+
+   .. note:: If you will be supporting many users on your new machine, then we recommend using method (1) and issuing a GitHub pull request with your machine updates.
+
+In what follows we outline the process for method (2) above:
+
+- Create a **$HOME/.cime** directory and create a **config_machines.xml** file in that directory.
+
+  This file contains all the information you must set in order to configure a new machine to be CIME-compliant.
+
+  Fill in the contents of **$HOME/.cime/config_machines.xml** that are specific to your machine. For more details see :ref:`the config_machines.xml file `.
+
+  Check that your **config_machines.xml** file conforms to the CIME schema definition by doing the following:
+
+  ::
+
+     xmllint --noout --schema $CIME/config/xml_schemas/config_machines.xsd $HOME/.cime/config_machines.xml
+
+- If you find that you need to introduce compiler settings specific to your machine, create a **$HOME/.cime/*.cmake** file.
+  The default compiler settings are defined in **$CIME/config/$model/machines/cmake_macros/**.
+
+- If you have a batch system, you may also need to create a **$HOME/.cime/config_batch.xml** file.
+  Out-of-the-box batch settings are set in **$CIME/config/$model/machines/config_batch.xml**.
+
+- Once you have defined a basic configuration for your machine in your **$HOME/.cime** xml files, run **scripts_regression_tests.py** interactively. This test is found, and must be run, in the directory **$CIMEROOT/scripts/tests/**.
+  It performs a number of basic unit tests, starting from the simplest and working toward more complicated ones. If you have problems running **scripts_regression_tests.py**, see :ref:`scripts_regression_tests`.
+
+After completing those steps, you are ready to try a case at your target compset and resolution.
+
+Validating a CESM port with prognostic components
+-------------------------------------------------
+
+The following port validation is recommended for any new machine.
+Carrying out these steps does not guarantee the model is running
+properly in all cases nor that the model is scientifically valid on
+the new machine.
+
+In addition to these tests, detailed validation should be carried out
+for any new production run. That means verifying that model restarts
+are bit-for-bit identical with a baseline run, that the model is
+bit-for-bit reproducible when identical cases are run for several
+months, and that production cases are monitored carefully as they
+integrate forward to identify any potential problems as early as
+possible.
+
+Users are responsible for their own validation process,
+especially with respect to science validation.
+
+These are the recommended steps for validating a port for the CESM model:
+
+1. Verify basic functionality of your port by performing the cheyenne "prealpha" tests on your machine. This can be done by issuing the following command:
+
+   ::
+
+      ./create_test --xml-category prealpha --xml-machine cheyenne --xml-compiler intel --machine <your_machine> --compiler <your_compiler>
+
+   This command will run the prealpha tests *defined* for cheyenne with the intel compiler, but will run them on *your* machine with *your* compiler.
+   These tests will be run in **$CIME_OUTPUT_ROOT**. To see the results of the tests, you need to do the following:
+
+   ::
+
+      > $CIME_OUTPUT_ROOT/cs.status.[testid]
+
+   where testid was indicated in the output when calling `create_test <../Tools_user/create_test.html>`_.
+
+2. Carry out ensemble consistency tests:
+
+   This is described in **$CIMEROOT/tools/statistical_ensemble_test/README**.
+   The CESM-ECT (CESM Ensemble Consistency Test) determines whether a new simulation setup (new machine, compiler, etc.) is statistically distinguishable from an accepted ensemble.
+   The ECT process involves comparing several runs (3) generated with the new scenario to an ensemble built on a trusted machine (currently cheyenne).
+   The python ECT tools are located in the pyCECT subdirectory **$CIMEROOT/tools/statistical_ensemble_test/pyCECT**.
+
+   The verification tools in the CESM-ECT suite are:
+
+   ``CAM-ECT``: detects issues in CAM and CLM (12 month runs)
+
+   ``UF-CAM-ECT``: detects issues in CAM and CLM (9 time step runs)
+
+   ``POP-ECT``: detects issues in POP and CICE (12 month runs)
+
+   Follow the instructions in the **README** file to generate three ensemble runs for any of the above tests that are most relevant to your port.
+   Then please go to the `CESM2 ensemble verification website `_, where you can upload your files and subsequently obtain a quick response as to the success or failure of your verification.
+
+Performance tuning of a CESM port
+-------------------------------------------------
+
+Once you have verified that your port is successful, you will want to
+determine the optimal pe-layout for your target configurations
+(i.e. compset/resolution combinations). See the file
+**$CIMEROOT/tools/load_balancing_tools/README** to understand how to
+utilize the load balancing utility. This utility finds reasonable PE
+layouts for CIME-driven models. It will find these from timing files
+you provide or from runs done by the tool.
+
+Once you are happy with the PE-layout for your target configuration,
+you can add it to the relevant **config_pes.xml** file for the
+component that is responsible for generating the PE-layout for the
+target configuration (this is normally referred to as the "primary"
+component).
+
+Timing summaries for every successful case run are located in the
+case subdirectory **$CASEROOT/timing**. In addition, every
+**cpl.log.timestamp** output file contains diagnostic timing
+information. Search for ``tStamp`` in this file to see this
+information. The timing information is useful for tracking down
+temporal variability in model cost due to either inherent model
+variability cost (I/O, spin-up, seasonal, and so on) or hardware. The
+model daily cost generally is pretty constant unless I/O is written
+intermittently, such as at the end of the month.
diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/running-a-case.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/running-a-case.rst.txt
new file mode 100644
index 00000000000..b8366775716
--- /dev/null
+++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/running-a-case.rst.txt
@@ -0,0 +1,616 @@
+.. _running-a-case:
+
+***************
+Running a Case
+***************
+
+.. _case-submit:
+
+========================
+Calling **case.submit**
+========================
+
+The script `case.submit <../Tools_user/case.submit.html>`_ will submit your run to the batch queueing system on your machine.
+If you do not have a batch queueing system, `case.submit <../Tools_user/case.submit.html>`_ will start the job interactively, given that you have a proper MPI environment defined. +Running `case.submit <../Tools_user/case.submit.html>`_ is the **ONLY** way you should start a job. + +To see the options to `case.submit <../Tools_user/case.submit.html>`_, issue the command +:: + + > ./case.submit --help + +A good way to see what `case.submit <../Tools_user/case.submit.html>`_ will do, is to first call `preview_run <../Tools_user/preview_run.html>`_ +:: + + > ./preview_run + +which will output the environment for your run along with the batch submit and mpirun commands. +As an example, on the NCAR machine, cheyenne, for an A compset at the f19_g17_rx1 resolution, the following is output from `preview_run <../Tools_user/preview_run.html>`_: +:: + + CASE INFO: + nodes: 1 + total tasks: 36 + tasks per node: 36 + thread count: 1 + + BATCH INFO: + FOR JOB: case.run + ENV: + module command is /glade/u/apps/ch/opt/lmod/7.5.3/lmod/lmod/libexec/lmod python purge + module command is /glade/u/apps/ch/opt/lmod/7.5.3/lmod/lmod/libexec/lmod python load ncarenv/1.2 intel/17.0.1 esmf_libs mkl esmf-7.0.0-defio-mpi-O mpt/2.16 netcdf-mpi/4.5.0 pnetcdf/1.9.0 ncarcompilers/0.4.1 + Setting Environment OMP_STACKSIZE=256M + Setting Environment TMPDIR=/glade/scratch/mvertens + Setting Environment MPI_TYPE_DEPTH=16 + SUBMIT CMD: + qsub -q regular -l walltime=12:00:00 -A P93300606 .case.run + + FOR JOB: case.st_archive + ENV: + module command is /glade/u/apps/ch/opt/lmod/7.5.3/lmod/lmod/libexec/lmod python purge + module command is /glade/u/apps/ch/opt/lmod/7.5.3/lmod/lmod/libexec/lmod python load ncarenv/1.2 intel/17.0.1 esmf_libs mkl esmf-7.0.0-defio-mpi-O mpt/2.16 netcdf-mpi/4.5.0 pnetcdf/1.9.0 ncarcompilers/0.4.1 + Setting Environment OMP_STACKSIZE=256M + Setting Environment TMPDIR=/glade/scratch/mvertens + Setting Environment MPI_TYPE_DEPTH=16 + Setting Environment 
TMPDIR=/glade/scratch/mvertens
+        Setting Environment MPI_USE_ARRAY=false
+      SUBMIT CMD:
+        qsub -q share -l walltime=0:20:00 -A P93300606 -W depend=afterok:0 case.st_archive
+
+    MPIRUN:
+      mpiexec_mpt -np 36 -p "%g:" omplace -tm open64 /glade/scratch/mvertens/jim/bld/cesm.exe >> cesm.log.$LID 2>&1
+
+Each of the above sections is defined in the various **$CASEROOT** xml files, and the associated variables can be modified using the
+`xmlchange <../Tools_user/xmlchange.html>`_ command (or, in the case of tasks and threads, with the `pelayout <../Tools_user/pelayout.html>`_ command).
+
+- The PE layout is set by the xml variables **NTASKS**, **NTHRDS** and **ROOTPE**. To see the exact settings for each component, issue the command
+
+  ::
+
+     ./xmlquery NTASKS,NTHRDS,ROOTPE
+
+  To change all of the **NTASKS** settings to, say, 30 and all of the **NTHRDS** to 4, you can call
+
+  ::
+
+     ./xmlchange NTASKS=30,NTHRDS=4
+
+  To change just the ATM **NTASKS** to 8, you can call
+
+  ::
+
+     ./xmlchange NTASKS_ATM=8
+
+- Submit parameters are set by the xml variables in the file **env_batch.xml**. This file is special in that certain xml variables can appear in more than one group.
+  NOTE: The groups are the list of jobs that are submittable for a case.
+  Normally, the minimum set of groups are **case.run** and **case.st_archive**.
+  We will illustrate how to change an xml variable in **env_batch.xml** using the xml variable ``JOB_WALLCLOCK_TIME``.
+
+  - To change ``JOB_WALLCLOCK_TIME`` for all groups to 2 hours for cheyenne, use
+
+    ::
+
+       ./xmlchange JOB_WALLCLOCK_TIME=02:00:00
+
+  - To change ``JOB_WALLCLOCK_TIME`` to 20 minutes for cheyenne for just **case.run**, use
+
+    ::
+
+       ./xmlchange JOB_WALLCLOCK_TIME=00:20:00 --subgroup case.run
+
+Before you submit the case using `case.submit <../Tools_user/case.submit.html>`_, make sure the batch queue variables are set correctly for your run.
+In particular, make sure that you have appropriate account numbers (``PROJECT``), time limits (``JOB_WALLCLOCK_TIME``), and queue (``JOB_QUEUE``).
+
+Also modify **$CASEROOT/env_run.xml** for your case using **xmlchange**.
+
+Once you have executed `case.setup <../Tools_user/case.setup.html>`_ and `case.build <../Tools_user/case.build.html>`_, call `case.submit <../Tools_user/case.submit.html>`_
+to submit the run to your machine's batch queue system.
+
+::
+
+   > cd $CASEROOT
+   > ./case.submit
+
+---------------------------------
+Result of running case.submit
+---------------------------------
+
+When called, the `case.submit <../Tools_user/case.submit.html>`_ script will:
+
+- Load the necessary environment.
+
+- Confirm that locked files are consistent with the current xml files.
+
+- Run `preview_namelist <../Tools_user/preview_namelist.html>`_, which in turn will run each component's **cime_config/buildnml** script.
+
+- Run :ref:`check_input_data` to verify that the required data are present.
+
+- Submit the job to the batch queue, which in turn will run the `case.run <../Tools_user/case.run.html>`_ script.
+
+Upon successful completion of the run, `case.run <../Tools_user/case.run.html>`_ will:
+
+- Put timing information in **$CASEROOT/timing**.
+  See :ref:`model timing data` for details.
+
+- Submit the short-term archiver script `case.st_archive <../Tools_user/case.st_archive.html>`_ to the batch queue if ``$DOUT_S`` is TRUE.
+ Short-term archiving will copy and move component history, log, diagnostic, and restart files from ``$RUNDIR`` to the short-term archive directory ``$DOUT_S_ROOT``. + +- Resubmit `case.run <../Tools_user/case.run.html>`_ if ``$RESUBMIT`` > 0. + + +--------------------------------- +Monitoring case job statuses +--------------------------------- + +The **$CASEROOT/CaseStatus** file contains a log of all the job states and `xmlchange <../Tools_user/xmlchange.html>`_ commands in chronological order. +Below is an example of status messages: +:: + + 2017-02-14 15:29:50: case.setup starting + --------------------------------------------------- + 2017-02-14 15:29:54: case.setup success + --------------------------------------------------- + 2017-02-14 15:30:58: xmlchange success ./xmlchange STOP_N=2,STOP_OPTION=nmonths + --------------------------------------------------- + 2017-02-14 15:31:26: xmlchange success ./xmlchange STOP_N=1 + --------------------------------------------------- + 2017-02-14 15:33:51: case.build starting + --------------------------------------------------- + 2017-02-14 15:53:34: case.build success + --------------------------------------------------- + 2017-02-14 16:02:35: case.run starting + --------------------------------------------------- + 2017-02-14 16:20:31: case.run success + --------------------------------------------------- + 2017-02-14 16:20:45: st_archive starting + --------------------------------------------------- + 2017-02-14 16:20:58: st_archive success + --------------------------------------------------- + +.. note:: + After a successful first run, set the **env_run.xml** variable ``$CONTINUE_RUN`` to ``TRUE`` before resubmitting or the job will not + progress. + + You may also need to modify the **env_run.xml** variables + ``$STOP_OPTION``, ``$STOP_N`` and/or ``$STOP_DATE`` as well as + ``$REST_OPTION``, ``$REST_N`` and/or ``$REST_DATE``, and ``$RESUBMIT`` + before resubmitting. 
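The **CaseStatus** format shown above is regular enough to scan mechanically when you are monitoring many cases. The following is an illustrative Python sketch (the function name and sample log are hypothetical, not part of CIME) that extracts the most recent event from such a log:

```python
# Hypothetical helper: report the most recent event recorded in a
# CaseStatus-style log (timestamp, colon, phase and status per line,
# as in the example above). Not part of CIME itself.
import re

def last_case_event(casestatus_text):
    """Return (timestamp, message) for the last status line, or None."""
    pattern = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}): (.+)$")
    last = None
    for line in casestatus_text.splitlines():
        m = pattern.match(line.strip())
        if m:
            last = (m.group(1), m.group(2))
    return last

log = """2017-02-14 16:02:35: case.run starting
---------------------------------------------------
2017-02-14 16:20:31: case.run success
"""
print(last_case_event(log))  # ('2017-02-14 16:20:31', 'case.run success')
```

A check like this can be run after each submission to decide, for example, whether a resubmit is safe.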
+
+See the :ref:`basic example` for a complete example of how to run a case.
+
+---------------------------------
+Troubleshooting a job that fails
+---------------------------------
+
+There are several places to look for information if a job fails.
+Start with the **STDOUT** and **STDERR** file(s) in **$CASEROOT**.
+If you don't find an obvious error message there, the
+**$RUNDIR/$model.log.$datestamp** files will probably give you a
+hint.
+
+First, check **cpl.log.$datestamp**, which will often tell you
+*when* the model failed. Then check the rest of the component log
+files. See :ref:`troubleshooting run-time problems` for more information.
+
+.. _input_data:
+
+====================================================
+Input data
+====================================================
+
+The **check_input_data** script determines if the required data files
+for your case exist on local disk in the appropriate subdirectory of
+``$DIN_LOC_ROOT``. It automatically downloads missing data required for your simulation.
+
+.. note:: It is recommended that users on a given system share a common ``$DIN_LOC_ROOT`` directory to avoid duplication on
+          disk of large amounts of input data. You may need to talk to your system administrator in order to set this up.
+
+The required input data sets needed for each component are found in the
+**$CASEROOT/Buildconf** directory. These files are generated by a call
+to **preview_namelists** and are in turn created by each component's
+**buildnml** script. For example, for compsets consisting only of data
+models (i.e. 
``A`` compsets), the following files are created:
+
+::
+
+   cpl.input_data_list
+   datm.input_data_list
+   dice.input_data_list
+   docn.input_data_list
+   drof.input_data_list
+
+You can independently verify the presence of the required data by
+using the following commands:
+
+::
+
+   > cd $CASEROOT
+   > ./check_input_data --help
+   > ./check_input_data
+
+If data sets are missing, obtain them from the input data server(s) via the commands:
+
+::
+
+   > cd $CASEROOT
+   > ./check_input_data --download
+
+``check_input_data`` is automatically called by the case control
+system when the case is built and submitted, so manual use of this
+script is optional.
+
+-----------------------------------
+Distributed Input Data Repositories
+-----------------------------------
+
+CIME has the ability to utilize multiple input data repositories, with
+potentially different protocols. The repositories are defined in the
+file **$CIMEROOT/config/$model/config_inputdata.xml**. The currently
+supported server protocols are: ``gridftp``, ``subversion``, ``ftp`` and
+``wget``. These protocols may not all be supported on your machine,
+depending on software configuration.
+
+.. note:: You now have the ability to create your own input data
+          repository and add it to **config_inputdata.xml**. This
+          will permit you to easily collaborate by sharing your
+          required inputdata with others.
+
+.. _controlling-start-stop-restart:
+
+====================================================
+Starting, Stopping and Restarting a Run
+====================================================
+
+The file **env_run.xml** contains variables that may be modified at
+initialization or any time during the course of a model run. Among
+other features, the variables comprise coupler namelist settings for
+the model stop time, restart frequency, coupler history frequency, and
+a flag to determine if the run should be flagged as a continuation run.
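Because the stop and resubmit settings are plain xml values, the bookkeeping for splitting a long run into batch submissions is simple arithmetic. A hedged sketch (the function and its names are illustrative, not CIME code; it assumes the resubmit count refers to submissions after the first, so the total run length is (resubmit + 1) * stop_n):

```python
# Illustrative arithmetic for splitting a long run into batch submissions.
# plan_run and its arguments are made up for this sketch; CIME itself only
# consumes the resulting STOP_N / RESUBMIT xml settings.
def plan_run(total_years, years_per_day, wallclock_days_per_job=1):
    """Return (stop_n, resubmit) so each job fits its wallclock window."""
    stop_n = max(1, int(years_per_day * wallclock_days_per_job))
    # Ceiling division gives the number of jobs; resubmit counts the
    # submissions *after* the first one.
    resubmit = -(-total_years // stop_n) - 1
    return stop_n, resubmit

# 150 model years at 5 model years/day: 5-year segments, 29 resubmits
# after the initial submission (30 jobs in total).
print(plan_run(150, 5))  # (5, 29)
```

The resulting values would then be applied with ``xmlchange`` (e.g. ``./xmlchange STOP_OPTION=nyears,STOP_N=5,RESUBMIT=29``).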
+
+At a minimum, you will need to set the variables ``$STOP_OPTION`` and
+``$STOP_N``. Other driver namelist settings then will have consistent and
+reasonable default values. The default settings guarantee that
+restart files are produced at the end of the model run.
+
+By default, the stop time settings are:
+
+::
+
+   STOP_OPTION = ndays
+   STOP_N = 5
+   STOP_DATE = -999
+
+The default settings are appropriate only for initial testing. Before
+starting a longer run, update the stop times based on the case
+throughput and batch queue limits. For example, if the model runs 5
+model years/day, set ``RESUBMIT=30``, ``STOP_OPTION=nyears``, and
+``STOP_N=5``. The model will then run in five-year increments and
+stop after 30 submissions.
+
+.. _run-type-init:
+
+---------------------------------------------------
+Run-type initialization
+---------------------------------------------------
+
+The case initialization type is set using the ``$RUN_TYPE`` variable in
+**env_run.xml**. A CIME run can be initialized in one of three ways:
+
+``startup``
+
+   In a startup run (the default), all components are initialized using
+   baseline states. These states are set independently by each component
+   and can include the use of restart files, initial files, external
+   observed data files, or internal initialization (that is, a "cold start").
+   In a startup run, the coupler sends the start date to the components
+   at initialization. In addition, the coupler does not need an input data file.
+   In a startup initialization, the ocean model does not start until the second
+   ocean coupling step.
+
+``branch``
+
+   In a branch run, all components are initialized using a consistent
+   set of restart files from a previous run (determined by the
+   ``$RUN_REFCASE`` and ``$RUN_REFDATE`` variables in **env_run.xml**).
+   The case name generally is changed for a branch run, but it
+   does not have to be. 
In a branch run, the ``$RUN_STARTDATE`` setting is + ignored because the model components obtain the start date from + their restart data sets. Therefore, the start date cannot be changed + for a branch run. This is the same mechanism that is used for + performing a restart run (where ``$CONTINUE_RUN`` is set to TRUE in + the **env_run.xml** file). Branch runs typically are used when + sensitivity or parameter studies are required, or when settings for + history file output streams need to be modified while still + maintaining bit-for-bit reproducibility. Under this scenario, the + new case is able to produce an exact bit-for-bit restart in the same + manner as a continuation run if no source code or component namelist + inputs are modified. All models use restart files to perform this + type of run. ``$RUN_REFCASE`` and ``$RUN_REFDATE`` are required for + branch runs. To set up a branch run, locate the restart tar file or + restart directory for ``$RUN_REFCASE`` and ``$RUN_REFDATE`` from a + previous run, then place those files in the ``$RUNDIR`` directory. + See :ref:`Starting from a reference case`. + +``hybrid`` + + A hybrid run is initialized like a startup but it uses + initialization data sets from a previous case. It is similar + to a branch run with relaxed restart constraints. + A hybrid run allows users to bring together + combinations of initial/restart files from a previous case + (specified by ``$RUN_REFCASE``) at a given model output date + (specified by ``$RUN_REFDATE``). Unlike a branch run, the starting + date of a hybrid run (specified by ``$RUN_STARTDATE``) can be + modified relative to the reference case. In a hybrid run, the model + does not continue in a bit-for-bit fashion with respect to the + reference case. The resulting climate, however, should be + continuous provided that no model source code or namelists are + changed in the hybrid run. 
In a hybrid initialization, the ocean + model does not start until the second ocean coupling step, and the + coupler does a "cold start" without a restart file. + +The variable ``$RUN_TYPE`` determines the initialization type. This +setting is only important for the initial production run when +the ``$CONTINUE_RUN`` variable is set to FALSE. After the initial +run, the ``$CONTINUE_RUN`` variable is set to TRUE, and the model +restarts exactly using input files in a case, date, and bit-for-bit +continuous fashion. + +The variable ``$RUN_STARTDATE`` is the start date (in yyyy-mm-dd format) +for either a startup run or a hybrid run. If the run is targeted to be +a hybrid or branch run, you must specify values for ``$RUN_REFCASE`` and +``$RUN_REFDATE``. + +.. _starting_from_a_refcase: + +---------------------------------------- +Starting from a reference case (REFCASE) +---------------------------------------- + +There are several xml variables that control how either a branch or a hybrid case can start up from another case. +The initial/restart files needed to start up a run from another case must be in ``$RUNDIR``. +The xml variable ``$GET_REFCASE`` is a flag that, if set, automatically prestages the refcase restart data. + +- If ``$GET_REFCASE`` is ``TRUE``, then the values set by ``$RUN_REFDIR``, ``$RUN_REFCASE``, ``$RUN_REFDATE`` and ``$RUN_TOD`` are + used to prestage the data by symbolic links to the appropriate path. + + The location of the necessary data to start up from another case is controlled by the xml variable ``$RUN_REFDIR``. + + - If ``$RUN_REFDIR`` is an absolute pathname, then it is expected that initial/restart files needed to start up a model run are in ``$RUN_REFDIR``. + + - If ``$RUN_REFDIR`` is a relative pathname, then it is expected that initial/restart files needed to start up a model run are in a path relative to ``$DIN_LOC_ROOT`` with the absolute pathname ``$DIN_LOC_ROOT/$RUN_REFDIR/$RUN_REFCASE/$RUN_REFDATE``. 
+ + - If ``$RUN_REFDIR`` is a relative pathname AND is not available in ``$DIN_LOC_ROOT`` then CIME will attempt to download the data from the input data repositories. + + +- If ``$GET_REFCASE`` is ``FALSE`` then the data is assumed to already exist in ``$RUNDIR``. + +.. _controlling-output-data: + +========================= +Controlling output data +========================= + +During a model run, each model component produces its own output +data sets in ``$RUNDIR`` consisting of history, initial, restart, diagnostics, output +log and rpointer files. Component history files and restart files are +in netCDF format. Restart files are used to either restart the same +model or to serve as initial conditions for other model cases. The +rpointer files are ascii text files that list the component history and +restart files that are required for restart. + +Archiving (referred to as short-term archiving here) is the phase of a model run when output data are +moved from ``$RUNDIR`` to a local disk area (short-term archiving). +It has no impact on the production run except to clean up disk space +in the ``$RUNDIR`` which can help manage user disk quotas. + +Several variables in **env_run.xml** control the behavior of +short-term archiving. This is an example of how to control the +data output flow with two variable settings: +:: + + DOUT_S = TRUE + DOUT_S_ROOT = /$SCRATCH/$user/$CASE/archive + + +The first setting above is the default, so short-term archiving is enabled. The second sets where to move files at the end of a successful run. + +Also: + +- All output data is initially written to ``$RUNDIR``. + +- Unless you explicitly turn off short-term archiving, files are + moved to ``$DOUT_S_ROOT`` at the end of a successful model run. + +- Users generally should turn off short-term archiving when developing new code. + +Standard output generated from each component is saved in ``$RUNDIR`` +in a *log file*. 
Each time the model is run, a single coordinated datestamp +is incorporated into the filename of each output log file. +The run script generates the datestamp in the form YYMMDD-hhmmss, indicating +the year, month, day, hour, minute and second that the run began +(ocn.log.040526-082714, for example). + +By default, each component also periodically writes history files +(usually monthly) in netCDF format and also writes netCDF or binary +restart files in the ``$RUNDIR`` directory. The history and log files +are controlled independently by each component. History output control +(for example, output fields and frequency) is set in each component's namelists. + +The raw history data does not lend itself well to easy time-series +analysis. For example, CAM writes one or more large netCDF history +file(s) at each requested output period. While this behavior is +optimal for model execution, it makes it difficult to analyze time +series of individual variables without having to access the entire +data volume. Thus, the raw data from major model integrations usually +is post-processed into more user-friendly configurations, such as +single files containing long time-series of each output fields, and +made available to the community. + +For CESM, refer to the `CESM2 Output Filename Conventions +`_ +for a description of output data filenames. + +.. _restarting-a-run: + +====================== +Restarting a run +====================== + +Active components (and some data components) write restart files +at intervals that are dictated by the driver via the setting of the +``$REST_OPTION`` and ``$REST_N`` variables in **env_run.xml**. Restart +files allow the model to stop and then start again with bit-for-bit +exact capability; the model output is exactly the same as if the model +had not stopped. The driver coordinates the writing of restart +files as well as the time evolution of the model. 
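The log-file datestamp described above can be decoded mechanically. This short Python sketch (the helper name is hypothetical, not part of CIME) recovers the run start time from a log file name:

```python
from datetime import datetime

def log_start_time(log_name):
    """Parse a CIME log name such as 'ocn.log.040526-082714',
    whose suffix is a YYMMDD-hhmmss datestamp."""
    stamp = log_name.split(".")[-1]          # e.g. '040526-082714'
    return datetime.strptime(stamp, "%y%m%d-%H%M%S")

print(log_start_time("ocn.log.040526-082714"))  # 2004-05-26 08:27:14
```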
+ +Runs that are initialized as branch or hybrid runs require +restart/initial files from previous model runs (as specified by the +variables ``$RUN_REFCASE`` and ``$RUN_REFDATE``). Pre-stage these files +to the case ``$RUNDIR`` (normally ``$EXEROOT/../run``) before the model +run starts. Normally this is done by copying the contents of the +relevant **$RUN_REFCASE/rest/$RUN_REFDATE.00000** directory. + +Whenever a component writes a restart file, it also writes a restart +pointer file in the format **rpointer.$component**. Upon a restart, each +component reads the pointer file to determine which file to read in +order to continue the run. These are examples of pointer files created +for a component set using fully active model components. +:: + + - rpointer.atm + - rpointer.drv + - rpointer.ice + - rpointer.lnd + - rpointer.rof + - rpointer.cism + - rpointer.ocn.ovf + - rpointer.ocn.restart + + +If short-term archiving is turned on, the model archives the +component restart data sets and pointer files into +**$DOUT_S_ROOT/rest/yyyy-mm-dd-sssss**, where yyyy-mm-dd-sssss is the +model date at the time of the restart. (See below for more details.) + +--------------------------------- +Backing up to a previous restart +--------------------------------- + +If a run encounters problems and crashes, you will normally have to +back up to a previous restart. If short-term archiving is enabled, +find the latest **$DOUT_S_ROOT/rest/yyyy-mm-dd-sssss/** directory +and copy its contents into your run directory (``$RUNDIR``). + +Make sure that the new restart pointer files overwrite older files +in ``$RUNDIR`` or the job may not restart in the correct place. You can +then continue the run using the new restarts. + +Occasionally, when a run has problems restarting, it is because the +pointer and restart files are out of sync. The pointer files +are text files that can be edited to match the correct dates +of the restart and history files. 
All of the restart files should +have the same date. + +============================ +Archiving model output data +============================ + +The output data flow from a successful run depends on whether or not +short-term archiving is enabled, as it is by default. + +------------- +No archiving +------------- + +If no short-term archiving is performed, model output data +remain in the run directory as specified by ``$RUNDIR``. + +--------------------- +Short-term archiving +--------------------- + +If short-term archiving is enabled, component output files are moved +to the short-term archiving area on local disk, as specified by +``$DOUT_S_ROOT``. The directory normally is **$EXEROOT/../../archive/$CASE** +and has the following directory structure: :: + + rest/yyyy-mm-dd-sssss/ + logs/ + atm/hist/ + cpl/hist/ + glc/hist/ + ice/hist/ + lnd/hist/ + ocn/hist/ + rof/hist/ + wav/hist/ + .... + +The **logs/** subdirectory contains component log files that were +created during the run. Log files are also copied to the short-term +archiving directory and therefore are available for long-term archiving. + +The **rest/** subdirectory contains a set of subdirectories, each of which contains +a *consistent* set of restart files, initial files and rpointer +files. Each subdirectory has a unique name corresponding to the model +year, month, day and seconds into the day when the files were created. +The contents of any restart directory can be used to create a branch run +or a hybrid run or to back up to a previous restart date. + +--------------------- +Long-term archiving +--------------------- + +Users may choose to follow their institution's preferred method for long-term +archiving of model output. Previous releases of CESM provided an external +long-term archiver tool that supported mass tape storage and HPSS systems. +However, with the industry migration away from tape archives, it is no longer +feasible for CIME to support all the possible archival schemes available. 
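Backing up to a previous restart, as described above, amounts to picking the newest **rest/yyyy-mm-dd-sssss** subdirectory; because the datestamps are zero-padded, lexicographic order matches chronological order. A minimal Python sketch (the helper name is hypothetical, not part of CIME):

```python
import os
import re

def latest_restart_dir(dout_s_root):
    """Return the newest rest/yyyy-mm-dd-sssss subdirectory of a
    short-term archive root; zero-padded stamps sort chronologically."""
    rest = os.path.join(dout_s_root, "rest")
    stamps = [d for d in os.listdir(rest)
              if re.fullmatch(r"\d{4}-\d{2}-\d{2}-\d{5}", d)]
    if not stamps:
        raise RuntimeError("no restart sets found in %s" % rest)
    return os.path.join(rest, max(stamps))
```

The contents of the returned directory would then be copied into ``$RUNDIR`` before resubmitting.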
+ +================================================ +Data Assimilation and other External Processing +================================================ + +CIME provides a capability to run a task on the compute nodes either +before or after the model run. CIME also provides a data assimilation +capability which will cycle the model and then a user-defined task for +a user-determined number of cycles. + + +------------------------- +Pre and Post run scripts +------------------------- + +The variables ``PRERUN_SCRIPT`` and ``POSTRUN_SCRIPT`` can each be used to name +a script which should be executed immediately prior to starting, or +immediately following completion of, the CESM executable within the batch +environment. The script is expected to be found in the case directory +and will receive one argument, which is the full path to that +directory. If the script is written in python and contains a +subroutine with the same name as the script, it will be called as a +subroutine rather than as an external shell script. + +------------------------- +Data Assimilation scripts +------------------------- + +The variables ``DATA_ASSIMILATION``, ``DATA_ASSIMILATION_SCRIPT``, and +``DATA_ASSIMILATION_CYCLES`` may also be used to externally control +model evolution. If ``DATA_ASSIMILATION`` is true, then after the model +completes, the ``DATA_ASSIMILATION_SCRIPT`` will be run, and the +model will be started again, ``DATA_ASSIMILATION_CYCLES`` times in total. The +script is expected to be found in the case directory and will receive +two arguments: the full path to that directory and the cycle number. +If the script is written in python and contains a subroutine with the +same name as the script, it will be called as a subroutine rather than +as an external shell script. + +A simple example pre-run script: 
+ +:: + + #!/usr/bin/env python3 + import sys + from CIME.case import Case + + def myprerun(caseroot): + with Case(caseroot) as case: + print ("rundir is ",case.get_value("RUNDIR")) + + if __name__ == "__main__": + caseroot = sys.argv[1] + myprerun(caseroot) diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/setting-up-a-case.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/setting-up-a-case.rst.txt new file mode 100644 index 00000000000..feff58aaf49 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/setting-up-a-case.rst.txt @@ -0,0 +1,61 @@ +.. _setting-up-a-case: + +********************************* +Setting up a Case +********************************* + +After creating a case, some aspects of the case are fixed (any variables in env_case.xml). Changing the pe-layout +(see :ref:`Changing Pes`) or some aspects of the batch system you may be using must be modified before running +**case.setup**. + +=================================== +Calling **case.setup** +=================================== + +After creating a case or changing aspects of a case, such as the pe-layout, call the `case.setup <../Tools_user/case.setup.html>`_ command from ``$CASEROOT``. +This creates the following additional files and directories in ``$CASEROOT``: + + ============================= =============================================================================================================================== + .case.run A (hidden) file with the commands that will be used to run the model (such as “mpirun”) and any batch directives needed. + The directive values are generated using the contents + of **env_mach_pes.xml**. Running `case.setup --clean <../Tools_user/case.setup.html>`_ will remove this file. + This file should not be edited directly and instead controlled through XML variables in **env_batch.xml**. It should also + *never* be run directly. 
+ + Macros.make File containing machine-specific makefile directives for your target platform/compiler. + This file is created if it does not already exist. + + The user can modify the file to change certain aspects of the build, such as compiler flags. + Running `case.setup --clean <../Tools_user/case.setup.html>`_ will not remove the file once it has been created. + However, if you remove or rename the Macros.make file, running `case.setup <../Tools_user/case.setup.html>`_ recreates it. + + user_nl_xxx[_NNNN] Files where all user modifications to component namelists are made. + + **xxx** is any one of the set of components targeted for the case. + For example, for a fully active CESM compset, **xxx** is cam, clm or rtm, and so on. + + NNNN goes from 0001 to the number of instances of that component. + (See :ref:`multiple instances`) + + For a case with 1 instance of each component (the default), NNNN will not appear + in the user_nl file names. + + A user_nl file of a given name is created only once. + + Calling `case.setup --clean <../Tools_user/case.setup.html>`_ will *not remove* any user_nl files. + + Changing the number of instances in the **env_mach_pes.xml** file will cause only + new user_nl files to be added to ``$CASEROOT``. + + CaseDocs/ Directory that contains all the component namelists for the run. + + This is for reference only and files in this directory SHOULD NOT BE EDITED since they will + be overwritten at build time and runtime. + + .env_mach_specific.* Files summarizing the **module load** commands and environment variables that are set when + the scripts in ``$CASEROOT`` are called. These files are not used by the case but can be + useful for debugging **module load** and environment settings. + + software_environment.txt This file records some aspects of the computing system on which the case is built, + such as the shell environment. 
+ ============================= =============================================================================================================================== diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/testing.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/testing.rst.txt new file mode 100644 index 00000000000..061c62e3152 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/testing.rst.txt @@ -0,0 +1,581 @@ +.. _testing: + +********** +Testing +********** + +`create_test <../Tools_user/create_test.html>`_ +is the tool we use to test both CIME and CIME-driven models. +It can be used as an easy way to run a single basic test or an entire suite of tests. +`create_test <../Tools_user/create_test.html>`_ runs a test suite in parallel for improved performance. +It is the driver behind the automated nightly testing of CIME-driven models. + +Running create_test is generally resource-intensive, so run it in a manner appropriate for your system, +e.g. using 'nice', batch queues, nohup, the ``--parallel-jobs`` option to create_test, etc. +It will create and submit additional jobs to the batch queue (if one is available). + +.. _individual: + +An individual test can be run as:: + + $CIMEROOT/scripts/create_test $test_name + +Multiple tests can be run similarly, by listing all of the test names on the command line:: + + $CIMEROOT/scripts/create_test $test_name $test_name2 + +or by putting the test names into a file, one name per line:: + + $CIMEROOT/scripts/create_test -f $file_of_test_names + +A pre-defined suite of tests can be run using the ``--xml`` options to create_test, +which harvest test names from testlist*.xml files. +As described in https://github.com/ESCOMP/ctsm/wiki/System-Testing-Guide, +to determine what pre-defined test suites are available and what tests they contain, +you can run query_testlists_. 
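To see what a harvested suite would contain before running it, you can point query_testlists_ and create_test at the same selection attributes. A sketch (the category and machine names below are illustrative placeholders, not values defined by this guide): ::

   > $CIMEROOT/scripts/query_testlists --xml-category prealpha --xml-machine mymachine
   > $CIMEROOT/scripts/create_test --xml-category prealpha --xml-machine mymachine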
+ +Test suites are retrieved in create_test via 3 selection attributes:: + + --xml-category your_category The test category. + --xml-machine your_machine The machine. + --xml-compiler your_compiler The compiler. + +| If none of these 3 are used, the default values are 'none'. +| If any of them are used, the default for the unused options is 'all'. +| Existing values of these attributes can be seen by running query_testlists_. + +The search for test names can be restricted to a single test list using:: + + --xml-testlist your_testlist + +Omitting this results in searching all testlists listed in:: + + cime/config/{cesm,e3sm}/config_files.xml + +================= +Testname syntax +================= +.. _`Test naming`: + +Tests must be named with the following forms, [ ]=optional:: + + TESTTYPE[_MODIFIERS].GRID.COMPSET[.MACHINE_COMPILER][.GROUP-TESTMODS] + +================= ===================================================================================== +NAME PART +================= ===================================================================================== +TESTTYPE_ the general type of test, e.g. SMS. Options are listed in the following table and config_tests.xml. +MODIFIERS_ These are changes to the default settings for the test. + See the following table and test_scheduler.py. +GRID The model grid (can be an alias). +COMPSET alias of the compset, or long name, if no ``--xml`` arguments are used. +MACHINE This is optional; if this value is not supplied, `create_test <../Tools_user/create_test.html>`_ + will probe the underlying machine. +COMPILER If this value is not supplied, use the default compiler for MACHINE. +GROUP-TESTMODS_ This is optional. This points to a directory with ``user_nl_xxx`` files or a ``shell_commands`` + that can be used to make namelist and ``XML`` modifications prior to running a test. + | + +================= ===================================================================================== + +.. 
_TESTTYPE: + +============ ===================================================================================== +TESTTYPE Description +============ ===================================================================================== + ERS Exact restart from startup (default 6 days + 5 days) + | Do an 11 day initial test - write a restart at day 6. (file suffix: base) + | Do a 5 day restart test, starting from restart at day 6. (file suffix: rest) + | Compare component history files '.base' and '.rest' at day 11. + | They should be identical. + + ERS2 Exact restart from startup (default 6 days + 5 days). + + | Do an 11 day initial test without making restarts. (file suffix: base) + | Do an 11 day restart test stopping at day 6 with a restart, + then resuming from restart at day 6. (file suffix: rest) + | Compare component history files ".base" and ".rest" at day 11. + + ERT Exact restart from startup, default 2 month + 1 month (ERS with info DBUG = 1). + + IRT Exact restart from startup, (default 4 days + 7 days) with restart from interim file. + + ERIO Exact restart from startup with different PIO methods, (default 6 days + 5 days). + + ERR Exact restart from startup with resubmit, (default 4 days + 3 days). + + ERRI Exact restart from startup with resubmit, (default 4 days + 3 days). Tests incomplete logs option for st_archive. + + ERI hybrid/branch/exact restart test, default (by default STOP_N is 22 days) + ref1case + Do an initial run for 3 days writing restarts at day 3. + ref1case is a clone of the main case. + Short term archiving is on. + ref2case (Suffix hybrid) + Do a hybrid run for default 19 days running with ref1 restarts from day 3, + and writing restarts at day 10. + ref2case is a clone of the main case. + Short term archiving is on. + case + Do a branch run, starting from restarts written in ref2case, + for 9 days and writing restarts at day 5. + Short term archiving is off. 
+ case (Suffix base) + Do a restart run from the branch run restarts for 4 days. + Compare component history files '.base' and '.hybrid' at day 19. + Short term archiving is off. + + ERP PES counts hybrid (OPENMP/MPI) restart bit for bit test from startup, (default 6 days + 5 days). + Initial PES set up out of the box + Do an 11 day initial test - write a restart at day 6. (file suffix base) + Half the number of tasks and threads for each component. + Do a 5 day restart test starting from restart at day 6. (file suffix rest) + Compare component history files '.base' and '.rest' at day 11. + This is just like an ERS test but the tasks/threading counts are modified on restart + + PEA Single PE bit for bit test (default 5 days) + Do an initial run on 1 PE with mpi library. (file suffix: base) + Do the same run on 1 PE with mpiserial library. (file suffix: mpiserial) + Compare base and mpiserial. + + PEM Modified PE counts for MPI(NTASKS) bit for bit test (default 5 days) + Do an initial run with default PE layout (file suffix: base) + Do another initial run with modified PE layout (NTASKS_XXX => NTASKS_XXX/2) (file suffix: modpes) + Compare base and modpes + + PET Modified threading OPENMP bit for bit test (default 5 days) + Do an initial run where all components are threaded by default. (file suffix: base) + Do another initial run with NTHRDS=1 for all components. (file suffix: single_thread) + Compare base and single_thread. + + PFS Performance test setup. History and restart output is turned off. (default 20 days) + + ICP CICE performance test. + + OCP POP performance test. (default 10 days) + + MCC Multi-driver validation vs single-driver (both multi-instance). (default 5 days) + + NCK Multi-instance validation vs single instance - sequential PE for instances (default length) + Do an initial run test with NINST 1. (file suffix: base) + Do an initial run test with NINST 2. (file suffix: multiinst for both _0001 and _0002) + Compare base and _0001 and _0002. 
+ + REP Reproducibility: Two identical runs are bit for bit. (default 5 days) + + SBN Smoke build-namelist test (just run preview_namelist and check_input_data). + + SMS Smoke startup test (default 5 days) + Do a 5 day initial test. (file suffix: base) + + SEQ Different sequencing bit for bit test. (default 10 days) + Do an initial run test with out-of-box PE-layout. (file suffix: base) + Do a second run where all root pes are at pe-0. (file suffix: seq) + Compare base and seq. + + DAE Data assimilation test, default 1 day, two DA cycles, no data modification. + + PRE Pause-resume test: by default a bit for bit test of pause-resume cycling. + Default 5 hours, five pause/resume cycles, no data modification. + | + +============ ===================================================================================== + +.. _MODIFIERS: + +============ ===================================================================================== +MODIFIERS Description +============ ===================================================================================== + _C# Set number of instances to # and use the multi driver (can't use with _N). + + _CG CALENDAR set to "GREGORIAN" + + _D XML variable DEBUG set to "TRUE" + + _I Marker to distinguish tests with same name - ignored. + + _Lo# Run length set by o (STOP_OPTION) and # (STOP_N). + | o = {"y":"nyears", "m":"nmonths", "d":"ndays", + | \ "h":"nhours", "s":"nseconds", "n":"nsteps"} + + _Mx Set MPI library to x. + + _N# Set number of instances to # and use a single driver (can't use with _C). + + _Px Set create_newcase's ``--pecount`` to x, which is usually N (tasks) or NxM (tasks x threads per task). + + _R For testing in PTS_MODE or Single Column Model (SCM) mode. + For PTS_MODE, compile with mpi-serial. + + _Vx Set driver to x. + | + +============ ===================================================================================== + +.. 
_GROUP-TESTMODS: + +============ ===================================================================================== +TESTMODS Description +============ ===================================================================================== +GROUP A subdirectory of testmods_dirs and the parent directory of various testmods. +`-` Replaces '/' in the path name where the testmods are found. +TESTMODS A subdirectory of GROUP containing files which set non-default values + of the set-up and run-time variables via namelists or xml_change commands. + See "Adding tests": CESM_. + Examples include + + | GROUP-TESTMODS = cam-outfrq9s points to + | $cesm/components/cam/cime_config/testdefs/testmods_dirs/cam/outfrq9s + | while allactive-defaultio points to + | $cesm/cime_config/testmods_dirs/allactive/defaultio + +============ ===================================================================================== + + + +Each test run by `create_test <../Tools_user/create_test.html>`_ includes the following mandatory steps: + +* CREATE_NEWCASE: create the case +* XML: xml changes to case based on test settings +* SETUP: set up the case (case.setup) +* SHAREDLIB_BUILD: build sharedlibs +* MODEL_BUILD: build the model (case.build) +* SUBMIT: submit the test (case.submit) +* RUN: run the test + +And the following optional phases: + +* NLCOMP: Compare case namelists against baselines +* THROUGHPUT: Compare throughput against baseline throughput +* MEMCOMP: Compare memory usage against baseline memory usage +* MEMLEAK: Check for memory leaks +* COMPARE: Used to track test-specific comparisons; for example, an ERS test would have a COMPARE_base_rest phase representing the check that the base result matched the restart result. +* GENERATE: Generate baseline results +* BASELINE: Compare results against baselines + +Each test may be in one of the following states: + +* PASS: The phase was executed successfully +* FAIL: We attempted to execute this phase, but it failed. 
If this phase is mandatory, no further progress will be made on this test. A detailed explanation of the failure should be in TestStatus.log. +* PEND: This phase will be run or is currently running but not complete + +The current state of a test is represented in the file $CASEROOT/TestStatus + +All output from the CIME infrastructure regarding this test will be put in the file $CASEROOT/TestStatus.log + +A cs.status.$testid script will be put in the test root. This script will allow you to see the +current status of all your tests. + +=================== +Query_testlists +=================== +.. _query_testlists: + +**$CIMEROOT/scripts/query_testlists** gathers descriptions of the tests and testlists available +for CESM, the components, and projects. + +The ``--xml-{compiler,machine,category,testlist}`` arguments can be used +as in create_test (above) to focus the search. +The 'category' descriptor of a test can be used to run a group of associated tests at the same time. +The available categories, with the tests they encompass, can be listed by:: + + ./query_testlists --define-testtypes + +The ``--show-options`` argument does the same, but displays the 'options' defined for the tests, +such as queue, walltime, etc.. + +============================ +Using **create_test** (E3SM) +============================ +.. _`Using create_test (E3SM)`: + + +Usage will differ slightly depending on if you're using E3SM or CESM. 
+ +The following examples illustrate common use cases. + +To run a test:: + + ./create_test SMS.f19_f19.A + +To run a test with a non-default compiler:: + + ./create_test SMS.f19_f19.A --compiler intel + +To run a test with baseline comparisons against baseline name 'master':: + + ./create_test SMS.f19_f19.A -c -b master + +To run a test and update baselines with baseline name 'master':: + + ./create_test SMS.f19_f19.A -g -b master + +To run a test with a non-default test-id:: + + ./create_test SMS.f19_f19.A -t my_test_id + +To run a test and use a non-default test-root for your case dir:: + + ./create_test SMS.f19_f19.A -r $test_root + +To run a test and put case, build, and run dirs all in the same root:: + + ./create_test SMS.f19_f19.A --output-root $output_root + +To run a test and force it to go into a certain batch queue:: + + ./create_test SMS.f19_f19.A -q myqueue + +To run a test and use a non-default project (can impact things like directory paths and the account used for the batch system):: + + ./create_test SMS.f19_f19.A -p myproj + +To run two tests:: + + ./create_test SMS.f19_f19.A SMS.f19_f19.B + +To run a test suite:: + + ./create_test e3sm_developer + +To run a test suite excluding a specific test:: + + ./create_test e3sm_developer ^SMS.f19_f19.A + +See ``create_test -h`` for the full list of options. + +Interpreting test output is pretty easy; consider this example:: + + % ./create_test SMS.f19_f19.A + + Creating test directory /home/jgfouca/e3sm/scratch/SMS.f19_f19.A.melvin_gnu.20170504_163152_31aahy + RUNNING TESTS: + SMS.f19_f19.A.melvin_gnu + Starting CREATE_NEWCASE for test SMS.f19_f19.A.melvin_gnu with 1 procs + Finished CREATE_NEWCASE for test SMS.f19_f19.A.melvin_gnu in 4.170537 seconds (PASS) + Starting XML for test SMS.f19_f19.A.melvin_gnu with 1 procs + Finished XML for test SMS.f19_f19.A.melvin_gnu in 0.735993 seconds (PASS) + Starting SETUP for test SMS.f19_f19.A.melvin_gnu with 1 procs + Finished SETUP for test SMS.f19_f19.A.melvin_gnu in 11.544286 
seconds (PASS) + Starting SHAREDLIB_BUILD for test SMS.f19_f19.A.melvin_gnu with 1 procs + Finished SHAREDLIB_BUILD for test SMS.f19_f19.A.melvin_gnu in 82.670667 seconds (PASS) + Starting MODEL_BUILD for test SMS.f19_f19.A.melvin_gnu with 4 procs + Finished MODEL_BUILD for test SMS.f19_f19.A.melvin_gnu in 18.613263 seconds (PASS) + Starting RUN for test SMS.f19_f19.A.melvin_gnu with 64 procs + Finished RUN for test SMS.f19_f19.A.melvin_gnu in 35.068546 seconds (PASS). [COMPLETED 1 of 1] + At test-scheduler close, state is: + PASS SMS.f19_f19.A.melvin_gnu RUN + Case dir: /home/jgfouca/e3sm/scratch/SMS.f19_f19.A.melvin_gnu.20170504_163152_31aahy + test-scheduler took 154.780044079 seconds + +You can see that `create_test <../Tools_user/create_test.html>`_ informs the user of the case directory and of the progress and duration +of the various test phases. + +========= +Baselines +========= +.. _`Baselines`: + +A big part of testing is managing your baselines (sometimes called gold results). We have provided tools to help the user do this without having to repeat full runs of test cases with `create_test <../Tools_user/create_test.html>`_. + +------------------- +Creating a baseline +------------------- +.. _`Creating a baseline`: + +A baseline can be generated by passing ``-g`` to `create_test <../Tools_user/create_test.html>`_. There are additional options to control generating baselines:: + + ./scripts/create_test -b master -g SMS.ne30_f19_g16_rx1.A + +-------------------- +Comparing a baseline +-------------------- +.. _`Comparing a baseline`: + +Comparing the output of a test to a baseline is achieved by passing ``-c`` to `create_test <../Tools_user/create_test.html>`_:: + + ./scripts/create_test -b master -c SMS.ne30_f19_g16_rx1.A + +------------------ +Managing baselines +------------------ +.. _`Managing baselines`: + +Once a baseline has been generated it can be managed using the `bless_test_results <../Tools_user/bless_test_results.html>`_ tool. 
The tool provides the ability to bless different features of the baseline. The currently supported features are namelist files, history files, and performance metrics. The performance metrics are separated into throughput and memory usage. + +The following command can be used to compare a test to a baseline and bless an update to the history file:: + + ./CIME/Tools/bless_test_results -b master --hist-only SMS.ne30_f19_g16_rx1.A + +The `compare_test_results <../Tools_user/compare_test_results.html>`_ tool can be used to quickly compare tests to baselines and report any diffs:: + + ./CIME/Tools/compare_test_results -b master SMS.ne30_f19_g16_rx1.A + +--------------------- +Performance baselines +--------------------- +.. _`Performance baselines`: + +By default performance baselines are generated by parsing the coupler log and comparing the throughput in SYPD (Simulated Years Per Day) and the memory usage high-water mark. + +This can be customized by creating a python module under ``$DRIVER_ROOT/cime_config/customize``. There are four hooks that can be used to customize the generation and comparison. + +- perf_get_throughput +- perf_get_memory +- perf_compare_throughput_baseline +- perf_compare_memory_baseline + +.. + TODO need to add api docs and link + +The following pseudocode is an example of this customization:: + + # $DRIVER/cime_config/customize/perf_baseline.py + + def perf_get_throughput(case): + """ + Parameters + ---------- + case : CIME.case.case.Case + Current case object. + + Returns + ------- + str + String storing the throughput value. + str + Mode used to open the baseline file for writing. + """ + current = analyze_throughput(...) + + return json.dumps(current), "w" + + def perf_get_memory(case): + """ + Parameters + ---------- + case : CIME.case.case.Case + Current case object. + + Returns + ------- + str + String storing the memory value. + str + Mode used to open the baseline file for writing. 
+ """ + current = analyze_memory(case) + + return json.dumps(current), "w" + + def perf_compare_throughput_baseline(case, baseline, tolerance): + """ + Parameters + ---------- + case : CIME.case.case.Case + Current case object. + baseline : str + Baseline throughput value. + tolerance : float + Allowed difference tolerance. + + Returns + ------- + bool + Whether throughput diff is below tolerance. + str + Comments about the results. + """ + current = analyze_throughput(case) + + baseline = json.loads(baseline) + + diff, comments = generate_diff(...) + + return diff, comments + + def perf_compare_memory_baseline(case, baseline, tolerance): + """ + Parameters + ---------- + case : CIME.case.case.Case + Current case object. + baseline : str + Baseline memory value. + tolerance : float + Allowed difference tolerance. + + Returns + ------- + bool + Whether memory diff is below tolerance. + str + Comments about the results. + """ + current = analyze_memory(case) + + baseline = json.loads(baseline) + + diff, comments = generate_diff(...) + + return diff, comments + +============= +Adding tests +============= +.. _`Adding tests`: + +E3SM + +Open the config/e3sm/tests.py file, you'll see a python dict at the top +of the file called _TESTS, find the test category you want to +change in this dict and add your testcase to the list. Note the +comment at the top of this file indicating that you add a test with +this format: test>.., and then there is a second +argument for mods. + +CESM + +.. _CESM: + +Select a compset to test. If you need to test a non-standard compset, +define an alias for it in the most appropriate config_compsets.xml in :: + + $cesm/components/$component/cime_config + $cesm/cime/src/drivers/mct/cime_config + $cesm/cime_config + +If you want to test non-default namelist or xml variable values for your chosen compset, +you might find them in a suitable existing testmods directory (see "branching", this section, for locations). 
+If not, then populate a new testmods directory with the needed files (see "contents", below). +Note: do not use '-' in the testmods directory name because it has a special meaning to create_test. +Testlists and testmods live in different paths for cime, drv, and components. +The relevant directory branching looks like +:: + + components/$component/cime_config/testdefs/ + testlist_$component.xml + testmods_dirs/$component/{TESTMODS1,TESTMODS2,...} + cime/src/drivers/mct/cime_config/testdefs/ + testlist_drv.xml + testmods_dirs/drv/{default,5steps,...} + cime_config/ + testlist_allactive.xml + testmods_dirs/allactive/{defaultio,...} + +The contents of each testmods directory can include +:: + + user_nl_$components namelist variable=value pairs + shell_commands xmlchange commands + user_mods a list of other GROUP-TESTMODS which should be imported + but at a lower precedence than the local testmods. + +If this test will only be run as a single test, you can now create a test name +and follow the individual_ test instructions for create_test. +If you want this test to be part of a suite, then it must be described in the relevant testlists_YYY.xml file. + +====================== +CIME Developer's guide +====================== +.. _`CIME Developer's guide`: + +The CIME Developer's guide can be found on the project's GitHub `wiki `_. diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/timers.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/timers.rst.txt new file mode 100644 index 00000000000..3614d4f4c87 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/timers.rst.txt @@ -0,0 +1,180 @@ +.. _timers: + +=================== +Timers and timing +=================== + +CIME includes a copy of the General Purpose Timing Library (GPTL) and timers are placed throughout the CIME driver. 
CIME-driven models typically +also have GPTL timers in their code and very detailed timing information can be obtained. + +.. _model-timing-data: + +Model timing data +------------------ + +Every model run produces three types of timing output that you can examine: + +1. **$CASEROOT/timing/$model_timing.$CASE.$datestamp** + + This is the most useful way to quickly determine timing summaries across components + The following describes the most important parts of this timing file: + + An example timing file of this type is: + + :: + + ---------------- TIMING PROFILE --------------------- + Case : b.e20.BHIST.f09_g17.20thC.297_02 + LID : 9459679.chadmin1.180517-114852 + Machine : cheyenne + Caseroot : /glade/p/cesmdata/cseg/runs/cesm2_0/b.e20.BHIST.f09_g17.20thC.297_02 + Timeroot : /glade/p/cesmdata/cseg/runs/cesm2_0/b.e20.BHIST.f09_g17.20thC.297_02/Tools + User : hannay + Curr Date : Thu May 17 12:42:27 2018 + grid : a%0.9x1.25_l%0.9x1.25_oi%gx1v7_r%r05_g%gland4_w%ww3a_m%gx1v7 + compset : HIST_CAM60_CLM50%BGC-CROP_CICE_POP2%ECO_MOSART_CISM2%NOEVOLVE_WW3_BGC%BDRD + run_type : hybrid, continue_run = FALSE (inittype = TRUE) + stop_option : nyears, stop_n = 1 + run_length : 365 days (364.958333333 for ocean) + + component comp_pes root_pe tasks x threads instances (stride) + --------- ------ ------- ------ ------ --------- ------ + cpl = cpl 3456 0 1152 x 3 1 (1 ) + atm = cam 3456 0 1152 x 3 1 (1 ) + lnd = clm 2592 0 864 x 3 1 (1 ) + ice = cice 864 864 288 x 3 1 (1 ) + ocn = pop 768 1152 256 x 3 1 (1 ) + rof = mosart 2592 0 864 x 3 1 (1 ) + glc = cism 3456 0 1152 x 3 1 (1 ) + wav = ww 96 1408 32 x 3 1 (1 ) + esp = sesp 1 0 1 x 1 1 (1 ) + + total pes active : 12960 + mpi tasks per node : 36 + pe count for cost estimate : 4320 + + Overall Metrics: + Model Cost: 3541.30 pe-hrs/simulated_year + Model Throughput: 29.28 simulated_years/day + + Init Time : 242.045 seconds + Run Time : 2951.082 seconds 8.085 seconds/day + Final Time : 0.008 seconds + + Actual Ocn Init Wait Time 
: 768.737 seconds + Estimated Ocn Init Run Time : 0.248 seconds + Estimated Run Time Correction : 0.000 seconds + (This correction has been applied to the ocean and total run times) + + Runs Time in total seconds, seconds/model-day, and model-years/wall-day + CPL Run Time represents time in CPL pes alone, not including time associated with data exchange with other components + + TOT Run Time: 2951.082 seconds 8.085 seconds/mday 29.28 myears/wday + CPL Run Time: 248.696 seconds 0.681 seconds/mday 347.41 myears/wday + ATM Run Time: 2097.788 seconds 5.747 seconds/mday 41.19 myears/wday + LND Run Time: 545.991 seconds 1.496 seconds/mday 158.24 myears/wday + ICE Run Time: 389.173 seconds 1.066 seconds/mday 222.01 myears/wday + OCN Run Time: 2169.399 seconds 5.944 seconds/mday 39.83 myears/wday + ROF Run Time: 42.241 seconds 0.116 seconds/mday 2045.41 myears/wday + GLC Run Time: 1.049 seconds 0.003 seconds/mday 82364.16 myears/wday + WAV Run Time: 517.414 seconds 1.418 seconds/mday 166.98 myears/wday + ESP Run Time: 0.000 seconds 0.000 seconds/mday 0.00 myears/wday + CPL COMM Time: 2464.660 seconds 6.752 seconds/mday 35.06 myears/wday + + ---------------- DRIVER TIMING FLOWCHART --------------------- + ............. + + + TIMING PROFILE is the first section in the timing output. It + summarizes general timing information for the run. The total run + time and cost are given in several metrics to facilitate analysis + and comparisons with other runs. These metrics include pe-hrs per + simulated year (cost), simulated years per wall day (throughput), + seconds, and seconds per model day. The total run time for each + component and the time for initialization of the model also are + provided. These times are the aggregate over the total run and do + not take into account any temporal or processor load imbalances. + + DRIVER TIMING FLOWCHART is the second section in the timing + output. 
It provides timing information for the driver in + sequential order and indicates which processors are involved in + the cost. Finally, the timings for the coupler are broken out at + the bottom of the timing output file. + + +2. **$CASEROOT/timing/$model_timing_stats.$date** + + Provides an overall detailed timing summary for each component, including the minimum and maximum of all the model timers. + +3. **cpl.log.$datestamp** + + Contains the run time for each model day during the run and is + output during the run. You can search for ``tStamp`` in the cpl.log + file to see the information, which is useful for tracking down + temporal variability in cost due to inherent model variability or + to hardware. The model daily cost generally is pretty constant + unless I/O is written intermittently, such as at the end of the + month. This file will appear either in **$RUNDIR** or in + **DOUT_S_ROOT/logs** for your run. + +The xml variable ``CHECK_TIMING``, if set to ``TRUE`` (the default), will produce the timing files in the **$CASEROOT/timing** directory. + + +Controlling timers +------------------ + +User customization of timers is done via the xml variables ``TIMER_LEVEL`` and ``TIMER_DETAIL``. + +* ``TIMER_LEVEL``: + + This is the maximum code stack depth of enabled timers. + +* ``TIMER_DETAIL``: + + This is an integer indicating the maximum detail level to profile. This + xml variable is used to set the namelist variable timing_detail_limit. + This namelist variable is used by perf_mod (in + $CIMEROOT/src/share/timing/perf_mod.F90) to turn timers off and on + depending on calls to the routine t_adj_detailf. If a statement such as + t_adj_detailf(+1) appears in the code, then the current timer + detail level is incremented by 1 and compared to the timing_detail_limit + obtained from the namelist. If the limit is exceeded then the timer + is turned off. + +Further control of timers is then done via modifications of the **prof_inparm namelists** in the file **drv_in**. 
This is done +via keyword-value settings in user_nl_cpl. As an example, if you want to set the namelist variable ``profile_barriers`` to ``.true.``, +add the following line in your **$CASEROOT/user_nl_cpl**: + +:: + + profile_barriers = .true. + + +Advice on setting your wallclock time +------------------------------------- + +When you look at the **$model_timing.$CASE.$datestamp** file for "Model Throughput", you will find output like this: + :: + + Overall Metrics: + Model Cost: 327.14 pe-hrs/simulated_year (scale= 0.50) + Model Throughput: 4.70 simulated_years/day + +The model throughput is the estimated number of model years that you +can run in a wallclock day. Based on this, you can maximize your queue +limit and change ``$STOP_OPTION`` and ``$STOP_N``. + +For example, say a model's throughput is 4.7 simulated_years/day, and +the maximum runtime limit on your machine is 12 hours. 4.7 model +years/24 hours * 12 hours = 2.35 years. On the massively parallel +computers, there is always some variability in how long it will take +a job to run. On some machines, you may need to leave as much as 20% +buffer time in your run to guarantee that jobs finish reliably before +the time limit. For that reason, set your model to run only one model +year/job. In this example, set your wallclock at 12 hours and invoke +`xmlchange <../Tools_user/xmlchange.html>`_ in ``CASEROOT`` as shown here: :: + + >./xmlchange STOP_OPTION=nyears + >./xmlchange STOP_N=1 + >./xmlchange REST_OPTION=nyears + >./xmlchange REST_N=1 diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/troubleshooting.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/troubleshooting.rst.txt new file mode 100644 index 00000000000..77ef823d9c0 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/troubleshooting.rst.txt @@ -0,0 +1,126 @@ +.. 
_troubleshooting: + +Troubleshooting +=============== + +Troubleshooting case creation +----------------------------- + +Generally, `create_newcase <../Tools_user/create_newcase.html>`_ errors are reported to the terminal and should provide some guidance about what caused them. + +If `create_newcase <../Tools_user/create_newcase.html>`_ fails on a relatively generic error, first check to make sure the command-line arguments match the interface's specification. See the help text to review usage. +:: + + > create_newcase --help + +Troubleshooting problems in cime scripts +---------------------------------------- + +If any of the python-based cime scripts are dying in a mysterious way, more information can be obtained by rerunning the script with the ``--debug`` option. + +Troubleshooting job submission +------------------------------- + +Most problems associated with submission or launch are site-specific. +The batch and run aspects of the `case.submit <../Tools_user/case.submit.html>`_ script are created by parsing the variables in **$CASEROOT/env_batch.xml** file. + +Take these steps to check for problems: + +1. Review the batch submission options in **$CASEROOT/env_batch.xml**. Confirm that they are consistent with the site-specific batch environment, and that the queue names, time limits, and hardware processor request make sense and are consistent with the case. + +2. Make sure that `case.submit <../Tools_user/case.submit.html>`_ uses the correct batch job tool for submitting the `case.submit <../Tools_user/case.submit.html>`_ script. Depending on the batch environment, it might be **bsub**, **qsub** or another command. Also confirm if a redirection "<" character is required. The information for how **case.submit** submits jobs appears at the end of the standard output stream. 
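Because the submit information appears at the end of the standard output stream, it is often quickest to inspect just the file's last lines. The sketch below is illustrative only (Python; the log file name is a placeholder, not a CIME convention):

```python
import os

def tail(path, n=5):
    """Return the last n lines of a text file."""
    with open(path) as handle:
        return handle.readlines()[-n:]

# Placeholder name for your job's captured stdout; case.submit reports
# the exact batch command it ran near the end of this stream.
log = "case.submit.out"
if os.path.exists(log):
    print("".join(tail(log)))
```

The same idea works with ``tail -n 5`` from the shell; the point is to check the submitted batch command against your site's expectations.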
+ +Troubleshooting runtime problems +--------------------------------- + +To see if a run completed successfully, check the last several lines of the **cpl.log** file for a string like ``SUCCESSFUL TERMINATION``. A successful job also usually copies the log files to the **$CASEROOT/logs** directory. + +Check these things first when a job fails: + +- Did the model time out? + +- Was a disk quota limit hit? + +- Did a machine go down? + +- Did a file system become full? + +If any of those things happened, take appropriate corrective action (see suggestions below) and resubmit the job. + +If it is not clear that any of the above caused a case to fail, there are several places to look for error messages. + +- Check component log files in your run directory (``$RUNDIR``). + This directory is set in the **env_run.xml** file. + Each component writes its own log file, and there should be log files for every component in this format: **cpl.log.yymmdd-hhmmss**. + Check each log file for an error message, especially at or near the end. + +- Check for a standard out and/or standard error file in ``$CASEROOT``. + The standard out/err file often captures a significant amount of extra model output and also often contains significant system output when a job terminates. + Useful error messages sometimes are found well above the bottom of a large standard out/err file. Backtrack from the bottom in search of an error message. + +- Check for core files in your run directory and review them using an appropriate tool. + +- Check any automated email from the job about why a job failed. Some sites' batch schedulers send these. + +- Check the archive directory: **$DOUT_S_ROOT/$CASE**. If a case failed, the log files + or data may still have been archived. + +**Common errors** + +One common error is for a job to time out, which often produces minimal error messages. 
+Review the daily model date stamps in the **cpl.log** file and the timestamps of files in your run directory to deduce the start and stop time of a run. +If the model was running fine, but the wallclock limit was reached, either reduce the run length or increase the wallclock setting. + +If the model hangs and then times out, that usually indicates an MPI or file system problem or possibly a model problem. If you suspect an intermittent system problem, try resubmitting the job. Also send a help request to local site consultants to provide them with feedback about system problems and to get help. + +Another error that can cause a timeout is a slow or intermittently slow node. +The **cpl.log** file normally outputs the time used for every model simulation day. To review that data, grep the **cpl.log** file for the string ``tStamp`` as shown here: +:: + + > grep tStamp cpl.log.* | more + +The output looks like this: +:: + + tStamp_write: model date = 10120 0 wall clock = 2009-09-28 09:10:46 avg dt = 58.58 dt = 58.18 + tStamp_write: model date = 10121 0 wall clock = 2009-09-28 09:12:32 avg dt = 60.10 dt = 105.90 + + +Review the run times at the end of each line for each model day. +The "avg dt =" is the average time to simulate a model day and "dt = " is the time needed to simulate the latest model day. + +The model date is printed in YYYYMMDD format and the wallclock is the local date and time. +In the example, 10120 is Jan 20, 0001, and the model took 58 seconds to run that day. +The next day, Jan 21, took 105.90 seconds. + +A wide variation in the simulation time for typical mid-month model days suggests a system problem. However, there are variations in the cost of the model over time. +For instance, on the last day of every simulated month, the model typically writes netcdf files, which can be a significant intermittent cost. +Also, some model configurations read data mid-month or run physics intermittently at a timestep longer than one day. 
+In those cases, some variability is expected. The time variation typically is quite erratic and unpredictable if the problem is system performance variability. + +Sometimes when a job times out or overflows disk space, the restart files will get mangled. +With the exception of the CAM and CLM history files, all the restart files have consistent sizes. + +Compare the restart files against the sizes of a previous restart. If they don't match, remove them and move the previous restart into place before resubmitting the job. +See `Restarting a run `_. + +It is not uncommon for nodes to fail on HPC systems or for access to large file systems to hang. Before you file a bug report, make sure a case fails consistently in the same place. + +**Rerunning with additional debugging information** + +There are a few changes you can make to your case to get additional information that aids in debugging: + +- Increase the value of the run-time xml variable ``INFO_DBUG``: ``./xmlchange INFO_DBUG=2``. + This adds more information to the cpl.log file that can be useful if you can't tell what component is aborting the run, or where bad coupling fields are originating. + (This does NOT require rebuilding.) + +- Try rebuilding and rerunning with the build-time xml variable ``DEBUG`` set to ``TRUE``: ``./xmlchange DEBUG=TRUE``. + + - This adds various runtime checks that trap conditions such as out-of-bounds array indexing, divide by 0, and other floating point exceptions (the exact conditions checked depend on flags set in macros defined in the cmake_macros subdirectory of the caseroot). + + - The best way to do this is often to create a new case and run ``./xmlchange DEBUG=TRUE`` before running ``./case.build``. + However, if it is hard for you to recreate your case, then you can run that xmlchange command from your existing case; then you must run ``./case.build --clean-all`` before rerunning ``./case.build``. 
+ + - Note that the model will run significantly slower in this mode, so this may not be feasible if the model has to run a long time before producing the error. + (Sometimes it works well to run the model until shortly before the error in non-debug mode, have it write restart files, then restart after rebuilding in debug mode.) + Also note that answers will change slightly, so if the error arises from a rare condition, then it may not show up in this mode. diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/unit_testing.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/unit_testing.rst.txt new file mode 100644 index 00000000000..af8025a6f44 --- /dev/null +++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/unit_testing.rst.txt @@ -0,0 +1,421 @@ +.. _unit-testing: + +Fortran Unit Testing +==================== + +Introduction +------------ + +What is a unit test? +~~~~~~~~~~~~~~~~~~~~ + +A unit test is a fast, self-verifying test of a small piece of code. +A single unit test typically covers 10s to 100s of lines of code; a single function or small module, for example. +It typically runs in milliseconds and produces a simple pass/fail result. + +Unit tests: + +* Ensure that code remains correct as it is modified. In this respect, unit tests complement the CIME system tests. + +* Ensure that new code is correct. + +* Can help guide development, via test-driven development (TDD). + +* Provide executable documentation of the intended behavior of a piece of code. + +* Support development on your desktop machine. + +Overview of unit test support in CIME +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +CIME comes with a set of tools to support building and running unit tests. +These consist of: + +#. CMake tools to support building and running tests via CMake and CTest. + +#. A Python script that provides a simple front end for the CMake-based tests. 
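The fast, self-verifying pattern described above looks the same in any xUnit framework. As an illustration only, here is a minimal analogue in Python's unittest rather than pFUnit (``circle_area`` is a made-up stand-in for real model code; pFUnit's Fortran equivalent uses ``@Test`` and assertion directives such as ``@assertEqual``):

```python
import math
import unittest

def circle_area(radius):
    """Toy function under test (a stand-in for real model code)."""
    return math.pi * radius ** 2

class TestCircleArea(unittest.TestCase):
    # Each test exercises a single, well-defined condition and
    # verifies it automatically, producing a pass/fail result.
    def test_unit_radius(self):
        self.assertAlmostEqual(circle_area(1.0), math.pi)

    def test_zero_radius(self):
        self.assertEqual(circle_area(0.0), 0.0)
```

Running such a file with a test runner reports a pass/fail result per test in milliseconds, which is exactly the behavior the CMake/CTest tooling provides for the Fortran tests.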
+ +The Fortran unit tests use `pFUnit `_, which is a Fortran testing framework that follows conventions of other xUnit frameworks. CIME's support for pFUnit requires pFUnit version 4 or greater. + +.. _running_unit_tests: + +Running CIME's Fortran unit tests +--------------------------------- + +These instructions assume that you are using a machine that already has pFUnit installed, along with the necessary support in CIME. +If that is not the case, see :ref:`adding_machine_support`. + +From the top-level CIME directory, you can run all of CIME's Fortran unit tests by running: + +.. code-block:: shell + + > scripts/fortran_unit_testing/run_tests.py --build-dir MY_BUILD_DIR + +You can replace ``MY_BUILD_DIR`` with a path to the directory where you would like the unit test build files to be placed. +To ensure a completely clean build, use: + +.. code-block:: shell + + > scripts/fortran_unit_testing/run_tests.py --build-dir `mktemp -d ./unit_tests.XXXXXXXX` + +Once you have built the unit tests (whether the build was successful or not), you can reuse the same build directory later to speed up the rebuild. +There are a number of useful arguments to **run_tests.py**. For full usage information, run: + +.. code-block:: shell + + > scripts/fortran_unit_testing/run_tests.py --help + +If your build is successful, you will get a message like this: + +.. code-block:: none + + ================================================== + Running CTest tests for __command_line_test__/__command_line_test__. + ================================================== + +This will be followed by a list of tests, with a Pass/Fail message for each, like these examples: + +.. code-block:: none + + Test project /Users/sacks/cime/unit_tests.0XHUkfqL/__command_line_test__/__command_line_test__ + Start 1: avect_wrapper + 1/17 Test #1: avect_wrapper .................... Passed 0.02 sec + Start 2: seq_map + 2/17 Test #2: seq_map .......................... 
Passed 0.01 sec + Start 3: glc_elevclass + 3/17 Test #3: glc_elevclass .................... Passed 0.01 sec + +You will also see a final message like this: + +.. code-block:: none + + 100% tests passed, 0 tests failed out of 17 + +These unit tests are run automatically as part of **scripts_regression_tests** on machines that have a serial build of pFUnit available for the default compiler. + +.. _adding_machine_support: + +How to add unit testing support on your machine +----------------------------------------------- + +The following instructions assume that you have ported CIME to your +machine by following the instructions in +:doc:`/users_guide/porting-cime`. If you have done that, you can add +unit testing support by building pFUnit on your machine and then +pointing to the build in your ** *MACH*_*COMPILER*.cmake** file. Those +processes are described in the following sections. + +Building pFUnit +~~~~~~~~~~~~~~~ + +Follow the instructions below to build pFUnit using the default compiler on your machine. +That is the default for **run_tests.py** and that is required for **scripts_regression_tests.py** to run the unit tests on your machine. +For the CMake step, we typically build with ``-DSKIP_MPI=YES``, ``-DSKIP_OPENMP=YES`` and ``-DCMAKE_INSTALL_PREFIX`` set to the directory where you want pFUnit to be installed. +(At this time, no unit tests require parallel support, so we build without MPI support to keep things simple.) +Optionally, you can also provide pFUnit builds with other supported compilers on your machine. + +#. Obtain pFUnit from https://github.com/Goddard-Fortran-Ecosystem/pFUnit (see + https://github.com/Goddard-Fortran-Ecosystem/pFUnit#obtaining-pfunit for details) + +#. Create a directory for the build and cd to that directory: + + .. code-block:: shell + + > mkdir build-dir + > cd build-dir + +#. Set up your environment to be similar to the environment used in CIME system builds. + For example, load the appropriate compilers into your path. 
+ An easy way to achieve this is to run the following with an optional compiler argument: + + .. code-block:: shell + + > $CIMEROOT/CIME/scripts/configure --mpilib mpi-serial + + Then source either **./.env_mach_specific.sh** or **./.env_mach_specific.csh**, depending on your shell. + + On some systems, you may need to explicitly set the ``FC`` and ``CC`` environment + variables so that pFUnit's CMake build picks up the correct compilers, e.g., with: + + .. code-block:: shell + + > export FC=ifort + > export CC=icc + +#. For convenience, set the ``PFUNIT`` environment variable to point to the location where you want to install pFUnit. For example (in bash): + + .. code-block:: shell + + > export PFUNIT=$CESMDATAROOT/tools/pFUnit/pFUnit4.7.0_cheyenne_Intel19.1.1_noMPI_noOpenMP + +#. Configure and build pFUnit: + + .. code-block:: shell + + > cmake -DSKIP_MPI=YES -DSKIP_OPENMP=YES -DCMAKE_INSTALL_PREFIX=$PFUNIT .. + > make -j 8 + +#. Run pFUnit's self-tests: + + .. code-block:: shell + + > make tests + +#. Install pFUnit in the directory you specified earlier: + + .. code-block:: shell + + > make install + +You can repeat this process with different compiler environments. +Make sure to choose a different installation directory for each build by setting the ``PFUNIT`` variable differently. + +Adding to the appropriate cmake file +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +After you build pFUnit, tell CIME about your build or builds. +To do this, specify the appropriate path using the ``PFUNIT_PATH`` CMake variable in the ** *MACH*_*COMPILER*.cmake** file. +For a build with no MPI or openMP support (as recommended above), the block should look like this (with the actual path replaced with the PFUNIT path you specified when doing the build): + + .. 
code-block:: cmake + + if (MPILIB STREQUAL mpi-serial AND NOT compile_threaded) + set(PFUNIT_PATH "$ENV{CESMDATAROOT}/tools/pFUnit/pFUnit4.7.0_cheyenne_Intel19.1.1_noMPI_noOpenMP") + endif() + +Once you have specified the path for your build(s), you should be able to run the unit tests by following the instructions in :ref:`running_unit_tests`. + +How to write a new unit test +---------------------------- + +.. todo:: Need to write this section. This will draw on some of the information in sections 3 and 4 of https://github.com/NCAR/cesm_unit_test_tutorial (though without the clm and cam stuff). + +It should also introduce the role of .pf files, which are referenced several paragraphs later as if already explained. + +General guidelines for writing unit tests +----------------------------------------- + +Unit tests typically test a small piece of code, on the order of 10-100 lines, as in a single function or small class. + +Good unit tests are **"FIRST"**: +(https://pragprog.com/magazines/2012-01/unit-tests-are-first): + +* **Fast** (milliseconds or less). This means that, generally, they should not do any file I/O. Also, if you are testing a complex function, test it with a simple set of inputs rather than a 10,000-element array that will require a few seconds of runtime to process. + +* **Independent**. This means that test Y shouldn't depend on some global variable that test X created. Such dependencies cause problems if the tests run in a different order, if one test is dropped, and so on. + +* **Repeatable**. This means, for example, that you shouldn't generate random numbers in your tests. + +* **Self-verifying**. Don't write a test that writes out its answers for manual comparison. Tests should generate an automatic pass/fail result. + +* **Timely**. Write the tests *before* the production code (TDD) or immediately afterwards - not six months later when it's time to finally merge your changes onto the trunk and you have forgotten the details. 
Much of the benefit of unit tests comes from developing them concurrently with the production code.
+
+Good unit tests test a single, well-defined condition. This generally means that
+you make a single call to the function or subroutine that you're testing, with a
+single set of inputs. Usually you need to run multiple tests in order to test
+all of the unit's possible behaviors.
+
+Testing a single condition in each test makes pinpointing problems easier when a test fails.
+This also makes it easier to read and understand the tests, allowing them to serve as useful
+documentation of how the code should operate.
+
+A good unit test has four distinct pieces:
+
+#. **Setup**: For example, creating variables that will be needed for the routine you're testing. For simple tests, this piece may be empty.
+
+#. **Exercise**: Calling the routine you're testing.
+
+#. **Verify**: Calling assertion methods (next section) to ensure that the results match what you expected.
+
+#. **Teardown**: For example, deallocating variables. For simple tests, this piece may be empty. If it is needed, however, it is best done in the special tearDown routine discussed in `Defining a test class in order to define setUp and tearDown methods`_ and `More on test teardown`_.
+
+If you have many tests of the same subroutine, you may find quite a
+lot of duplication. It's good practice to extract major areas of duplication to their own
+subroutines in the **.pf** file, which your tests can call. This aids the understandability
+and maintainability of your tests. pFUnit knows which subroutines are tests and which are
+"helper" routines because of the ``@Test`` directives: you only add a ``@Test`` directive
+for your tests, not for your helper routines.
+
+More details on writing pFUnit-based unit tests
+-----------------------------------------------
+
+Assertion methods
+~~~~~~~~~~~~~~~~~
+
+pFUnit provides many assertion methods that you can use in the Verify step.
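+
+As a brief illustration, here is a sketch of a complete test showing the four pieces
+described above, with the Verify step performed by an assertion call. The module and
+function names (``circle_mod``, ``circle_area``) and the ``r8`` kind are hypothetical,
+not taken from CIME:
+
+.. code-block:: Fortran
+
+   @Test
+   subroutine test_circleArea_unitCircle()
+      use circle_mod, only: circle_area, r8
+
+      real(r8) :: area              ! Setup
+
+      area = circle_area(1.0_r8)    ! Exercise
+
+      ! Verify: a real-valued comparison, so a tolerance is supplied
+      @assertEqual(3.14159265_r8, area, tolerance=1.0e-6_r8)
+
+      ! Teardown: nothing to clean up in this simple test
+   end subroutine test_circleArea_unitCircle
+
+``@assertEqual`` is only one of the available assertion methods.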
+
+Here are some of the most useful:
+
+================================================= ===================================================================
+``@assertEqual(expected, actual)``                Ensures that expected == actual.
+
+``@assertLessThan(expected, actual)``             Ensures that expected < actual.
+
+``@assertGreaterThan(expected, actual)``          Ensures that expected > actual.
+
+``@assertLessThanOrEqual(expected, actual)``
+
+``@assertGreaterThanOrEqual(expected, actual)``
+
+``@assertTrue(condition)``                        It is better to use the two-valued assertions above, if possible.
+                                                  They provide more information if a test fails.
+
+``@assertFalse(condition)``
+
+``@assertIsFinite(value)``                        Ensures that the result is not NaN or infinity.
+
+``@assertIsNan(value)``                           This can be useful for failure checking - for example, when your
+                                                  function returns NaN to signal an error.
+================================================= ===================================================================
+
+Comparison assertions accept an optional ``tolerance`` argument, which gives the
+tolerance for real-valued comparisons.
+
+All of the assertion methods also accept an optional ``message`` argument, which prints
+a string if the assertion fails. If no message is provided, you will be pointed to the
+file and line number of the failed assertion.
+
+Defining a test class in order to define setUp and tearDown methods
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+As noted in the comments in **test_circle.pf**, defining a test class is optional.
+However, defining a minimal test class as shown here with ``TestCircle`` allows you to
+use some pFUnit features such as the setUp and tearDown methods.
+
+.. code-block:: none
+
+   @TestCase
+   type, extends(TestCase) :: TestCircle
+   contains
+      procedure :: setUp
+      procedure :: tearDown
+   end type TestCircle
+
+If you define this test class, you also need to:
+
+* Define *setUp* and *tearDown* subroutines. These can start out empty:
+
+  .. code-block:: Fortran
+
+     subroutine setUp(this)
+        class(TestCircle), intent(inout) :: this
+     end subroutine setUp
+
+     subroutine tearDown(this)
+        class(TestCircle), intent(inout) :: this
+     end subroutine tearDown
+
+* Add an argument of the test class type to each test subroutine. By convention, this argument is named ``this``.
+
+Code in the setUp method is executed before each test. This is convenient
+if you need to do some setup that is the same for every test.
+
+Code in the tearDown method is executed after each test. This is often used
+to deallocate memory. See `More on test teardown`_ for details.
+
+You can add any data or procedures to the test class. Adding data is
+particularly useful, as this can be a way for the setUp and tearDown methods to
+interact with your tests: the setUp method can fill a class variable with data,
+which your tests can then use (accessed via ``this%somedata``). Conversely, if
+you want the tearDown method to deallocate a variable, the variable cannot be local
+to your test subroutine. Instead, you make the variable a member of the class, so
+that the tearDown method can access it.
+
+Here is an example. Say you have this variable in your test class:
+
+.. code-block:: Fortran
+
+   real(r8), pointer :: somedata(:)
+
+The setUp method can create ``somedata`` if it needs to be the same
+for every test.
+
+Alternatively, it can be created in each test routine that needs it if it
+differs from test to test. (Some tests don't need it at all.) In that situation,
+create it like this:
+
+.. code-block:: Fortran
+
+   allocate(this%somedata(5))
+   this%somedata(:) = [1,2,3,4,5]
+
+Your tearDown method can then have code like this:
+
+.. code-block:: Fortran
+
+   if (associated(this%somedata)) then
+      deallocate(this%somedata)
+   end if
+
+More on test teardown
+~~~~~~~~~~~~~~~~~~~~~
+
+As stated in `Defining a test class in order to define setUp and tearDown methods`_,
+code in the tearDown method is executed after each test, often to do cleanup operations.
+
+Using the tearDown method is recommended because tests abort if an assertion fails.
+The tearDown method is still called, however, so teardown that needs to be done
+still gets done, regardless of pass/fail status. Teardown code might otherwise be
+skipped, which can lead other tests to fail or give unexpected results.
+
+All of the tests in a single test executable run one after another. For CIME, this
+means all of the tests that are defined in all **.pf** files in a single test directory.
+
+As a result, tests can interact with each other if you don't clean up after yourself.
+In the best case, you might get a memory leak. In the worst case, the pass/fail status of tests
+depends on which other tests have run previously, making your unit tests unrepeatable
+and unreliable.
+
+**To avoid this:**
+
+* Deallocate any pointers that your test allocates.
+* Reset any global variables to some known, initial state.
+* Do other, similar cleanup for resources that are shared by multiple tests.
+
+In Fortran 2003, allocatable variables are deallocated automatically when they go
+out of scope, but pointers are not. Explicitly deallocate any pointers that have
+been allocated, either in test setup or in the execution of the routine
+you are testing.
+
+You might need to move some variables from subroutine-local to the class. This is
+because the tearDown method can access class instance variables, but not subroutine-local
+variables.
+
+CIME makes extensive use of global variables that may be used directly or
+indirectly by a routine you are testing.
If your test has allocated or modified
+any global variables, it is important to reset them to their initial state in the
+teardown portion of the test.
+
+Finding more documentation and examples
+---------------------------------------
+
+More detailed examples in CIME
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+There are many examples of unit tests in CIME, some simple and some quite complex.
+You can find them by looking for files with the **.pf** extension:
+
+.. code-block:: shell
+
+   > find . -name '*.pf'
+
+You can also see examples of the unit test build scripts by viewing the
+**CMakeLists.txt** files throughout the source tree.
+
+Other pFUnit documentation sources
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Unfortunately, the documentation inside the pFUnit repository (in the documentation and Examples directories) is out of date (at least as of April 2023): much of it refers to version 3 of pFUnit, which differs in some ways from version 4. However, some working examples are provided in https://github.com/Goddard-Fortran-Ecosystem/pFUnit_demos.
+
+Documentation of the unit test build system
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The CMake build infrastructure is in **$CIMEROOT/CIME/non_py/src/CMake**.
+
+The infrastructure for building and running tests with **run_tests.py** is in
+**$CIMEROOT/scripts/fortran_unit_testing**. That directory also contains general
+documentation about how to use the CIME unit test infrastructure (in the
+**README** file) and examples (in the **Examples** directory).
diff --git a/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/workflows.rst.txt b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/workflows.rst.txt
new file mode 100644
index 00000000000..c23af1fc4af
--- /dev/null
+++ b/branch/azamat/baselines/update-perf-info/html/_sources/users_guide/workflows.rst.txt
@@ -0,0 +1,101 @@
+.. _workflows:
+
+*********
+Workflows
+*********
+
+Currently, there are three kinds of workflow controls available in CIME.
+
+1. Multiple jobs workflow
+
+   The file **$CIMEROOT/config/$model/machines/config_batch.xml** contains a section called ``<batch_jobs>``, which defines the submit script templates and the prerequisites for running them.
+   As an example, in cesm, the default ``<batch_jobs>`` section is the following, with an explanation given by the NOTE comments.
+
+   ::
+
+      <batch_jobs>
+         <!-- NOTE: jobs are submitted in the order listed here -->
+         <job name="case.run">
+            <template>template.case.run</template>
+            <!-- NOTE: case.run is submitted only when the case is not a test -->
+            <prereq>$BUILD_COMPLETE and not $TEST</prereq>
+         </job>
+         <job name="case.test">
+            <template>template.case.test</template>
+            <!-- NOTE: case.test is submitted only when the case is a test -->
+            <prereq>$BUILD_COMPLETE and $TEST</prereq>
+         </job>
+         <job name="case.st_archive">
+            <template>template.st_archive</template>
+            <task_count>1</task_count>
+            <walltime>0:20:00</walltime>
+            <!-- NOTE: case.st_archive runs after case.run or case.test completes,
+                 and only if short-term archiving is requested ($DOUT_S is TRUE) -->
+            <dependency>case.run or case.test</dependency>
+            <prereq>$DOUT_S</prereq>
+         </job>
+      </batch_jobs>
+
+   The elements that can be contained in the ``<batch_jobs>`` section are:
+
+   * ``<job>``: the name of the batch job.
+
+   Batch jobs can contain one or more jobs, and each job has the following elements:
+
+   *