This work is licensed under CC BY 4.0 
All scripts run on Python 3.7 or newer. The following packages are required:

- `numpy`
- `scipy`
- `mosek` (+ license)
- `sympy`, for the `multinomial_coefficients_iterator` function
- `psutil`, for the exemplary command line programs; only used to get the physical CPU count
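As an illustration of what the last two dependencies provide, here is a short sketch using their public APIs (the specific calls below are examples, not code taken from this repository):

```python
from sympy.ntheory.multinomial import multinomial_coefficients_iterator
import psutil

# Coefficients of (x1 + x2 + x3)**2, keyed by the exponent tuple (k1, k2, k3)
coeffs = dict(multinomial_coefficients_iterator(3, 2))
print(coeffs[(1, 1, 0)])  # multinomial coefficient 2!/(1! * 1! * 0!) = 2

# Physical (not logical) CPU count, as used for the default worker number;
# may be None on platforms where it cannot be determined
print(psutil.cpu_count(logical=False))
```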
The Optimizer...py files define a class `Optimizer` that is responsible for carrying out the optimization. The filename indicates the problem the file corresponds to; this is detailed in the file header.
We additionally provide three command line applications:
- `Main.py` runs the optimizations for given parameters `d`, `s`, and `r`. Various configuration options are available; see the help. The output is stored in a subfolder named `<d> <s> <r>` according to the parameters; the files are named `<pdist> <f> <type>.dat`, where `<pdist>` is the distillation success probability and `<f>` the optimized fidelity. If `<type>` is `choi`, the file contains the vectorized upper triangle of the Choi state of the distillation map (in the Dicke basis). If `<type>` is `rho`, it contains the vectorized upper triangle of the input density matrix (in the Dicke basis). If the `--erasure` parameter is specified, the optimization is done in the full computational basis and `rho` is replaced by `psi`; the file then contains the full input state vector. Additionally, the subfolder name is suffixed with `erasure`.
- `MainAllR.py` runs the optimizations for given parameters `d`, `s`, and `ptrans`. Various configuration options are available; see the help. This program uses the data created by `Main.py` (without the `--erasure` option) as initial points and can therefore only be run afterwards. It explores the possibility of using different maps for various `r` values. The output is stored in a subfolder named `<ptrans> <d> <s>`; the files are named `<ptot> <f> <type>.dat`, where `<ptot>` now is the total success probability.
- `MainStepwise.py` runs the optimization using the iterative basis selection approach introduced in (Thesis link yet to come). Arguments are `d`, `s`, and `r` as before, as well as `pdist`. The search starts with two kets; this choice can be overridden by specifying `--min=<larger number>`.
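The `choi` and `rho` files store only the vectorized upper triangle of a Hermitian matrix. The exact vectorization order is not documented here; assuming row-major ordering of the upper triangle, a reconstruction helper could look like the following sketch (the function name and the ordering convention are assumptions, not part of the repository):

```python
import numpy as np

def matrix_from_upper_triangle(vec, n):
    """Rebuild a Hermitian n x n matrix from its vectorized upper triangle.

    Assumes row-major ordering, i.e.
    vec = [M[0,0], M[0,1], ..., M[0,n-1], M[1,1], ...].
    """
    M = np.zeros((n, n), dtype=complex)
    rows, cols = np.triu_indices(n)
    M[rows, cols] = vec
    # fill the lower triangle by Hermitian conjugation
    M += np.conj(np.triu(M, k=1)).T
    return M

# example: a 2 x 2 matrix from its three upper-triangle entries
print(matrix_from_upper_triangle(np.array([1.0, 2.0 + 1.0j, 3.0]), 2))
```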
All applications are parallelized and by default use either the environment variable `SLURM_NTASKS` (if the SLURM manager is used) or the number of physical cores available. This can be configured using the `--workers` option.
The data that underlies the paper can be generated by appropriate calls of the Main.py program.
Note that for qubits and low dimensions, this can easily be done on a personal computer, but qudits
and larger values of s and r require substantial memory and time resources.
The actual commands may therefore depend on the scheduler.
For example, with SLURM, a shell script Main.sh may be created that looks as follows:
```sh
#!/bin/sh
# potentially set up environment
python Main.py $1 ${SLURM_ARRAY_TASK_ID} $2
```

Then, a call such as

```sh
sbatch --ntasks=<multithreading level> --array=2-10 --time=<time constraint> Main.sh 2 1
```

will queue jobs that create the folders for every configuration `2 2 1` through `2 10 1` with the corresponding numerical data.
Note that the main application will automatically determine the number of threads from the
appropriate SLURM environment variable.
If a different scheduler is used, this value must be passed via the optional parameter `--workers=<multithreading level>`.
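An invocation with an explicit worker count might then look as follows (the parameter values `2 3 1` and the worker count are purely illustrative):

```shell
# run Main.py for d=2, s=3, r=1 with 8 worker processes (example values)
python Main.py 2 3 1 --workers=8
```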
The Julia file `qeccPolynomial.jl` contains code to construct the polynomial optimization problems associated with this work. Usage instructions are at the bottom of the file.