Atomic Data
This is a description of how to get and use atomic data for Python; ksl's earlier work on this is on Confluence. The principal motivation for this page is to understand how to add macro atoms.
At present (2020), the atomic data used by Python is contained in the directory xdata, which is part of the main distribution. When Python is run, this directory is linked to the current working directory as the subdirectory data. Historically, the atomic data was contained in a separate repository called data, which held not only the atomic data but also a large number of stellar models for use with Python. Today that repository contains the models but not the atomic data.
Note that the information on this page is not entirely up to date, as we are transitioning to using ReadtheDocs for all Python documentation.
To obtain the atomic data, cd to $PYTHON/data and then clone the data branch:
git clone https://github.com/agnwinds/python.git -b data .
In due course, this will also be maintained in the structure branch, and there will be releases of atomic data sets.
- NIST: Claims to be, and probably is, the best, but the data are not organized as well as in other databases. Thus NIST requires more hand work.
- Chianti: contains everything one needs, except photoionization, but is not as complete as NIST.
- The files presented are all ASCII, and the columns are defined in the Chianti users' guide.
- The basic files for use in Python for each ion are the .elvlc file (the energy levels) and the .wgfa file (the lines, with oscillator strengths). There are also other files, especially one for collision strengths, which might be useful in future.
- Topbase: The general problem with Topbase is the accuracy of the wavelengths that are reported. (There is an associated TIPbase, but as far as ksl could determine this was not exceptionally helpful.)
- University of Strathclyde: This is a repository for computed recombination rate data. It is the source of much of the data in Chianti, and we use the partial recombination rates to the ground state from this site, which are not included in Chianti. It is also the source for Nigel Badnell's dielectronic recombination data set, which we currently use.
- AtomDB: Atomic data from the AtomDB team, a project of Randall Smith and Adam Foster, who we are in contact with. It is unclear as yet whether this data is an improvement on Topbase. They have extracted the data from XSTAR, so the XSTAR documentation is worth reading.
In December 2012, NSH was drowning somewhat in different data sets, which had been modified and added to over the previous two years in a rather haphazard way. The entries below are his attempt to make a historical record of what data was used at various stages, and to set a stake in the ground for what to use. JM edited the format somewhat in July 2013 and introduced a GitHub branch for data.
These files are 'self contained'; that is, the standardXX file points into the directory into which the data expands. The data branch of GitHub should be consulted, and this will also be maintained in the 'structure' branch.
The intention is that the directory goes into the 'data' directory, a link to which is created when you execute the script Setup_Py_Dir. The line in the .pf file would then look like the following (note that 'atomic' will still work, as the atomic symbolic link points to the data folder):
Atomic_data data/standardXX
- atomic_h20 - SS macro atom data, used for Sim et al. 2005. This is the macro atom atomic data for the 20-level hydrogen atom. Note: includes ONLY hydrogen. Format currently data/h20. Will be used as the base for future macro atom data sets; JM is testing data sets with hydrogen read in as a macro atom and the other elements treated as 'simple ions'.
- atomic39 - 121218 - Historical; used for LK02. This is the data that comes with the old full installation here: http://www-int.stsci.edu/~long/xpub/Python/
- atomic39_REG - 121218 - NSH. This is the version of standard39 that NSH used for his regression testing. The changes are: elem_ions_ver.py: for some reason (lost to me now) the number of elements included in this file was reduced from 20 to 14; basically all elements with an abundance less than 1e-6 that of hydrogen were commented out. Unfortunately (perhaps) this change stuck. recomb.data: in the original data, many lines were commented out. NSH uncommented all of them; since memory is not an issue, it seemed safer to have all the data read in, whether it is needed or not. I think I did this because the code was complaining at some point that recombination data was not available for some elements I turned on. This data is now (py74) only a stopgap, since the latest data allows the recombination to the ground state to be computed from basic data.
- atomic70 - 121218 - NSH. This is almost identical to standard39_REG, but with a file for dielectronic recombination included. It was used for regression testing for runs with _DR in the parameter file names. clist_K: this file was used for a reasonably short time (assume only py70 will work); it is for dielectronic recombination.
- atomic73 - 121218 - NSH - Used for Higginbottom et al. (2013). This is a set of data assembled after the work in 2012 to improve the agreement between Python and Cloudy. This was achieved by improving the calculation of recombination to the ground state, by including all the data needed to compute it from the raw data (recombination to all levels and recombination to ground). The Topbase data for hydrogen was expanded to include all available data, and Topbase data for iron was incorporated.
- elem_ions_ver.py - still only 14 elements included. The multiplicity of bare ions has also been corrected from 2 to 1.
- topbase_levels_h.py - more levels have been included from Topbase, and the levels_hhe file has been split into two.
- topbase_levels_fe.py - added.
- levels_ver_2.py - in 39, an error in the parse routine meant that many levels were missing. This is fixed.
- lines_linked_ver_2.py - as above.
- topbase_h1_phot.py - more data included.
- topbase_fe_phot.py - added.
- chianti_dr.dat - dielectronic recombination data included from Chianti.
- chianti_rr.dat - radiative recombination rates from Chianti.
- Badnell_GS_RR.txt - ground-state recombination rate data from Nigel Badnell's site at the University of Strathclyde.
- gffint.dat - free-free Gaunt factors from Sutherland (1997).
- atomic_77 - 130731 - JM. This is a data set which is identical except that the Topbase photoionization cross sections are extrapolated up to 100 keV using the extrapolate_log.py script, and any which look odd (by eye!) are fudged to a gradient of -3. This is done by taking the gradient in log-log space at the maximum energy in Topbase and carrying out an extrapolation over logarithmically spaced energies, as sketched below. This can probably be improved, and has not been tested in any detail other than a standard regression test and inspection of each plotted cross section.
Basically to add MacroAtoms one needs to do the following:
- Modify the ion information to allow for treating a particular ion as a MacroAtom, basically setting aside storage for the MacroAtom
- Add level information in a special format
- Add line information in a special format
- Add photoionization data in a special format
- The crucial point in all of this is to ensure that one has the correct linkages between lines, levels, and photoionization.
At present, an atom needs to be treated as either a simple atom or a MacroAtom. One cannot mix simple ions and MacroIons of the same atom. This is a limitation of the code and, if necessary, could be changed.
Ions
Normal Ion records look like this
IonV H 1 1 2 13.59900 1000 5 1s(2S_{1/2})
IonV H 1 2 2 1.0000e+20 0 0 Bare
Ion Macro records look like this
IonM H 1 1 2 13.59900 20 20 1s(2S_{1/2})
IonM H 1 2 1 1.0000e+20 1 1 Bare
There is no obvious difference between the IonM records and the IonV records. Both set aside room for a maximum number of levels, and then a number of levels to be treated differently. In practice, within the code an ion either is or is not treated as a MacroIon; one cannot mix the two within a single run. The extra levels in the IonV records were probably the result of an abortive initial attempt at detailed balance that predated the MacroAtom approach.
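For illustration, a record like this can be unpacked with a few lines of Python. The field names below are only a guess based on the description above (keyword, element symbol, atomic number, ionization stage, ground-state multiplicity, ionization potential in eV, maximum number of levels, number of levels treated in detail, configuration); the code's own reader remains the authoritative definition.

```python
from collections import namedtuple

# Field names are assumptions based on the description above, not the code's own names.
IonRecord = namedtuple(
    "IonRecord",
    "keyword symbol z istate g ip_ev max_levels n_levels config")

def parse_ion_line(line):
    """Unpack one IonV/IonM record (illustrative sketch only)."""
    w = line.split()
    return IonRecord(w[0], w[1], int(w[2]), int(w[3]), int(w[4]),
                     float(w[5]), int(w[6]), int(w[7]), " ".join(w[8:]))

print(parse_ion_line("IonM H 1 1 2 13.59900 20 20 1s(2S_{1/2})"))
```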
Topbase level records, used for simple atoms, look like this
# Maximum excitation above ground (in Ryd) for inclusion 4.000000
# Minimum excitation below continuum (in Ryd) for inclusion -0.100000
# Topbase levels: Order changed to move config to end of line
#LevTop z ion iSLP iLV iCONF E(eV) Te(eV) gi RL(s) eqn RL(s)
# ======================================================================================
# i NZ NE iSLP iLV iCONF E(RYD) TE(RYD) gi EQN RL(NS)
# ======================================================================================
LevTop 1 1 200 1 -13.605698 0.000000 2 1.0000 1.00e+21 () 1s
LevTop 1 1 200 2 -3.401425 10.204273 2 2.0000 1.00e+21 () 2s
LevTop 1 1 211 1 -3.401425 10.204273 6 2.0000 1.60e-09 () 2p
whereas for Macro Atoms we have
# z ion lvl ion_pot ex_energy g rad_rate
LevMacro 1 1 1 -13.59843 0.000000 2 1.00e+21 () n=1
LevMacro 1 1 2 -3.39960 10.19883 8 1.00e-09 () n=2
LevMacro 1 1 3 -1.51093 12.08750 18 1.00e-09 () n=3
The columns are exactly the same in both cases. Each level is described by an element number, an ion number, and a level number. However, there are some specific differences. In particular, for LevTop levels the excitation energy needs to be on an absolute scale between ions, and so it includes the ionization energy of the lower ionization states. Note that the radiative rates are not used. The original intention was to use this to define the difference between metastable and normal levels, with the expectation that if a level was metastable it would be put in Boltzmann equilibrium with the ground state. Right now Python uses 10**15 seconds, essentially a Hubble time, to do this, but this portion of the code is, according to SS, not tested. The primary source for this data is usually the NIST database, although similar information is usually available in Chianti. One normally wants text output, with levels described in eV, and then one needs to put things in energy order. Since NIST quotes J, one converts to g = 2J+1.
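As a minimal sketch of that bookkeeping, the snippet below takes hypothetical NIST-style levels (label, J, excitation energy in eV), sorts them by energy, converts J to g = 2J+1, and writes LevMacro lines in the column order shown above. The input values are purely illustrative; note that in the h20 data shown above the hydrogen n-levels have been merged, so their weights are 2n^2 rather than individual 2J+1 values.

```python
# Hypothetical NIST-style input: (label, J, excitation energy in eV)
nist_levels = [("2p", 1.5, 10.19885), ("1s", 0.5, 0.0), ("2s", 0.5, 10.19881)]
ip_ev = 13.59843      # ionization potential of H I in eV
z, ion = 1, 1

# levels must be numbered in order of increasing excitation energy
nist_levels.sort(key=lambda lev: lev[2])

for lvl, (label, j, ex) in enumerate(nist_levels, start=1):
    g = int(2 * j + 1)              # statistical weight from J
    ion_pot = ex - ip_ev            # (negative) energy relative to the continuum
    rad_rate = 1.00e+21 if lvl == 1 else 1.00e-09   # nominal rates, as in the example above
    # columns: z ion lvl ion_pot ex_energy g rad_rate () label
    print(f"LevMacro {z} {ion} {lvl} {ion_pot:10.5f} {ex:9.6f} {g} {rad_rate:.2e} () {label}")
```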
JM also used Topbase for the Helium data for his CV models.
For lines, we did not create a specific Topbase format, but most of the recent data sets use a format that is similar to what is needed for macro atoms.
Line 1 1 926.226013 0.003184 2 4 0.000000 13.387685 0 9
Line 1 1 930.747986 0.004819 2 4 0.000000 13.322634 0 8
Line 1 1 937.802979 0.007798 2 4 0.000000 13.222406 0 7
Line 1 1 949.742981 0.013931 2 4 0.000000 13.056183 0 6
and for MacroAtoms
# z = element, ion= ionstage, f = osc. str., gl(gu) = stat. we. lower(upper) level
# el(eu) = energy lower(upper) level (eV), ll(lu) = lvl index lower(upper) level
# z ion lambda f gl gu el eu ll lu
LinMacro 1 1 1215.33907 0.41620 2 8 0.00000 10.19883 1 2
LinMacro 1 1 1025.44253 0.07910 2 18 0.00000 12.08750 1 3
LinMacro 1 1 972.27104 0.02899 2 32 0.00000 12.74854 1 4
For LinMacro the columns are an identifier, the element z, the ion number, the wavelength of the line in Angstroms, the absorption oscillator strength, the lower and upper level multiplicities, the energies of the lower and upper levels, and the indices of the lower and upper levels. The ultimate source for this information is usually NIST (http://physics.nist.gov/PhysRefData/ASD/index.html). The main issue with all of this is that one needs to index everything self-consistently; a sketch of a simple cross-check is given below.
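Because the level indices in the LinMacro records must match the lvl column of the LevMacro records, it can be worth checking them against each other. The sketch below assumes, purely for illustration, that both kinds of record sit in a single file; the filename and the parsing are hypothetical.

```python
def check_line_level_links(filename="h20_levels_lines.dat"):   # hypothetical filename
    """Return any (z, ion, lvl) referenced by a LinMacro record but never
    defined by a LevMacro record (an empty list means the links are consistent)."""
    with open(filename) as f:
        records = [line.split() for line in f if not line.startswith("#")]
    levels = {(int(w[1]), int(w[2]), int(w[3]))
              for w in records if w and w[0] == "LevMacro"}
    problems = []
    for w in records:
        if w and w[0] == "LinMacro":
            z, ion, ll, lu = int(w[1]), int(w[2]), int(w[9]), int(w[10])
            for lvl in (ll, lu):
                if (z, ion, lvl) not in levels:
                    problems.append((z, ion, lvl))
    return problems
```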
The Topbase photoionization records look like this
PhotTopS 1 1 200 1 13.605698 50
PhotTop 13.605698 6.304e-18
PhotTop 16.627193 3.679e-18
whereas the Macro Atom records look like
PhotMacS 1 1 1 1 13.598430 100
PhotMac 13.598430 6.3039999e-18
PhotMac 13.942675 5.8969998e-18
The meaning of the columns is the same in both cases. It may simply be a historical accident that we have both formats, or perhaps we were worried that we would need to change one of them. Topbase is generally the source for this information.
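As an illustration of the block structure, the sketch below reads one PhotMacS block. It assumes (and this is only an assumption) that the last number on the header line gives the count of PhotMac points that follow; the intermediate header columns are left uninterpreted.

```python
def read_phot_block(lines):
    """Read one macro-atom photoionization block from an iterator over file lines
    positioned at a PhotMacS header (a sketch, not the code's own reader)."""
    header = next(lines).split()
    assert header[0] == "PhotMacS"
    threshold_ev = float(header[5])   # threshold energy in eV
    npoints = int(header[6])          # assumed: number of PhotMac points that follow
    energies, sigmas = [], []
    for _ in range(npoints):
        w = next(lines).split()       # "PhotMac  energy(eV)  cross-section(cm^2)"
        energies.append(float(w[1]))
        sigmas.append(float(w[2]))
    return threshold_ev, energies, sigmas

# Example, truncated to the two points shown above:
sample = iter([
    "PhotMacS 1 1 1 1 13.598430 2",
    "PhotMac 13.598430 6.3039999e-18",
    "PhotMac 13.942675 5.8969998e-18",
])
print(read_phot_block(sample))
```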