WIP: Xarray DataObjects #183

Closed: wants to merge 35 commits.

Commits:
c186c3b
Merge branch 'devel' of github.com:idaholab/raven into devel
PaulTalbot-INL Apr 6, 2017
d55e7ba
Merge branch 'devel' of github.com:idaholab/raven into devel
PaulTalbot-INL Apr 17, 2017
48b00ff
Merge branch 'devel' of github.com:idaholab/raven into devel
PaulTalbot-INL Apr 18, 2017
80392fa
Merge branch 'devel' of github.com:idaholab/raven into devel
PaulTalbot-INL Apr 20, 2017
cb7910c
Merge branch 'devel' of github.com:idaholab/raven into devel
PaulTalbot-INL Apr 24, 2017
580029f
Merge branch 'devel' of github.com:idaholab/raven into devel
PaulTalbot-INL Apr 24, 2017
44d397f
Merge branch 'devel' of github.com:idaholab/raven into devel
PaulTalbot-INL Apr 26, 2017
eee2060
Merge branch 'devel' of github.com:idaholab/raven into devel
PaulTalbot-INL Apr 27, 2017
c1e615d
Merge branch 'devel' of github.com:idaholab/raven into devel
PaulTalbot-INL May 3, 2017
e062ea2
Merge branch 'devel' of github.com:idaholab/raven into devel
PaulTalbot-INL May 8, 2017
a14e9ab
Merge branch 'devel' of github.com:idaholab/raven into devel
PaulTalbot-INL May 9, 2017
9784e2b
Merge branch 'devel' of github.com:idaholab/raven into devel
PaulTalbot-INL May 10, 2017
3c2e833
Merge branch 'devel' of github.com:idaholab/raven into devel
PaulTalbot-INL May 11, 2017
eb95597
user guide for single value ravenoutput plots generated
PaulTalbot-INL May 11, 2017
98c92b1
associated guide docs
PaulTalbot-INL May 11, 2017
4e8f2ad
Merge branch 'devel' of github.com:idaholab/raven into devel
PaulTalbot-INL May 15, 2017
af1247d
implemented cached xr.Dataset, although it is not used in the code ye…
PaulTalbot-INL May 16, 2017
2f23222
Switching to sourcing the raven environment.
joshua-cogliati-inl Apr 13, 2017
27faa13
Adding checking the home directory for a raven_libs_profile to source.
joshua-cogliati-inl May 17, 2017
a620c1c
script for install or create with conda
PaulTalbot-INL May 17, 2017
afb7cdb
Merge branch 'devel' of github.com:idaholab/raven into devel
PaulTalbot-INL May 17, 2017
3f679fd
Merge branch 'devel' into cached_xarray
PaulTalbot-INL May 17, 2017
8257615
Merge pull request #1 from joshua-cogliati-inl/script_update
PaulTalbot-INL May 17, 2017
bc9c003
updated from josh work
PaulTalbot-INL May 17, 2017
b5b2257
update based on comments
PaulTalbot-INL May 17, 2017
9106b97
removed extraneous files
PaulTalbot-INL May 17, 2017
69df641
added regression test to tests
PaulTalbot-INL May 17, 2017
f23f87c
dummy commit to force testing
PaulTalbot-INL May 18, 2017
b1afabc
docstrings fixed
PaulTalbot-INL May 18, 2017
6dcd2bb
mergefixes from devel
PaulTalbot-INL Oct 2, 2017
82676e1
added ND cached np array
PaulTalbot-INL Oct 11, 2017
c9856ca
work-in-progress commit for collaboration
PaulTalbot-INL Oct 12, 2017
b14f698
added test script with data object templates XrDataObject in DataObjects
PaulTalbot-INL Oct 12, 2017
b1fcdd8
new cached ND array, now with DataArrays and ND functionality
PaulTalbot-INL Oct 20, 2017
e5604f2
working on integrating new data object
PaulTalbot-INL Oct 20, 2017
1 change: 1 addition & 0 deletions README.md
@@ -1,4 +1,5 @@
# Raven

Risk Analysis Virtual Environment

RAVEN (Risk Analysis Virtual Environment) is one of the many INL-developed software tools researchers can use to identify and increase the safety margin in nuclear reactor systems.
142 changes: 73 additions & 69 deletions doc/user_manual/install.tex

Large diffs are not rendered by default.

24 changes: 12 additions & 12 deletions doc/user_manual/model.tex
@@ -29,13 +29,13 @@ \section{Models}
These aliases can be used anywhere in the RAVEN input to refer to the #1
variables.
%
In the body of this node the user specifies the name of the variable that the model is going to use
(during its execution).
%
The actual alias, usable throughout the RAVEN input, is instead defined in the
\xmlAttr{variable} attribute of this tag.
\\The user can specify aliases for both the input and the output space. As a sanity check, RAVEN
requires an additional attribute \xmlAttr{type}. This attribute can be either ``input'' or ``output''.
%
\nb The user can specify as many aliases as needed.
%
@@ -900,7 +900,7 @@ \subsection{Dummy}
<Models>
...
<Dummy name='aUserDefinedName1' subType=''/>

<Dummy name='aUserDefinedName2' subType=''>
<alias variable="a_RAVEN_input_variable" type="input">
another_name_for_this_variable_in_the_model
@@ -1311,14 +1311,14 @@ \subsection{EnsembleModel}
\nb All the inputs here specified need to be listed in the Steps where the EnsembleModel
is used.
\item \xmlNode{Output}, \xmlDesc{string, optional field},
represents the output entities that need to be linked to this sub-model. \nb The \xmlNode{Output}s here specified are not part
of the determination of the EnsembleModel execution but represent an additional storage of results from the
sub-models. For example, if the \xmlNode{TargetEvaluation} is of type PointSet (since just scalar data needs to be transferred to other
models) and the sub-model is able to also output history-type data, this Output can be of type HistorySet.
Note that the structure of each Output dataObject must include only variables (either input or output) that are defined in the model.
For example, the Output dataObjects cannot contain variables that are defined at the EnsembleModel level.
%
The user can specify as many \xmlNode{Output}(s) as needed. The optional \xmlNode{Output}s can be of both classes ``DataObjects'' and ``Databases''
(e.g. PointSet, HistorySet, HDF5).
\nb \textbf{The \xmlNode{Output}(s) here specified MUST be listed in the Step in which the EnsembleModel is used.}
\end{itemize}
@@ -1334,16 +1334,16 @@ \subsection{EnsembleModel}
%
\begin{itemize}
\item \xmlNode{maxIterations}, \xmlDesc{integer, optional field},
maximum number of Picard iterations to be performed (in case the iteration scheme does
not converge earlier). \default{30};
\item \xmlNode{tolerance}, \xmlDesc{float, optional field},
convergence criterion. It represents the L2 norm residue below which the Picard iterative scheme is
considered converged. \default{0.001};
\item \xmlNode{initialConditions}, \xmlDesc{XML node, required parameter (if Picard's scheme is activated)},
Within this sub-node, the initial conditions for the input variables (that are part of a loop) need to
be specified in sub-nodes named after the variable (e.g. \xmlNode{varName}). The body of the
\xmlNode{varName} node contains the value of the initial conditions (scalar or array, depending on the
type of variable). If an array needs to be input, the user can specify the attribute \xmlAttr{repeat},
and the code will repeat the value given in the body \xmlAttr{repeat} times.
\item \xmlNode{initialStartModels}, \xmlDesc{XML node, required parameter only when Picard's iteration is activated},
specifies the list of models that will be initially executed. \nb Do not input this node for non-Picard calculations,
@@ -1407,7 +1407,7 @@ \subsection{EnsembleModel}
<var3> 45.0</var3>
</initialConditions>
</settings>

<Model class="Models" type="ExternalModel">
thermalConductivityComputation
<TargetEvaluation class="DataObjects" type="PointSet">
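The Picard scheme described in the EnsembleModel settings above (iterate the loop of coupled models until the L2 norm of the residual drops below \xmlNode{tolerance}, up to \xmlNode{maxIterations} passes) can be sketched in plain Python. This is an illustrative reimplementation of the idea only; the function and variable names are hypothetical, not RAVEN API.

```python
import numpy as np

def picard_iterate(models, x0, tol=0.001, max_iter=30):
    """Fixed-point (Picard) iteration over a loop of coupled models.

    models   : list of callables; each maps the current variable vector to a new one
    x0       : initial conditions for the looped input variables
    tol      : L2-norm residual below which the scheme is considered converged
    max_iter : maximum number of Picard iterations
    """
    x = np.asarray(x0, dtype=float)
    for iteration in range(max_iter):
        x_new = x
        for model in models:                 # execute each sub-model in sequence
            x_new = model(x_new)
        residual = np.linalg.norm(x_new - x)  # L2 norm residue
        x = x_new
        if residual < tol:
            return x, iteration + 1, True
    return x, max_iter, False

# toy coupled system: x = cos(x) has a fixed point near 0.739
solution, iters, converged = picard_iterate([np.cos], [0.5])
```

With the defaults above the toy loop converges well within 30 iterations; a stiffer coupling would need a larger \xmlNode{maxIterations} or relaxation.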
56 changes: 37 additions & 19 deletions doc/user_manual/postprocessor.tex
@@ -102,6 +102,7 @@ \subsubsection{BasicStatistics}
the \textbf{variationCoefficient} will be \textbf{INF}.
\item \textbf{skewness}: skewness
\item \textbf{kurtosis}: excess kurtosis (also known as Fisher's kurtosis)
\item \textbf{samples}: the number of samples in the data set used to determine the statistics.
\end{itemize}
The matrix quantities available for request are:
\begin{itemize}
@@ -111,7 +112,6 @@ \subsubsection{BasicStatistics}
\item \textbf{NormalizedSensitivity}: matrix of normalized sensitivity
coefficients. \nb{It is the matrix of normalized VarianceDependentSensitivity}
\item \textbf{VarianceDependentSensitivity}: matrix of sensitivity coefficients dependent on the variance of the variables
\end{itemize}
If all the quantities need to be computed, this can be done through the \xmlNode{all} node, which
requires the \xmlNode{targets} and \xmlNode{features} sub-nodes.
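As an illustration of a few of the scalar quantities listed above, the following sketch computes them with NumPy. This is an independent reimplementation for clarity under the definitions given in this section (excess kurtosis, variationCoefficient becoming INF for zero mean), not RAVEN's BasicStatistics code.

```python
import numpy as np

def basic_statistics(x):
    """Compute a few of the scalar quantities listed above for a 1-D sample."""
    x = np.asarray(x, dtype=float)
    mean = x.mean()
    sigma = x.std(ddof=1)                 # sample standard deviation
    centered = x - mean
    return {
        'expectedValue': mean,
        'sigma': sigma,
        # variationCoefficient is sigma/mean; it is INF when the mean is zero
        'variationCoefficient': np.inf if mean == 0 else sigma / mean,
        'skewness': np.mean(centered**3) / np.mean(centered**2)**1.5,
        # excess kurtosis (Fisher's definition): normally distributed data gives ~0
        'kurtosis': np.mean(centered**4) / np.mean(centered**2)**2 - 3.0,
        'samples': x.size,
    }

stats = basic_statistics([1.0, 2.0, 3.0, 4.0, 5.0])
```

Note the population-moment form of skewness and kurtosis is used here; bias-corrected estimators would differ slightly for small samples.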
@@ -1289,10 +1289,10 @@ \subsubsection{Interfaced}

\paragraph{Method: dataObjectLabelFilter}
This Post-Processor allows the user to filter the portion of a dataObject, either PointSet or HistorySet, associated with a given clustering label.
A clustering algorithm associates a unique cluster label with each element of the dataObject (PointSet or HistorySet).
This cluster label is a natural number ranging from $0$ (or $1$, depending on the algorithm) to $N$, where $N$ is the number of obtained clusters.
Recall that some clustering algorithms (e.g., K-Means) receive $N$ as input, while others (e.g., Mean-Shift) determine $N$ after clustering has been performed.
Thus, this Post-Processor is naturally employed after a data-mining clustering technique has been applied to a dataObject, so that each cluster
can be analyzed separately.
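
A minimal sketch of the filtering idea in plain NumPy (not the RAVEN implementation): given per-realization cluster labels, keep only the realizations carrying the requested label.

```python
import numpy as np

def filter_by_label(data, labels, keep_label):
    """Return the rows of `data` whose cluster label equals `keep_label`.

    data       : 2-D array, one realization per row
    labels     : 1-D array of cluster labels, one per realization
    keep_label : the cluster label to retain
    """
    labels = np.asarray(labels)
    mask = labels == keep_label           # boolean mask over realizations
    return np.asarray(data)[mask]

data = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0], [4.0, 40.0]])
labels = [0, 1, 0, 1]                     # e.g. produced by a clustering PostProcessor
cluster0 = filter_by_label(data, labels, 0)   # keeps rows 0 and 2
```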

In the \xmlNode{PostProcessor} input block, the following XML sub-nodes are required,
@@ -1320,10 +1320,10 @@ \subsubsection{Interfaced}

The user is required to provide the following information:
\begin{itemize}
\item the set of input variables. For each variable the following need to be specified:
\begin{itemize}
\item the set of values that imply a reliability value equal to $1$ for the input variable
\item the set of values that imply a reliability value equal to $0$ for the input variable
\end{itemize}
\item the output target variable. For this variable it is needed to specify the values of the output target variable that defines the desired outcome.
\end{itemize}
@@ -1333,11 +1333,11 @@ \subsubsection{Interfaced}
\item $R_0$ Probability of the outcome of the output target variable (nominal value)
\item $R^{+}_i$ Probability of the outcome of the output target variable if reliability of the input variable is equal to $0$
\item $R^{-}_i$ Probability of the outcome of the output target variable if reliability of the input variable is equal to $1$
\end{itemize}

Available measures are:
\begin{itemize}
\item Risk Achievement Worth (RAW): $RAW = R^{+}_i / R_0 $
\item Risk Reduction Worth (RRW): $RRW = R_0 / R^{-}_i$
\item Fussell-Vesely (FV): $FV = (R_0 - R^{-}_i) / R_0$
\item Birnbaum (B): $B = R^{+}_i - R^{-}_i$
@@ -1358,7 +1358,7 @@ \subsubsection{Interfaced}
\end{itemize}
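
The four measures follow directly from $R_0$, $R^{+}_i$, and $R^{-}_i$; a minimal Python sketch (the function name and sample values are illustrative, not RAVEN code):

```python
def risk_importance_measures(r0, r_plus, r_minus):
    """Risk importance measures for one input variable.

    r0      : probability of the outcome at nominal conditions (R_0)
    r_plus  : outcome probability when the variable's reliability is 0 (R+_i)
    r_minus : outcome probability when the variable's reliability is 1 (R-_i)
    """
    return {
        'RAW': r_plus / r0,            # Risk Achievement Worth
        'RRW': r0 / r_minus,           # Risk Reduction Worth
        'FV':  (r0 - r_minus) / r0,    # Fussell-Vesely
        'B':   r_plus - r_minus,       # Birnbaum
    }

measures = risk_importance_measures(r0=0.01, r_plus=0.05, r_minus=0.002)
```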

\textbf{Example:}
This example calculates all available risk importance measures for two input variables (i.e., pumpTime and valveTime)
given an output target variable (i.e., Tmax).
A value of the input variable pumpTime in the interval $[0,240]$ implies a reliability value of the input variable pumpTime equal to $0$.
A value of the input variable valveTime in the interval $[0,60]$ implies a reliability value of the input variable valveTime equal to $0$.
@@ -1375,14 +1375,14 @@ \subsubsection{Interfaced}
<variable R0values='0,240' R1values='1441,2880'>pumpTime</variable>
<variable R0values='0,60' R1values='1441,2880'>valveTime</variable>
<target values='2200,2500' >Tmax</target>
</PostProcessor>
...
</Models>
...
</Simulation>
\end{lstlisting}

This Post-Processor also allows the user to consider multiple datasets (a data set for each initiating event) and calculate the global risk importance measures.
This can be performed by:
\begin{itemize}
\item Including all datasets in the step
@@ -1418,7 +1418,7 @@ \subsubsection{Interfaced}
<target values='0.9,1.1'>outcome</target>
<data freq='0.01'>outRun1</data>
<data freq='0.02'>outRun2</data>
</PostProcessor>
...
</Models>
...
@@ -1428,8 +1428,8 @@ \subsubsection{Interfaced}
\end{itemize}

This post-processor can be made time dependent if a single HistorySet is provided among the other data objects.
The HistorySet contains the temporal profiles of a subset of the input variables. This temporal profile can only be
boolean, i.e., 0 (component offline) or 1 (component online).
Note that the provided HistorySet must contain a single History; multiple Histories are not allowed.
When this post-processor is in a dynamic configuration (i.e., time-dependent), the user is required to specify an XML
node \xmlNode{temporalID} that indicates the ID of the temporal variable.
@@ -1450,12 +1450,12 @@ \subsubsection{Interfaced}
<target values='0.9,1.1'>outcome</target>
<data freq='1.0'>outRun1</data>
<temporalID>time</temporalID>
</PostProcessor>
...
</Models>
...
<Steps>
...
<PostProcess name="PP">
<Input class="DataObjects" type="PointSet" >outRun1</Input>
<Input class="DataObjects" type="HistorySet" >timeDepProfiles</Input>
@@ -1505,6 +1505,11 @@ \subsubsection{RavenOutput}
file. This will appear as an entry in the output \xmlNode{DataObject}, and the corresponding column contains
the values extracted from this file. If not specified, RAVEN will attempt to find a suitable integer ID
to use, and a warning will be raised.

When defining the \xmlNode{DataObject} that this postprocessor will write to, and when using the static
(non-\xmlNode{dynamic}) form of the postprocessor, the \xmlNode{input} space should be given as
\xmlString{ID}, and the output variables should be the outputs specified in the postprocessor. See the
examples below. In the data object, the variable values will be keyed on the \xmlString{ID} parameter.
\end{itemize}
Each value to be extracted from the file must be specified by one of the following
\xmlNode{output} nodes within the \xmlNode{File} node:
@@ -1545,7 +1550,11 @@ \subsubsection{RavenOutput}
</ans>
</BasicStatistics>
\end{lstlisting}

The RAVEN input to extract this information would appear as follows.
For clarity, we also include an example of defining the \xmlNode{DataObject} that this postprocessor will write to.

\begin{lstlisting}[style=XML]
<Simulation>
...
Expand All @@ -1556,7 +1565,7 @@ \subsubsection{RavenOutput}
...
<Models>
...
<PostProcessor name='pp' subType='RavenOutput'>
<File name='in1' ID='1'>
<output name='first'>ans|val1</output>
<output name='second'>ans|val2</output>
Expand All @@ -1569,6 +1578,15 @@ \subsubsection{RavenOutput}
...
</Models>
...
<DataObjects>
...
<PointSet name='pointSetName'>
<input>ID</input>
<output>first,second</output>
</PointSet>
...
</DataObjects>
...
</Simulation>
\end{lstlisting}

3 changes: 3 additions & 0 deletions framework/DataObjects/Factory.py
@@ -29,6 +29,7 @@
from DataObjects.Data import Data
from DataObjects.PointSet import PointSet
from DataObjects.HistorySet import HistorySet
from DataObjects.XrDataObject import DataSet
## [ Add new class here ]
################################################################################
## Alternatively, to fully automate this file:
@@ -45,6 +46,8 @@

for classObj in utils.getAllSubclasses(eval(__base)):
__interFaceDict[classObj.__name__] = classObj
# TODO hack add-on
__interFaceDict['DataSet'] = DataSet

def knownTypes():
"""
125 changes: 125 additions & 0 deletions framework/DataObjects/TestDataSets.py
@@ -0,0 +1,125 @@
# Copyright 2017 Battelle Energy Alliance, LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
  This module performs unit tests for the DataSet class.
  It cannot be considered part of the active code, but is part of the regression test system
"""

#For future compatibility with Python 3
from __future__ import division, print_function, unicode_literals, absolute_import
import warnings
warnings.simplefilter('default',DeprecationWarning)

import xml.etree.ElementTree as ET
import sys, os
import pickle as pk
import numpy as np
frameworkDir = os.path.dirname(os.path.abspath(os.path.join(sys.argv[0],'..')))

sys.path.append(frameworkDir)
from utils.utils import find_crow

find_crow(frameworkDir)

import XrDataObject # FIXME
import MessageHandler

mh = MessageHandler.MessageHandler()
mh.initialize({'verbosity':'debug'})

print(XrDataObject)
def createElement(tag,attrib=None,text=None):
"""
    Method to create a dummy xml element readable by the data object classes
    @ In, tag, string, the node tag
    @ In, attrib, dict, optional, the attributes of the xml node
    @ In, text, str, optional, the text content of the xml node
    @ Out, element, xml.etree.ElementTree.Element, the constructed element
"""
if attrib is None:
attrib = {}
if text is None:
text = ''
element = ET.Element(tag,attrib)
element.text = text
return element

results = {"pass":0,"fail":0}

def checkAnswer(comment,value,expected,tol=1e-10):
"""
    This method compares two floats within a given tolerance
@ In, comment, string, a comment printed out if it fails
@ In, value, float, the value to compare
@ In, expected, float, the expected value
@ In, tol, float, optional, the tolerance
@ Out, None
"""
if abs(value - expected) > tol:
print("checking answer",comment,value,"!=",expected)
results["fail"] += 1
else:
results["pass"] += 1

#Test module methods #TODO
#print(Distributions.knownTypes())
#Test error
#try:
# Distributions.returnInstance("unknown",'dud')
#except:
# print("error worked")

############
# Data Set #
############

xml = createElement('DataSet',attrib={'name':'test'})
xml.append(createElement('Input',text='a,b,c'))
xml.append(createElement('Output',text='x,y,z'))

# check construction
# check builtins
# check basic property getters

# check appending (add row)
# check add var (add column)
# check remove var (remove column)
# check remove sample (remove row)

# check slicing
# # var vals
# # realization vals
# # by meta
# check find-by-index
# check find-by-value (including find-by-meta)

# check write to CSV
# check write to netCDF
# check load from CSV
# check load from netCDF


print(results)

sys.exit(results["fail"])
"""
<TestInfo>
<name>framework.test_datasets</name>
<author>talbpaul</author>
<created>2017-10-20</created>
<classesTested>DataSet</classesTested>
<description>
This test is a Unit Test for the DataSet classes.
</description>
</TestInfo>
"""