diff --git a/CHANGELOG.md b/CHANGELOG.md
index 57d0befca..937987d69 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,9 +1,10 @@
-Release v0.13.0 - March 2021
+Release v0.13.0 - April 2021
 ---------------------------------

 #### New features

+- DeerLab now supports Python 3.9.
 - The function ``fitsignal`` has been re-named to ``fitmodel`` for correctness and consistency with other functions ([#102](https://github.com/JeschkeLab/DeerLab/pull/102)).
 - Added new experiment models for RIDME on systems with one to seven harmonic pathways (S=1/2 to S=7/2) to include all higher harmonics (overtones) ([#79](https://github.com/JeschkeLab/DeerLab/pull/79)).
 - Bootstrapping is now embedded into ``fitmodel`` to automatically bootstrap all output quantities without the need to write additional script lines ([#55](https://github.com/JeschkeLab/DeerLab/issues/55)). In ``fitmodel`` a new option ``uq`` allows to switch between covariance or bootstrapping uncertainty quantification ([#88](https://github.com/JeschkeLab/DeerLab/pull/88)).
@@ -11,24 +12,27 @@ Release v0.13.0 - March 2021
 - Implemented several initialization strategies in ``fitmultimodel`` for multi-model components ([#67](https://github.com/JeschkeLab/DeerLab/pull/67)). Three different new strategies ``'spread'``, ``'split'`` and ``'merge'`` will initialize the parameter values of the N-component fit based on the results of the N-1/N+1 component fit to improve quality of results and speed.
 - Added contribution guidelines to the documentation and automated list of DeerLab contributors.
 - The function ``snlls`` now accepts additional custom penalties to include in the optimization ([#76](https://github.com/JeschkeLab/DeerLab/issues/76), [#108](https://github.com/JeschkeLab/DeerLab/pull/112)).
+- All fit functions now return the fit of the data along with its uncertainty automatically as part of the ``FitResult`` object ([#130](https://github.com/JeschkeLab/DeerLab/issues/130), [#134](https://github.com/JeschkeLab/DeerLab/pull/134)).

 #### Overall changes

-- The performance of all fit functions has been considerably accelerated by removing call overheads in built-in DeerLab models ([#100](https://github.com/JeschkeLab/DeerLab/issues/100), [#101](https://github.com/JeschkeLab/DeerLab/pull/101)).
-- All fit functions now return the fitted data and its uncertainty quantification as part of the ``FitResult`` object ([#130](https://github.com/JeschkeLab/DeerLab/issues/130), [#134](https://github.com/JeschkeLab/DeerLab/pull/134)).
+- The performance of all fit functions has been considerably accelerated by removing call overheads in built-in DeerLab models ([#100](https://github.com/JeschkeLab/DeerLab/issues/100), [#101](https://github.com/JeschkeLab/DeerLab/pull/101), [#143](https://github.com/JeschkeLab/DeerLab/pull/143)).
 - Improved robustness of the installer ([#65](https://github.com/JeschkeLab/DeerLab/pull/65)):
     - The installer no longer assumes the alias ``pip`` to be setup on the system.
     - The installation will now handle cases when system-wide privileges are not available ([#52](https://github.com/JeschkeLab/DeerLab/issues/52)).
     - Improved robustness of the installation in Windows systems to avoid missing DLL errors ([#64](https://github.com/JeschkeLab/DeerLab/issues/64)).
-    - The installer will now get the latest Numpy/Scipy releases in Windows systems.
-- Adapted piece of code leading to a ``VisibleDeprecationWarning `` visible during execution of certain DeerLab functions.
+    - On Windows systems, the installer will now get the latest Numpy/Scipy releases available at the [Gohlke repository](https://www.lfd.uci.edu/~gohlke/pythonlibs/).
+- Adapted a piece of code that caused a ``VisibleDeprecationWarning`` to appear during execution of certain DeerLab functions.
 - Improved interface of built-in plots in ``FitResult.plot()``. The method now returns a Matplotlib figure object (`matplotlib.figure.Figure`) instead of an axes object (``matplotlib.axes._subplots.AxesSubplot``) which can be modified more freely to adjust graphical elements ([#85](https://github.com/JeschkeLab/DeerLab/issues/85)). The method now takes an optional keyword ``FitResult.plot(show=True\False)`` to enable/disable rendering of the graphics upon calling the method ([#87](https://github.com/JeschkeLab/DeerLab/pull/87)).
 - The fit objective values returned in ``FitResult.cost`` are now correct (previous versions had an erroneous 1/2 factor) ([#80](https://github.com/JeschkeLab/DeerLab/issues/80)). The value is now returned as a scalar value instead of a single-element list ([#81](https://github.com/JeschkeLab/DeerLab/issues/81)).
 - Removed the re-normalization conventions ``K(t=0,r)=1`` and ``B(t=0)=1`` and associated options ``renormalize`` and ``renormpaths`` in the ``dipolarkernel`` and ``dipolarbackground`` functions ([#99](https://github.com/JeschkeLab/DeerLab/pull/99)) to avoid identifiability issues between dipolar pathway amplitudes and signal scales during fitting ([#76](https://github.com/JeschkeLab/DeerLab/issues/76)).
 - The fit convergence criteria ``tol`` (objective function tolerance) and ``maxiter`` (iteration limit) are now exposed as keyword argument in all fit functions ([#111](https://github.com/JeschkeLab/DeerLab/issues/111), [#112](https://github.com/JeschkeLab/DeerLab/pull/112)).
-- Improvements and corrections to the documentation ([#95](https://github.com/JeschkeLab/DeerLab/pull/95), [#96](https://github.com/JeschkeLab/DeerLab/pull/96), [#104](https://github.com/JeschkeLab/DeerLab/pull/104), [#106](https://github.com/JeschkeLab/DeerLab/pull/106), [#107](https://github.com/JeschkeLab/DeerLab/pull/107))
-- Corrections in the ``info`` dictionary of multiple ``dd_models``. The key ``Parameters`` of some models contained the wrong names.
-- The keyword argument to request uncertainty quantification has been unified across all fitting functions. It is now ``uq``.
+- Multiple improvements and corrections to the documentation ([#95](https://github.com/JeschkeLab/DeerLab/pull/95), [#96](https://github.com/JeschkeLab/DeerLab/pull/96), [#104](https://github.com/JeschkeLab/DeerLab/pull/104), [#106](https://github.com/JeschkeLab/DeerLab/pull/106), [#107](https://github.com/JeschkeLab/DeerLab/pull/107), [#115](https://github.com/JeschkeLab/DeerLab/pull/115), [#122](https://github.com/JeschkeLab/DeerLab/pull/122)).
+- Corrections in the metadata of multiple ``dd_models``. The key ``Parameters`` of some models contained the wrong names.
+- The metadata of the built-in models is now accessible and manipulable via function attributes (e.g. ``dd_gauss.parameters``) rather than through a returned dictionary (e.g. ``dd_gauss()['Parameters']``) ([#143](https://github.com/JeschkeLab/DeerLab/pull/143)).
+- The keyword argument to request uncertainty quantification has been unified across all fitting functions. It is now ``uq`` ([#120](https://github.com/JeschkeLab/DeerLab/pull/120)).
+- The ``UncertQuant`` class has been renamed to ``UQResult`` ([#123](https://github.com/JeschkeLab/DeerLab/pull/123)).
+- Uncertainty quantification is now tested numerically against an external package (``lmfit``) to ensure quality and accuracy ([#121](https://github.com/JeschkeLab/DeerLab/pull/121)).

 #### Specific changes

 - ``deerload``:
@@ -42,17 +46,27 @@ Release v0.13.0 - March 2021
     - Removed the keyword argument ``uqanalysis=True/False``. The uncertainty quantification can now be disabled via the new keyword ``uq=None`` ([#98](https://github.com/JeschkeLab/DeerLab/pull/98)).
     - Corrected the behaviour of built-in start values when manually specifying boundaries ([#73](https://github.com/JeschkeLab/DeerLab/pull/73)). If the built-in start values are outside of the user-specified boundaries the program will now automatically set the start values in the middle of the boundaries to avoid errors ([#72](https://github.com/JeschkeLab/DeerLab/issues/72)).
     - Implemented the constraint ``Lam0+sum(lam)<=1`` to ensure the structural-identifiability of ``Lam0`` and ``V0`` during SNLLS optimization of experiment models with more than one modulated dipolar pathway (i.e. does not affect ``ex_4pdeer``) ([#76](https://github.com/JeschkeLab/DeerLab/issues/76),[#108](https://github.com/JeschkeLab/DeerLab/pull/108)).
+    - Removed the optional keyword argument ``regtype`` ([#137](https://github.com/JeschkeLab/DeerLab/pull/137)).
+- ``fitregmodel``:
+    - Corrected the behaviour of the uncertainty quantification when disabling the non-negativity constraint ([#121](https://github.com/JeschkeLab/DeerLab/pull/121)).
 - ``fitparamodel``:
     - Made ``par0`` a positional argument instead of an optional keyword ([#70](https://github.com/JeschkeLab/DeerLab/issues/70)). to avoid errors when not defined ([#69](https://github.com/JeschkeLab/DeerLab/issues/69)).
-    - Keyword argument ``rescale`` has been renamed to ``fitscale`` ([#128])https://github.com/JeschkeLab/DeerLab/issues/128)).
+    - Keyword argument ``rescale`` has been renamed to ``fitscale`` ([#128](https://github.com/JeschkeLab/DeerLab/issues/128),[#129](https://github.com/JeschkeLab/DeerLab/pull/129)).
 - ``snlls``:
     - Corrected bug that was leading to the smoothness penalty being accounted for twice in the least-squares residual during optimization ([#103](https://github.com/JeschkeLab/DeerLab/issues/103)).
     - Now returns the uncertainty quantification of linear and nonlinear parts as separate objects ``nonlinUncert`` and ``linUncert`` ([#108](https://github.com/JeschkeLab/DeerLab/pull/108)).
     - Improved the covariance-based uncertainty analysis by including correlations between linear and non-linear parameters ([#108](https://github.com/JeschkeLab/DeerLab/pull/108)).
     - Improved the behavior of signal scale determination ([#108](https://github.com/JeschkeLab/DeerLab/pull/108)).
+    - Enabled prescaling of the data to avoid scaling issues during uncertainty quantification ([#132](https://github.com/JeschkeLab/DeerLab/issues/132), [#133](https://github.com/JeschkeLab/DeerLab/pull/133)).
+    - Corrected the behaviour of the uncertainty quantification when disabling the regularization penalty ([#121](https://github.com/JeschkeLab/DeerLab/pull/121)).
 - ``diststats``:
     - Now compatible with non-uniformly defined distance distributions ([#92](https://github.com/JeschkeLab/DeerLab/issues/92), [#94](https://github.com/JeschkeLab/DeerLab/pull/94)).
    - Added internal validation step to avoid non-sensical results when confounding the syntax ([#91](https://github.com/JeschkeLab/DeerLab/pull/91)).
+- ``dipolarkernel``:
+    - Now allows defining pathways without unmodulated components.
+    - All optional keyword arguments can now only be passed as named arguments, not as positional arguments ([#138](https://github.com/JeschkeLab/DeerLab/pull/138)).
+    - The keyword ``pathways`` now only takes lists of pathways and not modulation depth parameters. A new separate keyword ``mod`` takes the modulation depth parameter for the simplified 4-pulse DEER kernel ([#118](https://github.com/JeschkeLab/DeerLab/issues/118), [#138](https://github.com/JeschkeLab/DeerLab/pull/138)).
+    - Renamed the background argument keyword ``B`` to ``bg`` ([#138](https://github.com/JeschkeLab/DeerLab/pull/138)).
 - ``regparamrange``:
     - Implemented new CSD algorithm to avoid LAPACK library crashes encountered when using multiple DeerLab functions calling ``regparamrange`` internally ([#68](https://github.com/JeschkeLab/DeerLab/pull/68)).
 - ``correctphase``:
diff --git a/deerlab/bg_models.py b/deerlab/bg_models.py
index 10e5ffc85..bf4029c23 100644
--- a/deerlab/bg_models.py
+++ b/deerlab/bg_models.py
@@ -16,15 +16,19 @@ docstr_header1 = lambda title,fcnstr: f"""
 {title}

-If called without arguments, returns an ``info`` dictionary of model parameters and boundaries::
+The function takes a list or array of parameters and returns the calculated background model::

-    info = {fcnstr}()
+    B = {fcnstr}(t,param)
+    B = {fcnstr}(t,param,lam)

+The built-in information on the model can be accessed via its attributes::
-Otherwise the function returns the calculated background model::

+    {fcnstr}.parameters  # String list of parameter names
+    {fcnstr}.units       # String list of metric units of parameters
+    {fcnstr}.start       # List of values used as start values during optimization
+    {fcnstr}.lower       # List of values used as lower bounds during optimization
+    {fcnstr}.upper       # List of values used as upper bounds during optimization
-    B = {fcnstr}(t,param)
-    B = {fcnstr}(t,param,lam)

 Parameters
 ----------
@@ -37,15 +41,6 @@
 Returns
 -------
-info : dict
-    Dictionary containing the built-in information of the model:
-
-    * ``info['Parameters']`` - string list of parameter names
-    * ``info['Units']`` - string list of metric units of parameters
-    * ``info['Start']`` - list of values used as start values during optimization
-    * ``info['Lower']`` - list of values used as lower bounds during optimization
-    * ``info['Upper']`` - list of values used as upper bounds during optimization
-    * ``info['ModelFcn']`` - function used to calculate the model output
 B : ndarray
     Dipolar background.
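For orientation, the snippet below is a minimal sketch of the new calling and metadata interface described in the changelog and in the updated docstring header above. It assumes the built-in models remain importable from the package top level (e.g. ``import deerlab as dl``), and the ``dipolarkernel`` call is only illustrative of the new ``mod``/``bg`` keywords::

    import numpy as np
    import deerlab as dl

    r = np.linspace(2,6,200)    # distance axis, nm
    t = np.linspace(0,3,151)    # time axis, μs

    # Model metadata is now read from function attributes instead of an info dictionary
    dl.dd_gauss.parameters      # ('Mean', 'Standard deviation')
    dl.dd_gauss.units           # ('nm', 'nm')
    dl.bg_hom3d.start           # built-in start values of the homogeneous 3D background

    # Models are evaluated directly from a parameter vector
    P = dl.dd_gauss(r,dl.dd_gauss.start)   # Gaussian distribution at its start values
    B = dl.bg_hom3d(t,[50],lam=0.3)        # 50 μM pumped-spin concentration, pathway amplitude 0.3

    # Modulation depth and background are passed via the keyword-only arguments mod and bg
    K = dl.dipolarkernel(t,r,mod=0.3,bg=B)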
@@ -57,14 +52,18 @@ docstr_header2 = lambda title,fcnstr: f""" {title} -If called without arguments, returns an ``info`` dictionary of model parameters and boundaries:: +The function takes a list or array of parameters and returns the calculated background model:: - info = {fcnstr}() + B = {fcnstr}(t,param) +The built-in information on the model can be accessed via its attributes:: -Otherwise the function returns the calculated background model:: + {fcnstr}.parameters # String list of parameter names + {fcnstr}.units # String list of metric units of parameters + {fcnstr}.start # List of values used as start values during optimization + {fcnstr}.lower # List of values used as lower bounds during optimization + {fcnstr}.upper # List of values used as upper bounds during optimization - B = {fcnstr}(t,param) Parameters ---------- @@ -75,23 +74,14 @@ Returns ------- -info : dict - Dictionary containing the built-in information of the model: - - * ``info['Parameters']`` - string list of parameter names - * ``info['Units']`` - string list of metric units of parameters - * ``info['Start']`` - list of values used as start values during optimization - * ``info['Lower']`` - list of values used as lower bounds during optimization - * ``info['Upper']`` - list of values used as upper bounds during optimization - * ``info['ModelFcn']`` - function used to calculate the model output B : ndarray Dipolar background. """ # ================================================================= -def docstring(takes_lambda=False): # ================================================================= +def docstring(takes_lambda=False): """ Decorator: Insert docstring header to a pre-existing docstring """ @@ -108,42 +98,41 @@ def _decorator(func): return _decorator # ================================================================= - -def _parsargs(args,npar,takes_lambda=False): # ================================================================= - name = inspect.stack()[1][3] - - # Check the number of input arguments specified - if not takes_lambda: - if len(args)==2: - t,p = args - else: - raise KeyError(f'The model function {name} requires two input arguments: {name}(t,params).') - if takes_lambda: - if len(args)==3: - t,p,lam = args - elif len(args)==2: - t,p = args - lam = 1 - else: - raise KeyError(f'The model function {name} requires two or three input arguments: {name}(t,params) or {name}(t,params,lambda).') - - t = np.atleast_1d(t) - p = np.atleast_1d(p) +def setmetadata(parameters,units,start,lower,upper): + """ + Decorator: Set model metadata as function attributes + """ + def _setmetadata(func): + func.parameters = parameters + func.units = units + func.start = start + func.lower = lower + func.upper = upper + return func + return _setmetadata +# ================================================================= +# ================================================================= +def _parsargs(t,p,npar): + t,p = np.atleast_1d(t,p) # Check that the correct number of parmameters have been specified if len(p)!=npar: - raise ValueError(f'The model function {name} requires {npar} parameters, but {len(p)} are provided.') - - if takes_lambda: - return t,p,lam - else: - return t,p + raise ValueError(f'The model function requires {npar} parameters, but {len(p)} are provided.') + + return t,p # ================================================================= + +# ================================================================= +@setmetadata( +parameters = ['Concentration of pumped spins'], +units = ['μM'], +start = 
np.asarray([50]), +lower = np.asarray([0.01]), +upper = np.asarray([5000])) @docstring(takes_lambda=True) -def bg_hom3d(*args): -# ====================================================================== +def bg_hom3d(t,param,lam=1): r""" Background from homogeneous distribution of spins in a 3D medium @@ -173,42 +162,35 @@ def bg_hom3d(*args): ``param[0]`` :math:`c_p` 50 0.01 5000 Pumped spin concentration (μM) ============== =============== ============= ============= ============= ================================= """ - def model(t,param,lam=1): - conc = param # concentration, µM - NA = 6.02214076e23 # Avogadro constant, mol^-1 - muB = 9.2740100783e-24 # Bohr magneton, J/T (CODATA 2018 value) - mu0 = 1.25663706212e-6 # magnetic constant, N A^-2 = T^2 m^3 J^-1 (CODATA 2018) - h = 6.62607015e-34 # Planck constant, J/Hz (CODATA 2018) - ge = 2.00231930436256 # free-electron g factor (CODATA 2018 value) - hbar = h/2/pi # reduced Planck constant, J/(rad/s) - - D = (mu0/4/pi)*(muB*ge)**2/hbar # dipolar constant, m^3 s^-1 - - conc = conc*1e-6*1e3*NA # umol/L -> mol/L -> mol/m^3 -> spins/m^3 - - # Compute background function - B = np.exp(-8*pi**2/9/m.sqrt(3)*lam*conc*D*np.abs(t*1e-6)) - return B - - if not args: - info = dict( - Parameters = ['Concentration of pumped spins'], - Units = ['μM'], - Start = np.asarray([50]), - Lower = np.asarray([0.01]), - Upper = np.asarray([5000]), - ModelFcn = model - ) - return info - else: - t,param,lam = _parsargs(args, npar=1, takes_lambda=True) - return model(t,param,lam) + t,param = _parsargs(t,param,npar=1) + + conc = param # concentration, µM + Nav = 6.02214076e23 # Avogadro constant, mol^-1 + muB = 9.2740100783e-24 # Bohr magneton, J/T (CODATA 2018 value) + mu0 = 1.25663706212e-6 # magnetic constant, N A^-2 = T^2 m^3 J^-1 (CODATA 2018) + h = 6.62607015e-34 # Planck constant, J/Hz (CODATA 2018) + ge = 2.00231930436256 # free-electron g factor (CODATA 2018 value) + hbar = h/2/pi # reduced Planck constant, J/(rad/s) + + D = (mu0/4/pi)*(muB*ge)**2/hbar # dipolar constant, m^3 s^-1 + + conc = conc*1e-6*1e3*Nav # umol/L -> mol/L -> mol/m^3 -> spins/m^3 + + # Compute background function + B = np.exp(-8*pi**2/9/m.sqrt(3)*lam*conc*D*np.abs(t*1e-6)) + return B # ====================================================================== +# ================================================================= +@setmetadata( +parameters = ['Spin concentration','Exclusion distance'], +units = ['μM','nm'], +start = np.asarray([50, 1]), +lower = np.asarray([0.01, 0.01]), +upper = np.asarray([5000, 20])) @docstring(takes_lambda=True) -def bg_hom3dex(*args): -# ====================================================================== +def bg_hom3dex(t,param,lam=1): r""" Background from homogeneous distribution of spins with excluded-volume effects @@ -239,61 +221,54 @@ def bg_hom3dex(*args): ``param[1]`` :math:`R` 1 0.1 20 Exclusion distance (nm) ============== =============== ============= ============= ============= ================================= """ - - def model(t,param,lam): - # Load precalculated reduction factor look-up table (Kattnig Eq.(18)) - dR_tab,alphas_tab = load_exvolume_redfactor() - - # Get parameters - conc = param[0] # µM - R = param[1] # nm - - NA = 6.02214076e23 # Avogadro constant, mol^-1 - conc = conc*1e-6*1e3*NA # umol/L -> mol/L -> mol/m^3 -> spins/m^3 - ge = 2.00231930436256 # free-electron g factor (CODATA 2018 value) - mu0 = 1.25663706212e-6 # magnetic constant, N A^-2 = T^2 m^3 J^-1 (CODATA 2018) - muB = 9.2740100783e-24 # Bohr magneton, J/T (CODATA 
2018 value) - h = 6.62607015e-34 # Planck constant, J/Hz (CODATA 2018) - hbar = h/2/pi - - A = (mu0/4/pi)*(ge*muB)**2/hbar # Eq.(6) m^3 s^-1 - - # Calculate reduction factor (Eq.(18)) - if R==0: - alpha = 1 - else: - dR = A*abs(t*1e-6)/(R*1e-9)**3 # unitless - - # Use interpolation of look-up table for small dR - small = dR < max(dR_tab) - alpha = np.zeros(np.shape(dR)) - alpha[small] = np.interp(dR[small], dR_tab, alphas_tab) - - # For large dR, use limiting dR->inf expression - alpha[~small] = 1 - (3/2/pi)*np.sqrt(3)/dR[~small] - - K = 8*pi**2/9/np.sqrt(3)*A*abs(t*1e-6)*alpha # Eq.(17) - B = np.exp(-lam*conc*K) # Eq.(13) - return B - - if not args: - info = dict( - Parameters = ['Spin concentration','Exclusion distance'], - Units = ['μM','nm'], - Start = np.asarray([50, 1]), - Lower = np.asarray([0.01, 0.01]), - Upper = np.asarray([5000, 20]), - ModelFcn = model - ) - return info + t,param = _parsargs(t,param, npar=2) + + # Load precalculated reduction factor look-up table (Kattnig Eq.(18)) + dR_tab,alphas_tab = load_exvolume_redfactor() + + # Get parameters + conc = param[0] # µM + R = param[1] # nm + + NAv = 6.02214076e23 # Avogadro constant, mol^-1 + conc = conc*1e-6*1e3*NAv # umol/L -> mol/L -> mol/m^3 -> spins/m^3 + ge = 2.00231930436256 # free-electron g factor (CODATA 2018 value) + mu0 = 1.25663706212e-6 # magnetic constant, N A^-2 = T^2 m^3 J^-1 (CODATA 2018) + muB = 9.2740100783e-24 # Bohr magneton, J/T (CODATA 2018 value) + h = 6.62607015e-34 # Planck constant, J/Hz (CODATA 2018) + hbar = h/2/pi + + A = (mu0/4/pi)*(ge*muB)**2/hbar # Eq.(6) m^3 s^-1 + + # Calculate reduction factor (Eq.(18)) + if R==0: + alpha = 1 else: - t,param,lam = _parsargs(args, npar=2, takes_lambda=True) - return model(t,param,lam) + dR = A*abs(t*1e-6)/(R*1e-9)**3 # unitless + + # Use interpolation of look-up table for small dR + small = dR < max(dR_tab) + alpha = np.zeros(np.shape(dR)) + alpha[small] = np.interp(dR[small], dR_tab, alphas_tab) + + # For large dR, use limiting dR->inf expression + alpha[~small] = 1 - (3/2/pi)*np.sqrt(3)/dR[~small] + + K = 8*pi**2/9/np.sqrt(3)*A*abs(t*1e-6)*alpha # Eq.(17) + B = np.exp(-lam*conc*K) # Eq.(13) + return B # ====================================================================== + +# ================================================================= +@setmetadata( +parameters = ['Fractal Concentration of pumped spins','Fractal dimensionality'], +units = ['μmol/dmᵈ',''], +start = np.asarray([50, 3]), +lower = np.asarray([0.01, 0+np.finfo(float).eps]), +upper = np.asarray([5000, 6-np.finfo(float).eps])) @docstring(takes_lambda=True) -def bg_homfractal(*args): -# ====================================================================== +def bg_homfractal(t,param,lam=1): r""" Background from homogeneous distribution of spins in a fractal medium @@ -311,55 +286,49 @@ def bg_homfractal(*args): ``param[1]`` :math:`d` 3 0 6 Fractal dimension ============= ============= ============= ============= ============= ========================================================== """ - def model(t,param,lam=1): - # Unpack model paramters - conc = param[0] # concentration, umol/dm^d - d = float(param[1]) # fractal dimension - - # Natural constants - NA = 6.02214076e23 # Avogadro constant, mol^-1 - muB = 9.2740100783e-24 # Bohr magneton, J/T (CODATA 2018 value) - mu0 = 1.25663706212e-6 # magnetic constant, N A^-2 = T^2 m^3 J^-1 (CODATA 2018) - h = 6.62607015e-34 # Planck constant, J/Hz (CODATA 2018) - ge = 2.00231930436256 # free-electron g factor (CODATA 2018 value) - hbar = h/2/pi # 
reduced Planck constant, J/(rad/s) - D = (mu0/4/pi)*(muB*ge)**2/hbar # dipolar constant, m^3 s^-1 - # Units conversion of concentration - conc = conc*1e-6*(np.power(10,d))*NA # umol/dm^d -> mol/m^d -> spins/m^d - + t,param = _parsargs(t,param,npar=2) + + # Unpack model paramters + conc = param[0] # concentration, umol/dm^d + d = float(param[1]) # fractal dimension + + # Natural constants + NA = 6.02214076e23 # Avogadro constant, mol^-1 + muB = 9.2740100783e-24 # Bohr magneton, J/T (CODATA 2018 value) + mu0 = 1.25663706212e-6 # magnetic constant, N A^-2 = T^2 m^3 J^-1 (CODATA 2018) + h = 6.62607015e-34 # Planck constant, J/Hz (CODATA 2018) + ge = 2.00231930436256 # free-electron g factor (CODATA 2018 value) + hbar = h/2/pi # reduced Planck constant, J/(rad/s) + D = (mu0/4/pi)*(muB*ge)**2/hbar # dipolar constant, m^3 s^-1 + # Units conversion of concentration + conc = conc*1e-6*(np.power(10,d))*NA # umol/dm^d -> mol/m^d -> spins/m^d + - # Compute constants - if d==3: - c = -pi/2 - Lam = 4/3/np.sqrt(3) - else: - c = np.cos(d*pi/6)*scp.special.gamma(-d/3) - integrand = lambda z: abs(1-3*z**2)**(d/3) - Lam,_ = scp.integrate.quad(integrand,0,1,limit=1000) - - # Compute background function - B = np.exp(4*pi/3*c*Lam*lam*conc*D**(d/3)*abs(t*1e-6)**(d/3)) - - return B - - if not args: - info = dict( - Parameters = ['Fractal Concentration of pumped spins','Fractal dimensionality'], - Units = ['μmol/dmᵈ',''], - Start = np.asarray([50, 3]), - Lower = np.asarray([0.01, 0+np.finfo(float).eps]), - Upper = np.asarray([5000, 6-np.finfo(float).eps]), - ModelFcn = model - ) - return info + # Compute constants + if d==3: + c = -pi/2 + Lam = 4/3/np.sqrt(3) else: - t,param,lam = _parsargs(args, npar=2, takes_lambda=True) - return model(t,param,lam) + c = np.cos(d*pi/6)*scp.special.gamma(-d/3) + integrand = lambda z: abs(1-3*z**2)**(d/3) + Lam,_ = scp.integrate.quad(integrand,0,1,limit=1000) + + # Compute background function + B = np.exp(4*pi/3*c*Lam*lam*conc*D**(d/3)*abs(t*1e-6)**(d/3)) + + return B # ====================================================================== + +# ================================================================= +@setmetadata( +parameters = ['Decay Rate'], +units = ['μs⁻¹'], +start = np.asarray([0.35]), +lower = np.asarray([0]), +upper = np.asarray([200])) @docstring() -def bg_exp(*args): - # ====================================================================== +def bg_exp(t,param): r""" Exponential background model @@ -382,32 +351,24 @@ def bg_exp(*args): Although the ``bg_exp`` model has the same functional form as ``bg_hom3d``, it is distinct since its parameter is a decay rate constant and not a spin concentration like for ``bg_hom3d``. 
""" - def model(t,param): - t = np.atleast_1d(t) - param = np.atleast_1d(param) - - kappa = param[0] - B = np.exp(-kappa*np.abs(t)) - return B - - if not args: - info = dict( - Parameters = ['Decay Rate'], - Units = ['μs⁻¹'], - Start = np.asarray([0.35]), - Lower = np.asarray([0]), - Upper = np.asarray([200]), - ModelFcn = model - ) - return info - else: - t,param = _parsargs(args, npar=1) - return model(t,param) + t,param = _parsargs(t,param,npar=1) + t = np.atleast_1d(t) + param = np.atleast_1d(param) + kappa = param[0] + B = np.exp(-kappa*np.abs(t)) + return B # ====================================================================== + +# ================================================================= +@setmetadata( +parameters = ['Decay Rate','Stretch factor'], +units = ['μs⁻¹',''], +start = np.asarray([0.25, 1]), +lower = np.asarray([0, 0]), +upper = np.asarray([200, 6])) @docstring() -def bg_strexp(*args): -# ====================================================================== +def bg_strexp(t,param): r""" Stretched exponential background model @@ -430,32 +391,24 @@ def bg_strexp(*args): Although the ``bg_strexp`` model has the same functional form as ``bg_homfractal``, it is distinct since its first parameter is a decay rate constant and not a spin concentration like for ``bg_homfractal``. """ - def model(t,param): - # Unpack model paramters - kappa = param[0] # decay rate, µs^-1 - d = param[1] # fractal dimension - B = np.exp(-kappa*abs(t)**d) - return B - - if not args: - info = dict( - Parameters = ['Decay Rate','Stretch factor'], - Units = ['μs⁻¹',''], - Start = np.asarray([0.25, 1]), - Lower = np.asarray([0, 0]), - Upper = np.asarray([200, 6]), - ModelFcn = model - ) - return info - else: - t,param = _parsargs(args, npar=2) - return model(t,param) + t,param = _parsargs(t,param,npar=2) + kappa = param[0] # decay rate, µs^-1 + d = param[1] # fractal dimension + B = np.exp(-kappa*abs(t)**d) + return B # ====================================================================== +# ================================================================= +@setmetadata( +parameters = ['Decay Rate of 1st component','Stretch factor of 1st component', + 'Decay Rate of 2nd component','Stretch factor of 2nd component'], +units = ['µs^-1','','µs^-1',''], +start = np.asarray([0.25, 1, 0.25, 1]), +lower = np.asarray([ 0, 0, 0, 0]), +upper = np.asarray([200, 6, 200, 6])) @docstring() -def bg_prodstrexp(*args): -# ====================================================================== +def bg_prodstrexp(t,param): r""" Product of two stretched exponentials background model @@ -475,37 +428,28 @@ def bg_prodstrexp(*args): ``param[3]`` :math:`d_2` 1 0 6 2nd strexp stretch factor ============== ================= ============= ============= ============= ================================= """ - def model(t,param): - # Unpack model paramters - kappa1 = param[0] - d1 = param[1] - kappa2 = param[2] - d2 = param[3] - strexp1 = np.exp(-kappa1*abs(t)**d1) - strexp2 = np.exp(-kappa2*abs(t)**d2) - B = strexp1*strexp2 - return B - - if not args: - info = dict( - Parameters = ['Decay Rate of 1st component','Stretch factor of 1st component','Decay Rate of 2nd component','Stretch factor of 2nd component'], - Units = ['µs^-1','','µs^-1',''], - Start = np.asarray([0.25, 1, 0.25, 1]), - Lower = np.asarray([ 0, 0, 0, 0]), - Upper = np.asarray([200, 6, 200, 6]), - ModelFcn = model - ) - return info - else: - t,param = _parsargs(args, npar=4) - return model(t,param) - + t,param = _parsargs(t,param,npar=4) + kappa1 = param[0] + 
d1 = param[1] + kappa2 = param[2] + d2 = param[3] + strexp1 = np.exp(-kappa1*abs(t)**d1) + strexp2 = np.exp(-kappa2*abs(t)**d2) + B = strexp1*strexp2 + return B # ====================================================================== +# ================================================================= +@setmetadata( +parameters = ['Decay Rate of 1st component','Stretch factor of 1st component', + 'Amplitude of 1st component','Decay Rate of 2nd component','Stretch factor of 2nd component'], +units = ['μs⁻¹','','','μs⁻¹',''], +start = np.asarray([0.25, 1, 0.5, 0.25, 1]), +lower = np.asarray([ 0, 0, 0, 0, 0]), +upper = np.asarray([200, 6, 1, 200, 6])) @docstring() -def bg_sumstrexp(*args): -# ====================================================================== +def bg_sumstrexp(t,param): r""" Sum of two stretched exponentials background model @@ -525,38 +469,29 @@ def bg_sumstrexp(*args): ``param[3]`` :math:`d_2` 1 0 6 2nd strexp stretch factor ``param[4]`` :math:`A_1` 0.50 0 1 Relative amplitude ============== ================= ============= ============= ============= ======================================== - """ - def model(t,param): - # Unpack model paramters - kappa1 = param[0] - d1 = param[1] - w1 = param[2] - kappa2 = param[3] - d2 = param[4] - strexp1 = np.exp(-kappa1*abs(t)**d1) - strexp2 = np.exp(-kappa2*abs(t)**d2) - B = w1*strexp1 + (1-w1)*strexp2 - return B - - if not args: - info = dict( - Parameters = ['Decay Rate of 1st component','Stretch factor of 1st component','Amplitude of 1st component','Decay Rate of 2nd component','Stretch factor of 2nd component'], - Units = ['μs⁻¹','','','μs⁻¹',''], - Start = np.asarray([0.25, 1, 0.5, 0.25, 1]), - Lower = np.asarray([ 0, 0, 0, 0, 0]), - Upper = np.asarray([200, 6, 1, 200, 6]), - ModelFcn = model - ) - return info - else: - t,param = _parsargs(args, npar=5) - return model(t,param) + """ + t,param = _parsargs(t,param,npar=5) + kappa1 = param[0] + d1 = param[1] + w1 = param[2] + kappa2 = param[3] + d2 = param[4] + strexp1 = np.exp(-kappa1*abs(t)**d1) + strexp2 = np.exp(-kappa2*abs(t)**d2) + B = w1*strexp1 + (1-w1)*strexp2 + return B # ====================================================================== +# ================================================================= +@setmetadata( +parameters = ['Intercept','1st-order coefficient'], +units = ['','μs⁻¹'], +start = np.asarray([ 1, -1 ]), +lower = np.asarray([ 0, -200]), +upper = np.asarray([200, 200])) @docstring() -def bg_poly1(*args): -# ====================================================================== +def bg_poly1(t,param): r""" Polynomial 1st-order background model @@ -574,31 +509,23 @@ def bg_poly1(*args): ``param[1]`` :math:`p_1` -1 -200 200 1st order weight (μs\ :sup:`-1`) ============== =============== ============= ============= ============= ==================================== """ - def model(t,param): - # Compute polynomial - p = np.copy(np.flip(param)) - p[:-1] = p[:-1] - B = np.polyval(p,abs(t)) - return B - - if not args: - info = dict( - Parameters = ['Intercept','1st-order coefficient'], - Units = ['','μs⁻¹'], - Start = np.asarray([ 1, -1 ]), - Lower = np.asarray([ 0, -200]), - Upper = np.asarray([200, 200]), - ModelFcn = model - ) - return info - else: - t,param = _parsargs(args, npar=2) - return model(t,param) + t,param = _parsargs(t,param, npar=2) + p = np.copy(np.flip(param)) + p[:-1] = p[:-1] + B = np.polyval(p,abs(t)) + return B # ====================================================================== + +# 
================================================================= +@setmetadata( +parameters = ['Intercept','1st-order coefficient','2nd-order coefficient'], +units = ['','μs⁻¹','μs⁻²'], +start = np.asarray([ 1, -1 , -1]), +lower = np.asarray([ 0, -200, -200]), +upper = np.asarray([200, 200, 200])) @docstring() -def bg_poly2(*args): -# ====================================================================== +def bg_poly2(t,param): r""" Polynomial 2nd-order background model @@ -617,31 +544,23 @@ def bg_poly2(*args): ``param[2]`` :math:`p_2` -1 -200 200 2nd order weight (μs\ :sup:`-2`) ============== =============== ============= ============= ============= =================================== """ - def model(t,param): - # Compute polynomial - p = np.copy(np.flip(param)) - p[:-1] = p[:-1] - B = np.polyval(p,abs(t)) - return B - - if not args: - info = dict( - Parameters = ['Intercept','1st-order coefficient','2nd-order coefficient'], - Units = ['','μs⁻¹','μs⁻²'], - Start = np.asarray([ 1, -1 , -1]), - Lower = np.asarray([ 0, -200, -200]), - Upper = np.asarray([200, 200, 200]), - ModelFcn = model - ) - return info - else: - t,param = _parsargs(args, npar=3) - return model(t,param) + t,param = _parsargs(t,param,npar=3) + p = np.copy(np.flip(param)) + p[:-1] = p[:-1] + B = np.polyval(p,abs(t)) + return B # ====================================================================== + +# ================================================================= +@setmetadata( +parameters = ['Intercept','1st-order coefficient','2nd-order coefficient','3rd-order coefficient'], +units = ['','μs⁻¹','μs⁻²','μs⁻³'], +start = np.asarray([ 1, -1 , -1, -1 ]), +lower = np.asarray([ 0, -200, -200, -200]), +upper = np.asarray([200, 200, 200, 200])) @docstring() -def bg_poly3(*args): -# ====================================================================== +def bg_poly3(t,param): r""" Polynomial 3rd-order background model @@ -661,24 +580,9 @@ def bg_poly3(*args): ``param[3]`` :math:`p_3` -1 -200 200 3rd order weight (μs\ :sup:`-3`) ============== =============== ============= ============= ============= =================================== """ - def model(t,param): - # Compute polynomial - p = np.copy(np.flip(param)) - p[:-1] = p[:-1] - B = np.polyval(p,abs(t)) - return B - - if not args: - info = dict( - Parameters = ['Intercept','1st-order coefficient','2nd-order coefficient','3rd-order coefficient'], - Units = ['','μs⁻¹','μs⁻²','μs⁻³'], - Start = np.asarray([ 1, -1 , -1, -1 ]), - Lower = np.asarray([ 0, -200, -200, -200]), - Upper = np.asarray([200, 200, 200, 200]), - ModelFcn = model - ) - return info - else: - t,param = _parsargs(args, npar=4) - return model(t,param) + t,param = _parsargs(t,param,npar=4) + p = np.copy(np.flip(param)) + p[:-1] = p[:-1] + B = np.polyval(p,abs(t)) + return B # ====================================================================== \ No newline at end of file diff --git a/deerlab/dd_models.py b/deerlab/dd_models.py index 31afa41f1..2db3ec7b6 100644 --- a/deerlab/dd_models.py +++ b/deerlab/dd_models.py @@ -12,15 +12,17 @@ docstr_header = lambda title, fcnstr: f""" {title} -If called without arguments, returns an ``info`` dictionary of model parameters and boundaries:: - - info = {fcnstr}() - - -Otherwise the function returns the calculated distance distribution:: +The function takes a list or array of parameters and returns the calculated distance distribution:: P = {fcnstr}(r,param) +The built-in information on the model can be accessed via its attributes:: + + {fcnstr}.parameters # String 
list of parameter names + {fcnstr}.units # String list of metric units of parameters + {fcnstr}.start # List of values used as start values during optimization + {fcnstr}.lower # List of values used as lower bounds during optimization + {fcnstr}.upper # List of values used as upper bounds during optimization Parameters ---------- @@ -32,16 +34,7 @@ Returns ------- -info : dict - Dictionary containing the built-in information of the model: - - * ``info['Parameters']`` - string list of parameter names - * ``info['Units']`` - string list of metric units of parameters - * ``info['Start']`` - list of values used as start values during optimization - * ``info['Lower']`` - list of values used as lower bounds during optimization - * ``info['Upper']`` - list of values used as upper bounds during optimization - * ``info['ModelFcn']`` - function used to calculate the model output - + P : ndarray Distance distribution. """ @@ -88,27 +81,38 @@ def _decorator(func): return _decorator # ================================================================= +# ================================================================= +def setmetadata(parameters,units,start,lower,upper): + """ + Decorator: Set model metadata as function attributes + """ + def _setmetadata(func): + func.parameters = parameters + func.units = units + func.start = start + func.lower = lower + func.upper = upper + return func + return _setmetadata +# ================================================================= -def _parsargs(args,npar): -#================================================================= - name = inspect.stack()[1][3] - if len(args)!=2: - raise KeyError(f'The model function {name} requires two input arguments: {name}(r,params).') - r,p = args +# ================================================================= +def _parsparam(r,p,npar): + r,p = np.atleast_1d(r,p) if len(p)!=npar: - raise ValueError(f'The model function {name} requires {npar} parameters, but {len(p)} are provided.') + raise ValueError(f'The model function requires {npar} parameters, but {len(p)} are provided.') return r,p -#================================================================= +# ================================================================= +# ================================================================= def _normalize(r,P): -#================================================================= if not all(P==0): P = P/np.trapz(P,r) return P -#================================================================= +# ================================================================= +# ================================================================= def _multigaussfun(r,r0,sig,a): -#================================================================= "Compute a distribution with multiple Gaussians" n = len(r0) P = np.zeros_like(r) @@ -116,10 +120,10 @@ def _multigaussfun(r,r0,sig,a): P += a[k]*m.sqrt(1/(2*m.pi))*1/sig[k]*np.exp(-0.5*((r-r0[k])/sig[k])**2) P = _normalize(r,P) return P -#================================================================= +# ================================================================= def _multirice3dfun(r,nu,sig,a): -#================================================================= +# ================================================================= "Compute a distribution with multiple Gaussians" N = len(nu) nu = np.maximum(nu,0) # to avoid invalid values @@ -132,11 +136,17 @@ def _multirice3dfun(r,nu,sig,a): P[P<0] = 0 P = _normalize(r,P) return P 
-#================================================================= +# ================================================================= +# ================================================================= +@setmetadata( +parameters = ('Mean','Standard deviation'), +units = ('nm','nm'), +start = np.asarray([3.5, 0.2]), +lower = np.asarray([1, 0.05]), +upper = np.asarray([20, 2.5])) @docstring() -def dd_gauss(*args): -#================================================================= +def dd_gauss(r,param): r""" Gaussian distribution @@ -154,31 +164,25 @@ def dd_gauss(*args): ``param[1]`` :math:`\sigma` 0.2 0.05 2.5 Standard deviation (nm) ============== ======================== ============= ============= ============= =========================== """ - def model(r,p): - r0 = [p[0]] - sigma = [p[1]] - a = [1.0] - P = _multigaussfun(r,r0,sigma,a) - return P - if not args: - info = dict( - Parameters = ('Mean','Standard deviation'), - Units = ('nm','nm'), - Start = np.asarray([3.5, 0.2]), - Lower = np.asarray([1, 0.05]), - Upper = np.asarray([20, 2.5]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=2) - P = model(r,p) - return P -#================================================================= - + r,param = _parsparam(r,param,npar=2) + r0 = [param[0]] + sigma = [param[1]] + a = [1.0] + P = _multigaussfun(r,r0,sigma,a) + return P +# ================================================================= + + +# ================================================================= +@setmetadata( +parameters = ('Mean of 1st Gaussian', 'Standard deviation of 1st Gaussian', 'Amplitude of 1st Gaussian', + 'Mean of 2nd Gaussian', 'Standard deviation of 2nd Gaussian', 'Amplitude of 2nd Gaussian'), +units = ('nm','nm','','nm','nm',''), +start = np.asarray([2.5, 0.2, 0.5, 3.5, 0.2, 0.5]), +lower = np.asarray([1, 0.05, 0, 1, 0.05, 0]), +upper = np.asarray([20, 2.5, 1, 20, 2.5, 1])) @docstring() -def dd_gauss2(*args): -#================================================================= +def dd_gauss2(r,param): r""" Sum of two Gaussian distributions @@ -201,34 +205,26 @@ def dd_gauss2(*args): ``param[5]`` :math:`a_2` 0.5 0 1 2nd Gaussian amplitude ============== ========================= ============= ============= ============= ====================================== """ - def model(r,p): - r0 = [p[0], p[3]] - sigma = [p[1], p[4]] - a = [p[2], p[5]] - P = _multigaussfun(r,r0,sigma,a) - - return P - - if not args: - info = dict( - Parameters = ('Mean of 1st Gaussian', 'Standard deviation of 1st Gaussian', 'Amplitude of 1st Gaussian', - 'Mean of 2nd Gaussian', 'Standard deviation of 2nd Gaussian', 'Amplitude of 2nd Gaussian'), - Units = ('nm','nm','','nm','nm',''), - Start = np.asarray([2.5, 0.2, 0.5, 3.5, 0.2, 0.5]), - Lower = np.asarray([1, 0.05, 0, 1, 0.05, 0]), - Upper = np.asarray([20, 2.5, 1, 20, 2.5, 1]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=6) - P = model(r,p) - return P -#================================================================= + r,param = _parsparam(r,param,npar=6) + r0 = [param[0], param[3]] + sigma = [param[1], param[4]] + a = [param[2], param[5]] + P = _multigaussfun(r,r0,sigma,a) + return P +# ================================================================= + +# ================================================================= +@setmetadata( +parameters = ('Mean of 1st Gaussian', 'Standard deviation of 1st Gaussian', 'Amplitude of 1st Gaussian', + 'Mean of 2nd Gaussian', 'Standard deviation of 2nd Gaussian', 'Amplitude of 2nd 
Gaussian', + 'Mean of 3rd Gaussian', 'Standard deviation of 3rd Gaussian', 'Amplitude of 3rd Gaussian'), +units = ('nm','nm','','nm','nm','','nm','nm',''), +start = np.asarray([2.5, 0.2, 0.3, 3.5, 0.2, 0.3, 5, 0.2, 0.3]), +lower = np.asarray([1, 0.05, 0, 1, 0.05, 0, 1, 0.05, 0]), +upper = np.asarray([20, 2.5, 1, 20, 2.5, 1, 20, 2.5, 1])) @docstring() -def dd_gauss3(*args): -#================================================================= +def dd_gauss3(r,param): r""" Sum of three Gaussian distributions @@ -253,35 +249,24 @@ def dd_gauss3(*args): ``param[8]`` :math:`a_3` 0.3 0 1 3rd Gaussian amplitude ============== ========================== ============= ============= ============= ======================================= """ - def model(r,p): - r0 = [p[0], p[3], p[6]] - sigma = [p[1], p[4], p[7]] - a = [p[2], p[5], p[8]] - P = _multigaussfun(r,r0,sigma,a) - - return P - - if not args: - info = dict( - Parameters = ('Mean of 1st Gaussian', 'Standard deviation of 1st Gaussian', 'Amplitude of 1st Gaussian', - 'Mean of 2nd Gaussian', 'Standard deviation of 2nd Gaussian', 'Amplitude of 2nd Gaussian', - 'Mean of 3rd Gaussian', 'Standard deviation of 3rd Gaussian', 'Amplitude of 3rd Gaussian'), - Units = ('nm','nm','','nm','nm','','nm','nm',''), - Start = np.asarray([2.5, 0.2, 0.3, 3.5, 0.2, 0.3, 5, 0.2, 0.3]), - Lower = np.asarray([1, 0.05, 0, 1, 0.05, 0, 1, 0.05, 0]), - Upper = np.asarray([20, 2.5, 1, 20, 2.5, 1, 20, 2.5, 1]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=9) - P = model(r,p) - return P -#================================================================= + r,param = _parsparam(r,param,npar=9) + r0 = [param[0], param[3], param[6]] + sigma = [param[1], param[4], param[7]] + a = [param[2], param[5], param[8]] + P = _multigaussfun(r,r0,sigma,a) + return P +# ================================================================= + +# ================================================================= +@setmetadata( +parameters = ('Mean','Spread','Kurtosis'), +units = ('nm','nm',''), +start = np.asarray([3.5, 0.5,0.5]), +lower = np.asarray([1, 0.05, 0.25]), +upper = np.asarray([20, 5, 15])) @docstring() -def dd_gengauss(*args): -#================================================================= +def dd_gengauss(r,param): r""" Generalized Gaussian distribution model @@ -304,36 +289,24 @@ def dd_gengauss(*args): ============== ========================== ============= ============= ============= ======================================= """ - def model(r,p): - # Compute the model distance distribution - r0 = p[0] - sigma = p[1] - beta = p[2] - x = abs(r-r0)/sigma - P = beta/(2*sigma*spc.gamma(1/beta))*np.exp(-x**beta) - P = _normalize(r,P) - - return P - - if not args: - info = dict( - Parameters = ('Mean','Spread','Kurtosis'), - Units = ('nm','nm',''), - Start = np.asarray([3.5, 0.5,0.5]), - Lower = np.asarray([1, 0.05, 0.25]), - Upper = np.asarray([20, 5, 15]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=3) - P = model(r,p) - return P -#================================================================= + r,param = _parsparam(r,param,npar=3) + r0, sigma, beta = param + x = abs(r-r0)/sigma + P = beta/(2*sigma*spc.gamma(1/beta))*np.exp(-x**beta) + P = _normalize(r,P) + return P +# ================================================================= + +# ================================================================= +@setmetadata( +parameters = ('Center','Spread','Kurtosis'), +units = ('nm','nm',''), +start = np.asarray([3.5, 0.2, 5]), +lower 
= np.asarray([1, 0.05, -25]), +upper = np.asarray([20, 5, 25])) @docstring() -def dd_skewgauss(*args): -#================================================================= +def dd_skewgauss(r,param): r""" Skew Gaussian distribution model @@ -357,36 +330,24 @@ def dd_skewgauss(*args): ============== ============== ============= ============= ============= ========================= """ - def model(r,p): - # Compute the model distance distribution - r0 = p[0] - sigma = p[1] - alpha = p[2] - x = (r-r0)/sigma/np.sqrt(2) - P = 1/np.sqrt(2*np.pi)*np.exp(-x**2)*(1 + spc.erf(alpha*x)) - P = _normalize(r,P) - - return P - - if not args: - info = dict( - Parameters = ('Center','Spread','Kurtosis'), - Units = ('nm','nm',''), - Start = np.asarray([3.5, 0.2, 5]), - Lower = np.asarray([1, 0.05, -25]), - Upper = np.asarray([20, 5, 25]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=3) - P = model(r,p) - return P -#================================================================= + r,param = _parsparam(r,param,npar=3) + r0, sigma, alpha = param + x = (r-r0)/sigma/np.sqrt(2) + P = 1/np.sqrt(2*np.pi)*np.exp(-x**2)*(1 + spc.erf(alpha*x)) + P = _normalize(r,P) + return P +# ================================================================= + +# ================================================================= +@setmetadata( +parameters = ('Location','Spread'), +units = ('nm','nm'), +start = np.asarray([3.5, 0.7]), +lower = np.asarray([1, 0.1]), +upper = np.asarray([10, 5])) @docstring() -def dd_rice(*args): -#================================================================= +def dd_rice(r,param): r""" 3D-Rice distribution @@ -406,32 +367,25 @@ def dd_rice(*args): ``param[1]`` :math:`\sigma` 0.7 0.1 5 Spread (nm) ============== ======================== ============= ============= ============= ======================================= """ - def model(r,p): - nu = [p[0]] - sig = [p[1]] - a = [1.0] - P = _multirice3dfun(r,nu,sig,a) - return P - - if not args: - info = dict( - Parameters = ('Location','Spread'), - Units = ('nm','nm'), - Start = np.asarray([3.5, 0.7]), - Lower = np.asarray([1, 0.1]), - Upper = np.asarray([10, 5]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=2) - P = model(r,p) - return P -#================================================================= + r,param = _parsparam(r,param,npar=2) + nu = [param[0]] + sig = [param[1]] + a = [1.0] + P = _multirice3dfun(r,nu,sig,a) + return P +# ================================================================= + +# ================================================================= +@setmetadata( +parameters = ('Location of 1st Rician', 'Spread of 1st Rician', 'Amplitude of 1st Rician', + 'Location of 2nd Rician', 'Spread of 2nd Rician', 'Amplitude of 2nd Rician'), +units = ('nm','nm','','nm','nm',''), +start = np.asarray([2.5, 0.7, 0.5, 4.0, 0.7, 0.5]), +lower = np.asarray([1, 0.1, 0, 1, 0.1, 0]), +upper = np.asarray([10, 5, 1, 10, 5, 1])) @docstring() -def dd_rice2(*args): -#================================================================= +def dd_rice2(r,param): r""" Sum of two 3D-Rice distributions @@ -458,34 +412,26 @@ def dd_rice2(*args): ``param[5]`` :math:`a_2` 0.5 0 1 2nd Rician amplitude ============== ======================== ============= ============= ============= ======================================= """ - def model(r,p): - nu = [p[0], p[3]] - sig = [p[1], p[4]] - a = [p[2], p[5]] - P = _multirice3dfun(r,nu,sig,a) - - return P - - if not args: - info = dict( - Parameters = ('Location of 1st 
Rician', 'Spread of 1st Rician', 'Amplitude of 1st Rician', - 'Location of 2nd Rician', 'Spread of 2nd Rician', 'Amplitude of 2nd Rician'), - Units = ('nm','nm','','nm','nm',''), - Start = np.asarray([2.5, 0.7, 0.5, 4.0, 0.7, 0.5]), - Lower = np.asarray([1, 0.1, 0, 1, 0.1, 0]), - Upper = np.asarray([10, 5, 1, 10, 5, 1]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=6) - P = model(r,p) - return P -#================================================================= + r,param = _parsparam(r,param,npar=6) + nu = [param[0], param[3]] + sig = [param[1], param[4]] + a = [param[2], param[5]] + P = _multirice3dfun(r,nu,sig,a) + return P +# ================================================================= + +# ================================================================= +@setmetadata( +parameters = ('Location of 1st Rician', 'Spread of 1st Rician', 'Amplitude of 1st Rician', + 'Location of 2nd Rician', 'Spread of 2nd Rician', 'Amplitude of 2nd Rician', + 'Location of 3rd Rician', 'Spread of 3rd Rician', 'Amplitude of 3rd Rician'), +units = ('nm','nm','','nm','nm','','nm','nm',''), +start = np.asarray([2.5, 0.7, 0.3, 3.5, 0.7, 0.3, 5, 0.7, 0.3]), +lower = np.asarray([1, 0.1, 0, 1, 0.1, 0, 1, 0.1, 0]), +upper = np.asarray([10, 5, 1, 10, 5, 1, 10, 5, 1])) @docstring() -def dd_rice3(*args): -#================================================================= +def dd_rice3(r,param): r""" Sum of three 3D-Rice distributions @@ -515,35 +461,24 @@ def dd_rice3(*args): ``param[8]`` :math:`a_3` 0.3 0 1 3rd Rician amplitude ============== ======================== ============= ============= ============= ======================================= """ - def model(r,p): - nu = [p[0], p[3], p[6]] - sig = [p[1], p[4], p[7]] - a = [p[2], p[5], p[8]] - P = _multirice3dfun(r,nu,sig,a) - - return P - - if not args: - info = dict( - Parameters = ('Location of 1st Rician', 'Spread of 1st Rician', 'Amplitude of 1st Rician', - 'Location of 2nd Rician', 'Spread of 2nd Rician', 'Amplitude of 2nd Rician', - 'Location of 3rd Rician', 'Spread of 3rd Rician', 'Amplitude of 3rd Rician'), - Units = ('nm','nm','','nm','nm','','nm','nm',''), - Start = np.asarray([2.5, 0.7, 0.3, 3.5, 0.7, 0.3, 5, 0.7, 0.3]), - Lower = np.asarray([1, 0.1, 0, 1, 0.1, 0, 1, 0.1, 0]), - Upper = np.asarray([10, 5, 1, 10, 5, 1, 10, 5, 1]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=9) - P = model(r,p) - return P -#================================================================= + r,param = _parsparam(r,param,npar=9) + nu = [param[0], param[3], param[6]] + sig = [param[1], param[4], param[7]] + a = [param[2], param[5], param[8]] + P = _multirice3dfun(r,nu,sig,a) + return P +# ================================================================= + +# ================================================================= +@setmetadata( +parameters = ('Number of residues','Segment length','Scaling exponent'), +units = ('','nm',''), +start = np.asarray([50, 0.2, 0.602]), +lower = np.asarray([2, 0.1, 0.33 ]), +upper = np.asarray([1000, 0.4, 1 ])) @docstring() -def dd_randcoil(*args): -#================================================================= +def dd_randcoil(r,param): r""" Random-coil model for an unfolded peptide/protein @@ -568,39 +503,30 @@ def dd_randcoil(*args): ============== ============= ============= ============= ============= ======================================= """ - def model(r,p): - N = p[0] # number of residues - nu = p[1] # scaling exponent - R0 = p[2] # residue length - - rsq = 
6*(R0*N**nu)**2 # mean square end-to-end distance from radius of gyration - normFact = 3/(2*np.pi*rsq)**(3/2) # normalization prefactor - ShellSurf = 4*np.pi*r**2 # spherical shell surface - Gaussian = np.exp(-3*r**2/(2*rsq)) - P = normFact*ShellSurf*Gaussian - P = _normalize(r,P) - - return P - - if not args: - info = dict( - Parameters = ('Number of residues','Segment length','Scaling exponent'), - Units = ('','nm',''), - Start = np.asarray([50, 0.2, 0.602]), - Lower = np.asarray([2, 0.1, 0.33 ]), - Upper = np.asarray([1000, 0.4, 1 ]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=3) - P = model(r,p) - return P -#================================================================= + r,param = _parsparam(r,param,npar=3) + N = param[0] # number of residues + nu = param[1] # scaling exponent + R0 = param[2] # residue length + + rsq = 6*(R0*N**nu)**2 # mean square end-to-end distance from radius of gyration + normFact = 3/(2*np.pi*rsq)**(3/2) # normalization prefactor + ShellSurf = 4*np.pi*r**2 # spherical shell surface + Gaussian = np.exp(-3*r**2/(2*rsq)) + P = normFact*ShellSurf*Gaussian + P = _normalize(r,P) + return P +# ================================================================= + +# ================================================================= +@setmetadata( +parameters = ('Center','Radius'), +units = ('nm','nm'), +start = np.asarray([3, 0.5]), +lower = np.asarray([1, 0.1]), +upper = np.asarray([20, 5 ])) @docstring() -def dd_circle(*args): -#================================================================= +def dd_circle(r,param): r""" Semicircle distribution model @@ -619,39 +545,27 @@ def dd_circle(*args): ``param[1]`` :math:`R` 0.5 0.1 5 Radius (nm) ============== ================= ============= ============= ============= ================================= """ - def model(r,p): - # Compute the model distance distribution - r0 = p[0] - R = abs(p[1]) - - dr = r - r0 - idx = abs(dr)(r0+fwhm))] = 0 - P = _normalize(r,P) - - return P - - if not args: - info = dict( - Parameters = ('Center','FWHM'), - Units = ('nm','nm'), - Start = np.asarray([3, 0.5]), - Lower = np.asarray([1, 0.1]), - Upper = np.asarray([20, 5 ]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=2) - P = model(r,p) - return P -#================================================================= + r,param = _parsparam(r,param,npar=2) + r0 = param[0] + fwhm = param[1] + + phi = (r-r0)/fwhm*np.pi + P = (1 + np.cos(phi))/2/fwhm + P[(r<(r0-fwhm)) | (r>(r0+fwhm))] = 0 + P = _normalize(r,P) + return P +# ================================================================= def _pb(r,R): -#================================================================= +# ================================================================= P = np.zeros(len(r)) idx = (r >= 0) & (r <= 2*R) P[idx] = 3*r[idx]**5/(16*R**6) - 9*r[idx]**3/(4*R**4) + 3*r[idx]**2/(R**3) return P -#================================================================= +# ================================================================= def _pbs(r,R1,R2): -#================================================================= +# ================================================================= P = np.zeros(len(r)) # Case1 idx = (r >= 0) & (r < np.minimum(2*R1,R2 - R1)) @@ -730,12 +627,17 @@ def _pbs(r,R1,R2): P = P*3/(16*R1**3*(R2**3 - R1**3)) return P -#================================================================= - +# ================================================================= +# 
================================================================= +@setmetadata( +parameters = ('Inner shell radius','Shell thickness'), +units = ('nm','nm'), +lower = np.asarray([0.1, 0.1]), +upper = np.asarray([20, 20 ]), +start = np.asarray([1.5, 0.5])) @docstring() -def dd_shell(*args): -#================================================================= +def dd_shell(r,param): r""" Uniform spherical shell @@ -774,40 +676,30 @@ def dd_shell(*args): Analytical distance distributions in systems of spherical symmetry with applications to double electron-electron resonance, JMR, 230, 50-63, 2013 """ - def model(r,p): - # Compute the model distance distribution - R1 = float(p[0]) - w = float(p[1]) - R2 = R1 + w - - P = np.zeros(len(r)) - P = R2**6*_pb(r,R2) - R1**6*_pb(r,R1) - 2*(R2**3 - R1**3)*_pbs(r,R1,R2) - - P = P/(R2**3 - R1**3)**2 - - P = _normalize(r,P) - - return P - - if not args: - info = dict( - Parameters = ('Inner shell radius','Shell thickness'), - Units = ('nm','nm'), - Lower = np.asarray([0.1, 0.1]), - Upper = np.asarray([20, 20 ]), - Start = np.asarray([1.5, 0.5]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=2) - P = model(r,p) - return P -#================================================================= + r,param = _parsparam(r,param,npar=2) + R1 = float(param[0]) + w = float(param[1]) + R2 = R1 + w + + P = np.zeros(len(r)) + P = R2**6*_pb(r,R2) - R1**6*_pb(r,R1) - 2*(R2**3 - R1**3)*_pbs(r,R1,R2) + + P = P/(R2**3 - R1**3)**2 + + P = _normalize(r,P) + return P +# ================================================================= + +# ================================================================= +@setmetadata( +parameters = ('Sphere radius','Distance to point'), +units = ('nm','nm'), +lower = np.asarray([0.1, 0.1]), +upper = np.asarray([20, 20 ]), +start = np.asarray([1.5, 3.5])) @docstring() -def dd_spherepoint(*args): -#================================================================= +def dd_spherepoint(r,param): r""" One particle distanced from particles distributed on a sphere @@ -833,37 +725,26 @@ def dd_spherepoint(*args): .. [1] D.R. Kattnig, D. 
Hinderberger, Analytical distance distributions in systems of spherical symmetry with applications to double electron-electron resonance, JMR, 230, 50-63, 2013 """ - def model(r,p): - # Compute the model distance distribution - R = float(p[0]) - d = float(p[1]) - P = np.zeros(len(r)) - idx = (r >= d - R) & (r<= d + R) - P[idx] = 3*r[idx]*(R**2 - (d - r[idx])**2)/(4*d*R**3) - - P = _normalize(r,P) - - return P - - if not args: - info = dict( - Parameters = ('Sphere radius','Distance to point'), - Units = ('nm','nm'), - Lower = np.asarray([0.1, 0.1]), - Upper = np.asarray([20, 20 ]), - Start = np.asarray([1.5, 3.5]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=2) - P = model(r,p) - return P -#================================================================= + r,param = _parsparam(r,param,npar=2) + R = float(param[0]) + d = float(param[1]) + P = np.zeros(len(r)) + idx = (r >= d - R) & (r<= d + R) + P[idx] = 3*r[idx]*(R**2 - (d - r[idx])**2)/(4*d*R**3) + P = _normalize(r,P) + return P +# ================================================================= + +# ================================================================= +@setmetadata( +parameters = ('Sphere radius'), +units = ('nm'), +lower = np.asarray([0.1]), +upper = np.asarray([20]), +start = np.asarray([2.5])) @docstring() -def dd_spheresurf(*args): -#================================================================= +def dd_spheresurf(r,param): r""" Particles distributed on a sphere's surface @@ -888,36 +769,25 @@ def dd_spheresurf(*args): .. [1] D.R. Kattnig, D. Hinderberger, Analytical distance distributions in systems of spherical symmetry with applications to double electron-electron resonance, JMR, 230, 50-63, 2013 """ - def model(r,p): - # Compute the model distance distribution - R = float(p[0]) - P = np.zeros(len(r)) - idx = (r >= 0) & (r<= 2*R) - P[idx] = r[idx]/R**2 - - P = _normalize(r,P) - - return P - - if not args: - info = dict( - Parameters = ('Sphere radius'), - Units = ('nm'), - Lower = np.asarray([0.1]), - Upper = np.asarray([20]), - Start = np.asarray([2.5]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=1) - P = model(r,p) - return P + r,param = _parsparam(r,param,npar=1) + R = float(param[0]) + P = np.zeros(len(r)) + idx = (r >= 0) & (r<= 2*R) + P[idx] = r[idx]/R**2 + P = _normalize(r,P) + return P #================================================================= + +# ================================================================= +@setmetadata( +parameters = ('Inner shell radius','1st Shell thickness','2nd Shell thickness'), +units = ('nm','nm','nm'), +lower = np.asarray([0.1, 0.1, 0.1]), +upper = np.asarray([20, 20, 20 ]), +start = np.asarray([1.5, 0.5, 0.5])) @docstring() -def dd_shellshell(*args): -#================================================================= +def dd_shellshell(r,param): r""" Uniform spherical shell inside another spherical shell @@ -956,48 +826,39 @@ def dd_shellshell(*args): .. [1] D.R. Kattnig, D. 
Hinderberger, Analytical distance distributions in systems of spherical symmetry with applications to double electron-electron resonance, JMR, 230, 50-63, 2013 """ - def model(r,p): - # Compute the model distance distribution - R1 = float(p[0]) - w1 = float(p[1]) - w2 = float(p[2]) - - R2 = R1 + w1 - R3 = R2 + w2 - - delta21 = R2**3 - R1**3 - q21 = delta21*_pbs(r,R1,R2) - delta31 = R3**3 - R1**3 - q31 = delta31*_pbs(r,R1,R3) - delta32 = R3**3 - R2**3 - q32 = delta32*_pbs(r,R2,R3) - - P = R1**3*q21 - R1**3*q31 + R2**3*q32 - P = P/(delta21*delta32) - - P = _normalize(r,P) - - return P - - if not args: - info = dict( - Parameters = ('Inner shell radius','1st Shell thickness','2nd Shell thickness'), - Units = ('nm','nm','nm'), - Lower = np.asarray([0.1, 0.1, 0.1]), - Upper = np.asarray([20, 20, 20 ]), - Start = np.asarray([1.5, 0.5, 0.5]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=3) - P = model(r,p) - return P + r,param = _parsparam(r,param,npar=3) + R1 = float(param[0]) + w1 = float(param[1]) + w2 = float(param[2]) + + R2 = R1 + w1 + R3 = R2 + w2 + + delta21 = R2**3 - R1**3 + q21 = delta21*_pbs(r,R1,R2) + delta31 = R3**3 - R1**3 + q31 = delta31*_pbs(r,R1,R3) + delta32 = R3**3 - R2**3 + q32 = delta32*_pbs(r,R2,R3) + + P = R1**3*q21 - R1**3*q31 + R2**3*q32 + P = P/(delta21*delta32) + + P = _normalize(r,P) + + return P #================================================================= + +# ================================================================= +@setmetadata( +parameters = ('Sphere radius','Shell thickness'), +units = ('nm','nm'), +lower = np.asarray([0.1, 0.1]), +upper = np.asarray([20, 20]), +start = np.asarray([1.5, 0.5])) @docstring() -def dd_shellsphere(*args): -#================================================================= +def dd_shellsphere(r,param): r""" Particles distributed on a sphere inside a spherical shell @@ -1029,36 +890,26 @@ def dd_shellsphere(*args): .. [1] D.R. Kattnig, D. Hinderberger, Analytical distance distributions in systems of spherical symmetry with applications to double electron-electron resonance, JMR, 230, 50-63, 2013 """ - def model(r,p): - # Compute the model distance distribution - R1 = float(p[0]) - w = float(p[1]) - R2 = R1 + w - P = _pbs(r,R1,R2) - - P = _normalize(r,P) - - return P - - if not args: - info = dict( - Parameters = ('Sphere radius','Shell thickness'), - Units = ('nm','nm'), - Lower = np.asarray([0.1, 0.1]), - Upper = np.asarray([20, 20]), - Start = np.asarray([1.5, 0.5]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=2) - P = model(r,p) - return P -#================================================================= + r,param = _parsparam(r,param,npar=2) + R1 = float(param[0]) + w = float(param[1]) + R2 = R1 + w + P = _pbs(r,R1,R2) + P = _normalize(r,P) + return P +# ================================================================= + + +# ================================================================= +@setmetadata( +parameters = ('Sphere radius','1st Shell thickness','2nd Shell thickness','Shell-Shell separation'), +units = ('nm','nm','nm','nm'), +lower = np.asarray([0.1, 0.1, 0.1, 0.1]), +upper = np.asarray([20, 20, 20, 2]), +start = np.asarray([0.75, 1, 1, 0.5])) @docstring() -def dd_shellvoidshell(*args): -#================================================================= +def dd_shellvoidshell(r,param): r""" Particles distributed on a spherical shell inside another spherical shell separated by a void @@ -1100,54 +951,45 @@ def dd_shellvoidshell(*args): .. [1] D.R. 
Kattnig, D. Hinderberger, Analytical distance distributions in systems of spherical symmetry with applications to double electron-electron resonance, JMR, 230, 50-63, 2013 """ - def model(r,p): - # Compute the model distance distribution - R1 = float(p[0]) - w1 = float(p[1]) - w2 = float(p[2]) - d = float(p[3]) - - R2 = R1 + w1 - R3 = R1 + w1 + w2 - R4 = R1 + w1 + w2 + d - - delta21 = R2**3 - R1**3 - delta31 = R3**3 - R1**3 - q31 = delta31*_pbs(r,R1,R3) - delta32 = R3**3 - R2**3 - q32 = delta32*_pbs(r,R2,R3) - delta41 = R4**3 - R1**3 - q41 = delta41*_pbs(r,R1,R4) - delta42 = R4**3 - R2**3 - q42 = delta42*_pbs(r,R2,R4) - delta43 = R4**3 - R3**3 - - P = (R1**3*(q31 - q41) + R2**3*(q42 - q32))/(delta43*delta21) - P = np.round(P,15) - - P = _normalize(r,P) - - return P - - if not args: - info = dict( - Parameters = ('Sphere radius','1st Shell thickness','2nd Shell thickness','Shell-Shell separation'), - Units = ('nm','nm','nm','nm'), - Lower = np.asarray([0.1, 0.1, 0.1, 0.1]), - Upper = np.asarray([20, 20, 20, 2]), - Start = np.asarray([0.75, 1, 1, 0.5]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=4) - P = model(r,p) - return P -#================================================================= + r,param = _parsparam(r,param,npar=4) + R1 = float(param[0]) + w1 = float(param[1]) + w2 = float(param[2]) + d = float(param[3]) + + R2 = R1 + w1 + R3 = R1 + w1 + w2 + R4 = R1 + w1 + w2 + d + + delta21 = R2**3 - R1**3 + delta31 = R3**3 - R1**3 + q31 = delta31*_pbs(r,R1,R3) + delta32 = R3**3 - R2**3 + q32 = delta32*_pbs(r,R2,R3) + delta41 = R4**3 - R1**3 + q41 = delta41*_pbs(r,R1,R4) + delta42 = R4**3 - R2**3 + q42 = delta42*_pbs(r,R2,R4) + delta43 = R4**3 - R3**3 + + P = (R1**3*(q31 - q41) + R2**3*(q42 - q32))/(delta43*delta21) + P = np.round(P,15) + + P = _normalize(r,P) + + return P +# ================================================================= + +# ================================================================= +@setmetadata( +parameters = ('Sphere radius','Shell thickness','Shell-Sphere separation'), +units = ('nm','nm','nm'), +lower = np.asarray([0.1, 0.1, 0.1]), +upper = np.asarray([20, 20, 20]), +start = np.asarray([1.5, 1, 0.5])) @docstring() -def dd_shellvoidsphere(*args): -#================================================================= +def dd_shellvoidsphere(r,param): r""" Particles distributed on a sphere inside a spherical shell separated by a void @@ -1187,45 +1029,36 @@ def dd_shellvoidsphere(*args): .. [1] D.R. Kattnig, D. 
Hinderberger, Analytical distance distributions in systems of spherical symmetry with applications to double electron-electron resonance, JMR, 230, 50-63, 2013 """ - def model(r,p): - # Compute the model distance distribution - R1 = float(p[0]) - w = float(p[1]) - d = float(p[2]) - - R2 = R1 + w - R3 = R1 + w + d - - delta21 = R2**3 - R1**3 - q21 = delta21*_pbs(r,R1,R2) - delta31 = R3**3 - R1**3 - q31 = delta31*_pbs(r,R1,R3) - delta32 = R3**3 - R2**3 - - P = (q31 - q21)/delta32 - P = np.round(P,15) - P = _normalize(r,P) - return P - - if not args: - info = dict( - Parameters = ('Sphere radius','Shell thickness','Shell-Sphere separation'), - Units = ('nm','nm','nm'), - Lower = np.asarray([0.1, 0.1, 0.1]), - Upper = np.asarray([20, 20, 20]), - Start = np.asarray([1.5, 1, 0.5]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=3) - P = model(r,p) - return P -#================================================================= + r,param = _parsparam(r,param,npar=3) + R1 = float(param[0]) + w = float(param[1]) + d = float(param[2]) + + R2 = R1 + w + R3 = R1 + w + d + + delta21 = R2**3 - R1**3 + q21 = delta21*_pbs(r,R1,R2) + delta31 = R3**3 - R1**3 + q31 = delta31*_pbs(r,R1,R3) + delta32 = R3**3 - R2**3 + + P = (q31 - q21)/delta32 + P = np.round(P,15) + P = _normalize(r,P) + return P +# ================================================================= + +# ================================================================= +@setmetadata( +parameters = ('Sphere radius'), +units = ('nm'), +lower = np.asarray([0.1]), +upper = np.asarray([20]), +start = np.asarray([2.5])) @docstring() -def dd_sphere(*args): -#================================================================= +def dd_sphere(r,param): r""" Particles distributed on a sphere @@ -1250,32 +1083,23 @@ def dd_sphere(*args): .. [1] D.R. Kattnig, D. 
Hinderberger, Analytical distance distributions in systems of spherical symmetry with applications to double electron-electron resonance, JMR, 230, 50-63, 2013 """ - def model(r,p): - # Compute the model distance distribution - R = float(p[0]) - P = _pb(r,R) - P = _normalize(r,P) - return P - - if not args: - info = dict( - Parameters = ('Sphere radius'), - Units = ('nm'), - Lower = np.asarray([0.1]), - Upper = np.asarray([20]), - Start = np.asarray([2.5]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=1) - P = model(r,p) - return P -#================================================================= + r,param = _parsparam(r,param,npar=1) + R = float(param[0]) + P = _pb(r,R) + P = _normalize(r,P) + return P +# ================================================================= + +# ================================================================= +@setmetadata( +parameters = ('Mode','Left width','Right width'), +units = ('nm','nm','nm'), +lower = np.asarray([1, 0.1, 0.1]), +upper = np.asarray([20, 20, 20]), +start = np.asarray([3.5, 1, 0.5])) @docstring() -def dd_triangle(*args): -#================================================================= +def dd_triangle(r,param): r""" Triangle distribution model @@ -1294,43 +1118,33 @@ def dd_triangle(*args): ``param[2]`` :math:`w_\mathrm{R}` 0.3 0.1 5 Right width (nm) ============== ======================== ============= ============= ============= ========================= """ - def model(r,p): - # Compute the model distance distribution - r0 = p[0] - wL = abs(p[1]) - wR = abs(p[2]) - rL = r0 - wL - rR = r0 + wR - idxL = (r >= r0-wL) & (r <= r0) - idxR = (r <= r0+wR) & (r >= r0) - P = np.zeros(len(r)) - if wL>0: - P[idxL] = (r[idxL]-rL)/wL/(wL+wR) - if wR>0: - P[idxR] = -(r[idxR]-rR)/wR/(wL+wR) - P = _normalize(r,P) - return P - - - if not args: - info = dict( - Parameters = ('Mode','Left width','Right width'), - Units = ('nm','nm','nm'), - Lower = np.asarray([1, 0.1, 0.1]), - Upper = np.asarray([20, 20, 20]), - Start = np.asarray([3.5, 1, 0.5]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=3) - P = model(r,p) - return P -#================================================================= + r,param = _parsparam(r,param,npar=3) + r0 = param[0] + wL = abs(param[1]) + wR = abs(param[2]) + rL = r0 - wL + rR = r0 + wR + idxL = (r >= r0-wL) & (r <= r0) + idxR = (r <= r0+wR) & (r >= r0) + P = np.zeros(len(r)) + if wL>0: + P[idxL] = (r[idxL]-rL)/wL/(wL+wR) + if wR>0: + P[idxR] = -(r[idxR]-rR)/wR/(wL+wR) + P = _normalize(r,P) + return P +# ================================================================= + +# ================================================================= +@setmetadata( +parameters = ('Left edge','Right edge'), +units = ('nm','nm'), +lower = np.asarray([0.1, 0.2]), +upper = np.asarray([6, 20]), +start = np.asarray([2.5, 3])) @docstring() -def dd_uniform(*args): -#================================================================= +def dd_uniform(r,param): r""" Uniform distribution model @@ -1348,32 +1162,16 @@ def dd_uniform(*args): ``param[1]`` :math:`r_\mathrm{R}` 3.0 0.2 20 Right edge (nm) ============== ======================== ============= ============= ============= ========================= """ - def model(r,p): - # Compute the model distance distribution - rL = min(abs(p)) - rR = max(abs(p)) - P = np.zeros(len(r)) - P[(r>=rL) & (r<=rR)] = 1 - P = _normalize(r,P) - return P - - if not args: - info = dict( - Parameters = ('Left edge','Right edge'), - Units = ('nm','nm'), - Lower 
= np.asarray([0.1, 0.2]), - Upper = np.asarray([6, 20]), - Start = np.asarray([2.5, 3]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=2) - P = model(r,p) - return P -#================================================================= - + r,param = _parsparam(r,param,npar=2) + rL = min(abs(param)) + rR = max(abs(param)) + P = np.zeros(len(r)) + P[(r>=rL) & (r<=rR)] = 1 + P = _normalize(r,P) + return P +# ================================================================= +# ----------------------------------------------------------------- def wlc(r,L,Lp): P = np.zeros(len(r)) @@ -1391,10 +1189,18 @@ def wlc(r,L,Lp): P[idx] = kappa/(4*np.pi*2*np.sqrt(np.pi))*((1/(kappa*(1 - rcrit))**(3/2)*np.exp(-(1 - 1/2)**2/(kappa*(1 - rcrit)))*(4*((1 - 1/2)/np.sqrt(kappa*(1-rcrit)))**2-2)) + 1/(kappa*(1 - rcrit))**(3/2)*np.exp(-(2 - 1/2)**2/(kappa*(1 - rcrit)))*(4*((2 - 1/2)/np.sqrt(kappa*(1-rcrit)))**2-2)) return P +# ----------------------------------------------------------------- + +# ================================================================= +@setmetadata( +parameters = ('Contour length','Persistence length'), +units = ('nm','nm'), +lower = np.asarray([1.5, 2]), +upper = np.asarray([20, 100]), +start = np.asarray([3.7, 10])) @docstring() -def dd_wormchain(*args): -#================================================================= +def dd_wormchain(r,param): r""" Worm-like chain model near the rigid limit @@ -1416,33 +1222,24 @@ def dd_wormchain(*args): Radial Distribution Function of Semiflexible Polymers Phys. Rev. Lett. 77(12), 2581-2584, 1996 """ - def model(r,p): - # Compute the model distance distribution - L = p[0] - Lp = p[1] - P = wlc(r,L,Lp) - P = _normalize(r,P) - return P - - if not args: - info = dict( - Parameters = ('Contour length','Persistence length'), - Units = ('nm','nm'), - Lower = np.asarray([1.5, 2]), - Upper = np.asarray([20, 100]), - Start = np.asarray([3.7, 10]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=2) - P = model(r,p) - return P + r,param = _parsparam(r,param,npar=2) + L = param[0] + Lp = param[1] + P = wlc(r,L,Lp) + P = _normalize(r,P) + return P #================================================================= + +# ================================================================= +@setmetadata( +parameters = ('Contour length','Persistence length','Gaussian standard deviation'), +units = ('nm','nm','nm'), +lower = np.asarray([1.5, 2, 0.001]), +upper = np.asarray([20, 100, 2]), +start = np.asarray([3.7, 10, 0.2])) @docstring() -def dd_wormgauss(*args): -#================================================================= +def dd_wormgauss(r,param): r""" Worm-like chain model near the rigid limit with Gaussian convolution @@ -1466,44 +1263,27 @@ def dd_wormgauss(*args): Phys. Rev. Lett. 
77(12), 2581-2584, 1996 """ - def model(r,p): - # Compute the model distance distribution - L = p[0] - Lp = p[1] - sigma = p[2] - P = wlc(r,L,Lp) - - # Compute Gaussian convolution window - idx = np.argmax(P) - gauss = np.exp(-((r - r[idx])/sigma)**2) - - # Convolution with size retention - P = np.convolve(gauss,P,mode='full') + r,param = _parsparam(r,param,npar=3) + L = param[0] + Lp = param[1] + sigma = param[2] + P = wlc(r,L,Lp) + + # Compute Gaussian convolution window + idx = np.argmax(P) + gauss = np.exp(-((r - r[idx])/sigma)**2) + + # Convolution with size retention + P = np.convolve(gauss,P,mode='full') - # Adjust new convoluted axis - idxconv = np.argmax(P) - rconv = np.linspace(min(r),max(r)*2,len(P)) - rconv = rconv - abs(r[idx] - rconv[idxconv]) + # Adjust new convoluted axis + idxconv = np.argmax(P) + rconv = np.linspace(min(r),max(r)*2,len(P)) + rconv = rconv - abs(r[idx] - rconv[idxconv]) - #Interpolate down to original axis - P = np.interp(r,rconv,P) - print(P.shape) - P = _normalize(r,P) + #Interpolate down to original axis + P = np.interp(r,rconv,P) + P = _normalize(r,P) - return P - - if not args: - info = dict( - Parameters = ('Contour length','Persistence length','Gaussian standard deviation'), - Units = ('nm','nm'), - Lower = np.asarray([1.5, 2, 0.001]), - Upper = np.asarray([20, 100, 2]), - Start = np.asarray([3.7, 10, 0.2]), - ModelFcn = model - ) - return info - else: - r,p = _parsargs(args,npar=3) - P = model(r,p) - return P + return P #================================================================= \ No newline at end of file diff --git a/deerlab/ex_models.py b/deerlab/ex_models.py index 9cb11c6e2..15585ebe7 100644 --- a/deerlab/ex_models.py +++ b/deerlab/ex_models.py @@ -10,15 +10,17 @@ docstr_header = lambda title,fcnstr: f""" {title} -If called without arguments, returns an ``info`` dictionary of model parameters and boundaries:: +The function takes a list or array of parameters and returns the calculated experiment dipolar pathways:: - info = {fcnstr}() + pathways = {fcnstr}(param) +The built-in information on the model can be accessed via its attributes:: -Otherwise the function returns to calculated experiment dipolar pathways:: - - pathways = {fcnstr}(param) - + {fcnstr}.parameters # String list of parameter names + {fcnstr}.units # String list of metric units of parameters + {fcnstr}.start # List of values used as start values during optimization + {fcnstr}.lower # List of values used as lower bounds during optimization + {fcnstr}.upper # List of values used as upper bounds during optimization Parameters ---------- @@ -27,15 +29,6 @@ Returns ------- -info : dict - Dictionary containing the built-in information of the model: - - * ``info['Parameters']`` - string list of parameter names - * ``info['Units']`` - string list of metric units of parameters - * ``info['Start']`` - list of values used as start values during optimization - * ``info['Lower']`` - list of values used as lower bounds during optimization - * ``info['Upper']`` - list of values used as upper bounds during optimization - * ``info['ModelFcn']`` - function used to calculate the model output pathways : ndarray Dipolar pathways of the experiment @@ -57,17 +50,39 @@ def _decorator(func): # ================================================================= -def _parsargs(param,npar): +# ================================================================= +def setmetadata(parameters,units,start,lower,upper): + """ + Decorator: Set model metadata as function attributes + """ + def 
_setmetadata(func): + func.parameters = parameters + func.units = units + func.start = start + func.lower = lower + func.upper = upper + return func + return _setmetadata +# ================================================================= + #================================================================= +def _parsargs(param,npar): param = np.atleast_1d(param) if len(param)!=npar: raise ValueError('This model requires ',npar,' parameters. A total of ',len(param),' where specified.') return param #================================================================= + +# ================================================================= +@setmetadata( +parameters = ['Modulation depth'], +units = [''], +start = np.asarray([0.3]), +lower = np.asarray([0]), +upper = np.asarray([1])) @docstring() -def ex_4pdeer(param=None): -# =================================================================== +def ex_4pdeer(param): r""" Single-pathway 4-pulse DEER experiment model @@ -91,33 +106,28 @@ def ex_4pdeer(param=None): ``param[0]`` :math:`\lambda` 0.3 0 1 Modulated pathway amplitude (modulation depth) ============== ================ ============= ============ ============ ================================================ """ - def model(param): - # Dipolar pathways - lam = param[0] - pathways = [ - [1-lam], - [lam, 0] - ] - return pathways - - if param is None: - info = dict( - Parameters = ['Modulation depth'], - Units = [''], - Start = np.asarray([0.3]), - Lower = np.asarray([0]), - Upper = np.asarray([1]), - ModelFcn = model - ) - return info - else: - param = _parsargs(param, npar=1) - return model(param) + param = _parsargs(param,npar=1) + + # Dipolar pathways + lam = param[0] + pathways = [ + [1-lam], + [lam, 0] + ] + return pathways # =================================================================== + +# ================================================================= +@setmetadata( +parameters = ['Amplitude of unmodulated components','Amplitude of 1st modulated pathway', + 'Amplitude of 2nd modulated pathway','Refocusing time of 2nd modulated pathway'], +units = ['','','','μs'], +start = np.asarray([0.7, 0.3, 0.1, 5]), +lower = np.asarray([0, 0, 0, 0]), +upper = np.asarray([1, 1, 1, 20])) @docstring() -def ex_ovl4pdeer(param=None): -# =================================================================== +def ex_ovl4pdeer(param): r""" 4-pulse DEER with band overlap experiment model @@ -146,36 +156,28 @@ def ex_ovl4pdeer(param=None): ``param[3]`` :math:`T_0^{(2)}` 5.0 0 20 2nd modulated pathway, refocusing time (μs) ============== ======================== ============= ============ ============ ================================================ """ - def model(param): - # Dipolar pathways - lam = param[[0,1,2]] - T0 = param[3] - pathways = [[] for _ in lam] - pathways[0] = [lam[0]] - pathways[1] = [lam[1], 0] - pathways[2] = [lam[2], T0] - return pathways - - if param is None: - info = dict( - Parameters = ['Amplitude of unmodulated components','Amplitude of 1st modulated pathway', - 'Amplitude of 2nd modulated pathway','Refocusing time of 2nd modulated pathway'], - Units = ['','','','μs'], - Start = np.asarray([0.7, 0.3, 0.1, 5]), - Lower = np.asarray([0, 0, 0, 0]), - Upper = np.asarray([1, 1, 1, 20]), - ModelFcn = model - ) - return info - else: - param = _parsargs(param, npar=4) - return model(param) + param = _parsargs(param,npar=4) + + # Dipolar pathways + lam = param[[0,1,2]] + T0 = param[3] + pathways = [[] for _ in lam] + pathways[0] = [lam[0]] + pathways[1] = [lam[1], 0] + pathways[2] = 
[lam[2], T0] + return pathways # =================================================================== +# ================================================================= +@setmetadata( +parameters = ['Amplitude of unmodulated components','Amplitude of 1st modulated pathway','Amplitude of 2nd modulated pathway','Refocusing time of 2nd modulated pathway'], +units = ['','','','μs'], +start = np.asarray([0.4, 0.4, 0.2,5]), +lower = np.asarray([0, 0, 0, 0]), +upper = np.asarray([1, 1, 1, 20])) @docstring() -def ex_5pdeer(param=None): -# =================================================================== +def ex_5pdeer(param): r""" 5-pulse DEER experiment model @@ -203,36 +205,30 @@ def ex_5pdeer(param=None): ``param[3]`` :math:`T_0^{(2)}` 5.0 0 20 2nd modulated pathway, refocusing time (μs) ============== ======================== ============= ============ ============ ================================================ """ - def model(param): - # Dipolar pathways - lam = param[[0,1,2]] - T0 = param[3] - pathways = [[] for _ in lam] - pathways[0] = [lam[0]] - pathways[1] = [lam[1], 0] - pathways[2] = [lam[2], T0] - return pathways - - if param is None: - info = dict( - Parameters = ['Amplitude of unmodulated components','Amplitude of 1st modulated pathway','Amplitude of 2nd modulated pathway','Refocusing time of 2nd modulated pathway'], - Units = ['','','','μs'], - Start = np.asarray([0.4, 0.4, 0.2,5]), - Lower = np.asarray([0, 0, 0, 0]), - Upper = np.asarray([1, 1, 1, 20]), - ModelFcn = model - ) - return info - else: - param = _parsargs(param, npar=4) - return model(param) + param = _parsargs(param,npar=4) + + # Dipolar pathways + lam = param[[0,1,2]] + T0 = param[3] + pathways = [[] for _ in lam] + pathways[0] = [lam[0]] + pathways[1] = [lam[1], 0] + pathways[2] = [lam[2], T0] + return pathways # =================================================================== - +# ================================================================= +@setmetadata( +parameters = ['Amplitude of unmodulated components','Amplitude of 1st modulated pathway', + 'Amplitude of 2nd modulated pathway','Amplitude of 3rd modulated pathway', + 'Refocusing time of 2nd modulated pathway','Refocusing time of 3rd modulated pathway'], +units = ['','','','','μs','μs'], +start = np.asarray([0.3, 0.5, 0.3, 0.2, 1.5, 3.5]), +lower = np.asarray([0, 0, 0, 0, 0, 0]), +upper = np.asarray([1, 1, 1, 1, 20, 20])) @docstring() -def ex_7pdeer(param=None): -# =================================================================== +def ex_7pdeer(param): r""" 7-pulse DEER experiment model @@ -263,39 +259,29 @@ def ex_7pdeer(param=None): ``param[5]`` :math:`T_0^{(3)}` 3.5 0 20 3rd modulated pathway, refocusing time (μs) ============== ======================== ============= ============ ============ ================================================ """ - def model(param): - # Dipolar pathways - lam = param[[0,1,2,3]] - T0 = param[[4,5]] - pathways = [[] for _ in lam] - pathways[0] = [lam[0]] - pathways[1] = [lam[1], 0] - pathways[2] = [lam[2], T0[0]] - pathways[3] = [lam[3], T0[1]] - return pathways - - if param is None: - info = dict( - Parameters = ['Amplitude of unmodulated components','Amplitude of 1st modulated pathway', - 'Amplitude of 2nd modulated pathway','Amplitude of 3rd modulated pathway', - 'Refocusing time of 2nd modulated pathway','Refocusing time of 3rd modulated pathway'], - Units = ['','','','','μs','μs'], - Start = np.asarray([0.3, 0.5, 0.3, 0.2, 1.5, 3.5]), - Lower = np.asarray([0, 0, 0, 0, 0, 0]), - Upper = np.asarray([1, 1, 1, 1, 
20, 20]), - ModelFcn = model - ) - return info - else: - param = _parsargs(param, npar=6) - return model(param) - + param = _parsargs(param,npar=6) + + # Dipolar pathways + lam = param[[0,1,2,3]] + T0 = param[[4,5]] + pathways = [[] for _ in lam] + pathways[0] = [lam[0]] + pathways[1] = [lam[1], 0] + pathways[2] = [lam[2], T0[0]] + pathways[3] = [lam[3], T0[1]] + return pathways # =================================================================== +# ================================================================= +@setmetadata( +parameters = ['Amplitude of unmodulated contribution','Amplitude of 1st modulated pathway'], +units = ['',''], +start = np.asarray([0.3, 0.5]), +lower = np.asarray([0, 0]), +upper = np.asarray([1, 1])) @docstring() -def ex_ridme1(param=None): -# =================================================================== +def ex_ridme1(param): r""" RIDME experiment model (spin S=1/2) @@ -317,34 +303,27 @@ def ex_ridme1(param=None): ``param[1]`` :math:`\lambda_1` 0.3 0 1 Amplitude of 1st harmonic pathway ============== =================== ============= ============ ============ ================================================ """ - def model(param): - # Dipolar pathways - lam = param.copy() - pathways = [[] for _ in lam] - pathways[0] = [lam[0]] - pathways[1] = [lam[1], 0, 1] - return pathways - - if param is None: - info = dict( - Parameters = ['Amplitude of unmodulated contribution','Amplitude of 1st modulated pathway'], - Units = ['',''], - Start = np.asarray([0.3, 0.5]), - Lower = np.asarray([0, 0]), - Upper = np.asarray([1, 1]), - ModelFcn = model - ) - return info - else: - param = _parsargs(param, npar=2) - return model(param) - + param = _parsargs(param, npar=2) + + # Dipolar pathways + lam = param.copy() + pathways = [[] for _ in lam] + pathways[0] = [lam[0]] + pathways[1] = [lam[1], 0, 1] + return pathways # =================================================================== +# ================================================================= +@setmetadata( +parameters = ['Amplitude of unmodulated contribution','Amplitude of 1st harmonic pathway', + 'Amplitude of 2nd harmonic pathway','Amplitude of 3rd harmonic pathway'], +units = ['','','',''], +start = np.asarray([0.3, 0.5, 0.3, 0.2]), +lower = np.asarray([0, 0, 0, 0]), +upper = np.asarray([1, 1, 1, 1])) @docstring() -def ex_ridme3(param=None): -# =================================================================== +def ex_ridme3(param): r""" RIDME experiment model (spin S=3/2) @@ -369,37 +348,30 @@ def ex_ridme3(param=None): ``param[3]`` :math:`\lambda_3` 0.2 0 1 Amplitude of 3rd harmonic pathway ============== =================== ============= ============ ============ ================================================ """ - def model(param): - # Dipolar pathways - lam = param.copy() - pathways = [[] for _ in lam] - pathways[0] = [lam[0]] - pathways[1] = [lam[1], 0, 1] - pathways[2] = [lam[2], 0, 2] - pathways[3] = [lam[3], 0, 3] - return pathways - - if param is None: - info = dict( - Parameters = ['Amplitude of unmodulated contribution','Amplitude of 1st harmonic pathway', - 'Amplitude of 2nd harmonic pathway','Amplitude of 3rd harmonic pathway'], - Units = ['','','',''], - Start = np.asarray([0.3, 0.5, 0.3, 0.2]), - Lower = np.asarray([0, 0, 0, 0]), - Upper = np.asarray([1, 1, 1, 1]), - ModelFcn = model - ) - return info - else: - param = _parsargs(param, npar=4) - return model(param) - + param = _parsargs(param,npar=4) + + # Dipolar pathways + lam = param.copy() + pathways = [[] for _ in lam] + pathways[0] = 
[lam[0]] + pathways[1] = [lam[1], 0, 1] + pathways[2] = [lam[2], 0, 2] + pathways[3] = [lam[3], 0, 3] + return pathways # =================================================================== +# ================================================================= +@setmetadata( +parameters = ['Amplitude of unmodulated contribution','Amplitude of 1st harmonic pathway', + 'Amplitude of 2nd harmonic pathway','Amplitude of 3rd harmonic pathway', + 'Amplitude of 4th harmonic pathway','Amplitude of 5th harmonic pathway'], +units = ['','','','','',''], +start = np.asarray([0.3, 0.5, 0.3, 0.2,0.1,0.05]), +lower = np.asarray([0, 0, 0, 0, 0, 0]), +upper = np.asarray([1, 1, 1, 1, 1, 1])) @docstring() -def ex_ridme5(param=None): -# =================================================================== +def ex_ridme5(param): r""" RIDME experiment model (spin S=5/2) @@ -425,40 +397,33 @@ def ex_ridme5(param=None): ``param[5]`` :math:`\lambda_5` 0.05 0 1 Amplitude of 5th harmonic pathway ============== =================== ============= ============ ============ ================================================ """ - def model(param): - # Dipolar pathways - lam = param.copy() - pathways = [[] for _ in lam] - pathways[0] = [lam[0]] - pathways[1] = [lam[1], 0, 1] - pathways[2] = [lam[2], 0, 2] - pathways[3] = [lam[3], 0, 3] - pathways[4] = [lam[4], 0, 4] - pathways[5] = [lam[5], 0, 5] - return pathways - - if param is None: - info = dict( - Parameters = ['Amplitude of unmodulated contribution','Amplitude of 1st harmonic pathway', - 'Amplitude of 2nd harmonic pathway','Amplitude of 3rd harmonic pathway', - 'Amplitude of 4th harmonic pathway','Amplitude of 5th harmonic pathway'], - Units = ['','','','','',''], - Start = np.asarray([0.3, 0.5, 0.3, 0.2,0.1,0.05]), - Lower = np.asarray([0, 0, 0, 0, 0, 0]), - Upper = np.asarray([1, 1, 1, 1, 1, 1]), - ModelFcn = model - ) - return info - else: - param = _parsargs(param, npar=6) - return model(param) - + param = _parsargs(param, npar=6) + + # Dipolar pathways + lam = param.copy() + pathways = [[] for _ in lam] + pathways[0] = [lam[0]] + pathways[1] = [lam[1], 0, 1] + pathways[2] = [lam[2], 0, 2] + pathways[3] = [lam[3], 0, 3] + pathways[4] = [lam[4], 0, 4] + pathways[5] = [lam[5], 0, 5] + return pathways # =================================================================== +# ================================================================= +@setmetadata( +parameters = ['Amplitude of unmodulated contribution','Amplitude of 1st harmonic pathway', + 'Amplitude of 2nd harmonic pathway','Amplitude of 3rd harmonic pathway', + 'Amplitude of 4th harmonic pathway','Amplitude of 5th harmonic pathway', + 'Amplitude of 6th harmonic pathway','Amplitude of 7th harmonic pathway'], +units = ['','','','','','','',''], +start = np.asarray([0.3, 0.5, 0.3, 0.2,0.1,0.05,0.02,0.01]), +lower = np.asarray([0, 0, 0, 0, 0, 0, 0, 0]), +upper = np.asarray([1, 1, 1, 1, 1, 1, 1, 1])) @docstring() -def ex_ridme7(param=None): -# =================================================================== +def ex_ridme7(param): r""" RIDME experiment model (spin S=7/2) @@ -486,38 +451,19 @@ def ex_ridme7(param=None): ``param[6]`` :math:`\lambda_6` 0.02 0 1 Amplitude of 6th harmonic pathway ``param[7]`` :math:`\lambda_7` 0.01 0 1 Amplitude of 7th harmonic pathway ============== =================== ============= ============ ============ ================================================ - - """ - def model(param): - # Dipolar pathways - lam = param.copy() - pathways = [[] for _ in lam] - pathways[0] = [lam[0]] - 
pathways[1] = [lam[1], 0, 1] - pathways[2] = [lam[2], 0, 2] - pathways[3] = [lam[3], 0, 3] - pathways[4] = [lam[4], 0, 4] - pathways[5] = [lam[5], 0, 5] - pathways[6] = [lam[6], 0, 6] - pathways[7] = [lam[7], 0, 7] - return pathways - - if param is None: - info = dict( - Parameters = ['Amplitude of unmodulated contribution','Amplitude of 1st harmonic pathway', - 'Amplitude of 2nd harmonic pathway','Amplitude of 3rd harmonic pathway', - 'Amplitude of 4th harmonic pathway','Amplitude of 5th harmonic pathway', - 'Amplitude of 6th harmonic pathway','Amplitude of 7th harmonic pathway'], - Units = ['','','','','','','',''], - Start = np.asarray([0.3, 0.5, 0.3, 0.2,0.1,0.05,0.02,0.01]), - Lower = np.asarray([0, 0, 0, 0, 0, 0, 0, 0]), - Upper = np.asarray([1, 1, 1, 1, 1, 1, 1, 1]), - ModelFcn = model - ) - return info - else: - param = _parsargs(param, npar=8) - return model(param) - + param = _parsargs(param,npar=8) + + # Dipolar pathways + lam = param.copy() + pathways = [[] for _ in lam] + pathways[0] = [lam[0]] + pathways[1] = [lam[1], 0, 1] + pathways[2] = [lam[2], 0, 2] + pathways[3] = [lam[3], 0, 3] + pathways[4] = [lam[4], 0, 4] + pathways[5] = [lam[5], 0, 5] + pathways[6] = [lam[6], 0, 6] + pathways[7] = [lam[7], 0, 7] + return pathways # =================================================================== \ No newline at end of file diff --git a/deerlab/fitmodel.py b/deerlab/fitmodel.py index 6da3a6bbc..2aa49a9ac 100644 --- a/deerlab/fitmodel.py +++ b/deerlab/fitmodel.py @@ -269,7 +269,7 @@ def fitmodel(Vexp, t, r, dd_model='P', bg_model=bg_hom3d, ex_model=ex_4pdeer, parfreeDistribution = dd_model == 'P' # Check if P(r) is non-parametric model includeForeground = np.array(dd_model is not None) # Check if there is a foreground or just background in the model - par0_dd, lower_dd, upper_dd, N_dd, dd_fcn = _getmodelparams(dd_model) # Get model built-in info + par0_dd, lower_dd, upper_dd, N_dd = _getmodelparams(dd_model) # Get model built-in info # Catch nonsensical case if includeForeground and not parametricDistribution and not parfreeDistribution: @@ -280,7 +280,7 @@ def fitmodel(Vexp, t, r, dd_model='P', bg_model=bg_hom3d, ex_model=ex_4pdeer, isparamodel = np.array([type(model) is types.FunctionType for model in bg_model]) # Check if B(t) is a parametric model includeBackground = np.array([model is not None for model in bg_model]) # Check if model includes B(t) or not - par0_bg, lower_bg, upper_bg, N_bg, bg_fcn = _getmodelparams(bg_model) # Get model built-in info + par0_bg, lower_bg, upper_bg, N_bg = _getmodelparams(bg_model) # Get model built-in info # Check whether the background model is a physical or phenomenological model isphenomenological = [_iserror(model,t,par0,1) for model,par0 in zip(bg_model,par0_bg)] @@ -294,7 +294,7 @@ def fitmodel(Vexp, t, r, dd_model='P', bg_model=bg_hom3d, ex_model=ex_4pdeer, isparamodel = np.array([type(model) is types.FunctionType for model in ex_model]) # Check if experiment is a parametric model includeExperiment = np.array([model is not None for model in ex_model]) # Check whether to include experiment in the model - par0_ex, lower_ex, upper_ex, N_ex, ex_fcn = _getmodelparams(ex_model) # Get model built-in info + par0_ex, lower_ex, upper_ex, N_ex = _getmodelparams(ex_model) # Get model built-in info # Catch nonsensical situations if np.any(~isparamodel & includeExperiment): @@ -335,15 +335,15 @@ def multiPathwayModel(par): # Prepare background basis function if includeBackground[iSignal]: if isphenomenological[iSignal]: - Bfcn = lambda t: 
bg_fcn[iSignal](t,bg_par) + Bfcn = lambda t: bg_model[iSignal](t,bg_par) else: - Bfcn = lambda t,lam: bg_fcn[iSignal](t,bg_par,lam) + Bfcn = lambda t,lam: bg_model[iSignal](t,bg_par,lam) else: Bfcn = lambda _,__: np.ones_like(Vexp[iSignal]) # Get pathway information if includeExperiment[iSignal]: - pathways = ex_fcn[iSignal](ex_par) + pathways = ex_model[iSignal](ex_par) else: pathways = [[1,0]] @@ -587,7 +587,7 @@ def scale_constraint(nonlinpar): penalty = np.zeros(nSignals) for i in range(nSignals): ex_par = nonlinpar[exidx[i]] - pathways = ex_fcn[i](ex_par) + pathways = ex_model[i](ex_par) lams = [pathway[0] for pathway in pathways] if np.sum(lams)<1 or ex_model[i].__name__=='ex_4pdeer': penalty[i] = 0 @@ -866,26 +866,23 @@ def _getmodelparams(models): if type(models) is not list: models = [models] - par0,lo,up,N,modelfcn = ([],[],[],[],[]) + par0,lo,up,N = ([],[],[],[]) for model in models: if type(model) is types.FunctionType: - info = model() - par0.append(info['Start']) - lo.append(info['Lower']) - up.append(info['Upper']) - modelfcn.append(info['ModelFcn']) + par0.append(model.start) + lo.append(model.lower) + up.append(model.upper) else: par0.append([]) lo.append([]) up.append([]) - modelfcn.append([]) N.append(len(par0[-1])) par0 = np.concatenate(par0) lo = np.concatenate(lo) up = np.concatenate(up) N = np.atleast_1d(N) - return par0,lo,up,N,modelfcn + return par0,lo,up,N # ============================================================================== def _parcombine(p,d): diff --git a/deerlab/fitmultimodel.py b/deerlab/fitmultimodel.py index 8a46f6da9..ead198540 100644 --- a/deerlab/fitmultimodel.py +++ b/deerlab/fitmultimodel.py @@ -216,14 +216,12 @@ def Kmodel(Kpar): Kmodel = lambda _: K # Extract information about the model - info = model() - nparam = len(info['Start']) + nparam = len(model.start) if lb is None: - lb = info['Lower'] + lb = model.lower if ub is None: - ub = info['Upper'] - paramnames = info['Parameters'] - modelfcn = info['ModelFcn'] + ub = model.upper + paramnames = model.parameters if lbK is None: lbK = [] @@ -263,7 +261,7 @@ def nonlinmodel(par,Nmodels): subset = np.arange(paramsused, paramsused+nparam) paramsused = paramsused + nparam # Get basis functions - Pbasis = modelfcn(r,par[subset]) + Pbasis = model(r,par[subset]) # Combine all non-linear functions into one Knonlin[:,iModel] = K@Pbasis return Knonlin @@ -284,7 +282,7 @@ def Pmodel(nlinpar,linpar): subset = np.arange(paramsused, paramsused+nparam) paramsused = paramsused + nparam # Add basis functions - Pfit = Pfit + amp*modelfcn(r,nlinpar[subset]) + Pfit = Pfit + amp*model(r,nlinpar[subset]) return Pfit #=============================================================================== diff --git a/deerlab/mixmodels.py b/deerlab/mixmodels.py index f58805170..670fe424d 100644 --- a/deerlab/mixmodels.py +++ b/deerlab/mixmodels.py @@ -44,17 +44,16 @@ def mixmodels(*models): Info = dict(Parameters=[],Units=[],Start=[],Lower=[],Upper=[]) pidx = [] pidx_amp = [] - for i in range(nModels): - info = models[i]() - nparam = len(info['Start']) + for i,model in enumerate(models): + nparam = len(model.start) pidx.append(idx + np.arange(0,nparam)) idx = idx + nparam for j in range(nparam): - Info['Parameters'].append(f'Model {i+1}: {info["Parameters"][j]}') - Info['Units'].append(info['Units'][j]) - Info['Lower'].append(info['Lower'][j]) - Info['Upper'].append(info['Upper'][j]) - Info['Start'].append(info['Start'][j]) + Info['Parameters'].append(f'Model {i+1}: {model.parameters[j]}') + 
Info['Units'].append(model.units[j]) + Info['Lower'].append(model.lower[j]) + Info['Upper'].append(model.upper[j]) + Info['Start'].append(model.start[j]) # Add amplitudes for each model Info['Parameters'].append(f'Model {i+1}: Amplitude') @@ -70,22 +69,37 @@ def mixmodels(*models): Info['Upper'] = np.asarray(Info['Upper']) Info['Start'] = np.asarray(Info['Start']) - def mixedFunction(*args): - # ======================================================================= + + # ================================================================= + def setmetadata(parameters,units,start,lower,upper): + """ + Decorator: Set model metadata as function attributes + """ + def _setmetadata(func): + func.parameters = parameters + func.units = units + func.start = start + func.lower = lower + func.upper = upper + return func + return _setmetadata + # ================================================================= + + + # ================================================================= + @setmetadata( + parameters = Info['Parameters'], + units = Info['Units'], + start = Info['Start'], + lower = Info['Lower'], + upper = Info['Upper']) + def mixedFunction(ax, params): """ Mixed model function handle --------------------------- Function to allow request of information structure or model values """ - if not args: - return Info - - if len(args)<2: - raise KeyError('At least two input arguments are required.') - elif len(args)>3: - raise KeyError('Only two input arguments are allowed.') - - ax, params = args + params = np.atleast_1d(params) evaled = 0 for k in range(nModels): diff --git a/test/test_bgmodels.py b/test/test_bgmodels.py index cb0bd6390..c173dd7f6 100644 --- a/test/test_bgmodels.py +++ b/test/test_bgmodels.py @@ -7,12 +7,11 @@ def assert_bgmodel(model,Bref,physical=False): t = np.linspace(-5,5,500) # Extract model information - info = model() - par0 = info['Start'] - lower = info['Lower'] - upper = info['Upper'] - paramnames = info['Parameters'] - units = info['Units'] + par0 = model.start + lower = model.lower + upper = model.upper + paramnames = model.parameters + units = model.units # Calculate under different conditions B1 = model(t,par0) diff --git a/test/test_ddmodels.py b/test/test_ddmodels.py index dcabfb791..a11ab7c6f 100644 --- a/test/test_ddmodels.py +++ b/test/test_ddmodels.py @@ -9,10 +9,9 @@ def assert_ddmodel(model): rnus = np.sqrt(np.linspace(1.5,6**2,500)) # Extract model information - info = model() - par0 = info['Start'] - lower = info['Lower'] - upper = info['Upper'] + par0 = model.start + lower = model.lower + upper = model.upper # Calculate under different conditions P1 = model(r,par0) diff --git a/test/test_fitmodel.py b/test/test_fitmodel.py index 595a21b27..45fa1da32 100644 --- a/test/test_fitmodel.py +++ b/test/test_fitmodel.py @@ -16,8 +16,7 @@ def assert_experiment_model(model): r = np.linspace(2,6,50) P = dd_gauss(r,[4.5, 0.25]) - info = model() - parIn = info['Start'] + parIn = model.start pathways = model(parIn) kappa = 0.4 @@ -236,8 +235,7 @@ def test_global_4pdeer(): r = np.linspace(2,6,90) P = dd_gauss(r,[4.5, 0.3]) - info = ex_4pdeer() - parIn = info['Start'] + parIn = ex_4pdeer.start pathways = ex_4pdeer(parIn) kappa = 0.4 @@ -340,8 +338,7 @@ def assert_confidence_intervals(pci50,pci95,pfit,lb,ub): r = np.linspace(2,6,40) P = ddmodel(r,[4.5, 0.25]) -info = exmodel() -parIn = info['Start'] +parIn = exmodel.start pathways = exmodel(parIn) kappa = 0.4 @@ -357,22 +354,22 @@ def assert_confidence_intervals(pci50,pci95,pfit,lb,ub): def assert_confinter_param(subset): 
#---------------------------------------------------------------------- if subset == 'ex': - info = exmodel() + model = exmodel pfit = fit.exparam ci50 = fit.exparamUncert.ci(50) ci95 = fit.exparamUncert.ci(95) if subset == 'dd': - info = ddmodel() + model = ddmodel pfit = fit.ddparam ci50 = fit.ddparamUncert.ci(50) ci95 = fit.ddparamUncert.ci(95) if subset == 'bg': - info = bgmodel() + model = bgmodel pfit = fit.bgparam ci50 = fit.bgparamUncert.ci(50) ci95 = fit.bgparamUncert.ci(95) - lb = info['Lower'] - ub = info['Upper'] + lb = model.lower + ub = model.upper assert_confidence_intervals(ci50,ci95,pfit,lb,ub) #---------------------------------------------------------------------- @@ -403,8 +400,7 @@ def test_confinter_ddparam(): r = np.linspace(2,6,40) P = dd_gauss(r,[4.5, 0.25]) -info = exmodel() -parIn = info['Start'] +parIn = exmodel.start pathways = exmodel(parIn) kappa = 0.4 @@ -530,8 +526,7 @@ def test_global_scale_4pdeer(): r = np.linspace(2,6,90) P = dd_gauss(r,[4.5, 0.25]) - info = ex_4pdeer() - parIn = info['Start'] + parIn = ex_4pdeer.start pathways = ex_4pdeer(parIn) kappa = 0.4 @@ -689,8 +684,7 @@ def test_cost_value(): r = np.linspace(2,6,40) P = dd_gauss(r,[4.5, 0.25]) - info = exmodel() - parIn = info['Start'] + parIn = exmodel.start pathways = exmodel(parIn) kappa = 0.4 diff --git a/test/test_fitparamodel.py b/test/test_fitparamodel.py index b017aec07..19bfd038b 100644 --- a/test/test_fitparamodel.py +++ b/test/test_fitparamodel.py @@ -38,10 +38,9 @@ def test_rice(): K = dipolarkernel(t,r) V = K@P - info = dd_rice() - par0 = info['Start'] - lb = info['Lower'] - ub = info['Upper'] + par0 = dd_rice.start + lb = dd_rice.lower + ub = dd_rice.upper model = lambda p: K@dd_rice(r,p) fit = fitparamodel(V,model,par0,lb,ub)
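For reference, a minimal usage sketch of the interface after this patch, mirroring the calls exercised in the test files above. The attribute names (``start``, ``lower``, ``upper``, ``parameters``) come from the ``setmetadata`` decorator introduced in this patch; the ``import deerlab as dl`` alias and the axis values are illustrative assumptions and not part of the change::

    import numpy as np
    import deerlab as dl  # assumed public namespace for the functions used in the tests

    t = np.linspace(0, 3, 200)   # illustrative time axis (µs)
    r = np.linspace(2, 6, 100)   # illustrative distance axis (nm)

    # Model metadata is now read from function attributes set by @setmetadata
    par0 = dl.dd_rice.start        # start values
    lb = dl.dd_rice.lower          # lower bounds
    ub = dl.dd_rice.upper          # upper bounds
    names = dl.dd_rice.parameters  # parameter names

    # Distance models are evaluated directly as model(r,param);
    # experiment models as model(param), returning the dipolar pathways
    P = dl.dd_rice(r, par0)
    pathways = dl.ex_4pdeer(dl.ex_4pdeer.start)

    # Same pattern as test_rice: simulate a signal and fit the parametric model
    K = dl.dipolarkernel(t, r)
    V = K @ P
    Vmodel = lambda p: K @ dl.dd_rice(r, p)
    fit = dl.fitparamodel(V, Vmodel, par0, lb, ub)

The fit functions follow the same pattern internally: ``_getmodelparams`` in ``fitmodel.py`` now collects ``model.start``, ``model.lower`` and ``model.upper`` and calls the model function directly, instead of retrieving a ``ModelFcn`` entry from the old ``info`` dictionary.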