less print statements in the functions
vcantarella committed Feb 6, 2024
1 parent cf297b2 commit 31b702d
Showing 4 changed files with 611 additions and 74 deletions.
504 changes: 504 additions & 0 deletions examples/ammer/ammer_notebook.ipynb
@@ -0,0 +1,504 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Modelling the Tufa layer in the Ammer Valley Quaternary floodplain stratigraphy\n",
"\n",
"This notebook explain step-by-step how to create a object based sedimentary structure model using HyVR and the python environment. The model will represent the tufa layer in the Quaternary floodplain sediments of the Ammer Valler, Tuebingen, Germany."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Packages\n",
"\n",
"Besides HyVR, we use numpy for numerical computing. Since we are interested in using this model for posterior flow simulations, we will use the capabilities in flopy, specially for handling the grid and exporting VTKs (3D render models)."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"from hyvr.tools import ferguson_curve # used for the channel curve creation\n",
"from hyvr import channel # channel object creation\n",
"from hyvr import trough # trough creation\n",
"import scipy # general scientific calculations\n",
"import flopy # our modelling interface\n",
"import numpy as np # general numerical functions and array manipulation"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Grid/Model creation\n",
"\n",
"HyVR should work on any structured grid. One example would be creating a grid with `np.meshgrid`, the numpy function for grids. However, we are interested in flow simulations, and MODFLOW is the standard. The python interface, flopy has grid creation capabilities that can be easily translated to MODFLOW grids, thus we use that for our grid creation"
]
},
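{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a brief aside, the cell below is a minimal sketch (not used in the rest of the notebook) of the `np.meshgrid` alternative mentioned above; the names `xc`, `yc`, `zc`, `Xg`, `Yg`, `Zg` are illustrative only."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: cell-center coordinates for a regular structured grid built with\n",
"# np.meshgrid, assuming the same dimensions as the flopy grid created below.\n",
"Lx, Ly, H = 2000, 600, 7  # domain size [m]\n",
"delx, dely, delz = 1.5, 1.5, 0.2  # cell sizes [m]\n",
"xc = np.arange(delx / 2, Lx, delx)  # cell-center x coordinates\n",
"yc = np.arange(dely / 2, Ly, dely)  # cell-center y coordinates\n",
"zc = np.arange(H - delz / 2, 0, -delz)  # cell-center z, top to bottom\n",
"# indexing=\"ij\" gives arrays of shape (nlay, nrow, ncol), like flopy below\n",
"Zg, Yg, Xg = np.meshgrid(zc, yc, xc, indexing=\"ij\")"
]
},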
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# Model creation:\n",
"name = \"ammer_V0602\"\n",
"ws = \".\"\n",
"sim = flopy.mf6.MFSimulation(\n",
" sim_name=name,\n",
" exe_name=\"mf6\",\n",
" version=\"mf6\",\n",
" sim_ws=ws,\n",
")\n",
"# Nam file\n",
"model_nam_file = \"{}.nam\".format(name)\n",
"# Groundwater flow object:\n",
"gwf = flopy.mf6.ModflowGwf(\n",
" sim,\n",
" modelname=name,\n",
" model_nam_file=model_nam_file,\n",
" save_flows=True,\n",
")\n",
"# Grid properties:\n",
"Lx = 2000 # problem lenght [m]\n",
"Ly = 600 # problem width [m]\n",
"H = 7 # aquifer height [m]\n",
"delx = 1.5 # block size x direction\n",
"dely = 1.5 # block size y direction\n",
"delz = 0.2 # block size z direction\n",
"nlay = int(H / delz)\n",
"ncol = int(Lx / delx) # number of columns\n",
"nrow = int(Ly / dely) # number of layers\n",
"\n",
"# Flopy Discretizetion Objects (DIS)\n",
"dis = flopy.mf6.ModflowGwfdis(\n",
" gwf,\n",
" xorigin=0.0,\n",
" yorigin=0.0,\n",
" nlay=nlay,\n",
" nrow=nrow,\n",
" ncol=ncol,\n",
" delr=dely,\n",
" delc=delx,\n",
" top=7.0,\n",
" botm=np.arange(H - delz, 0 - delz, -delz),\n",
")\n",
"\n",
"# Node property flow\n",
"k = 1e-5 # Model conductivity in m/s\n",
"npf = flopy.mf6.ModflowGwfnpf(\n",
" gwf,\n",
" icelltype=0, # This we define the model as confined\n",
" k=k,\n",
")\n",
"\n",
"# Acessing the grid\n",
"grid = gwf.modelgrid\n",
"\n",
"# cell centers\n",
"centers = grid.xyzcellcenters\n",
"\n",
"X = centers[0]\n",
"Y = centers[1]\n",
"Z = centers[2]\n",
"\n",
"# broadcasting the X, Y to the same shape as Z (full grid shape)\n",
"X = np.broadcast_to(X, Z.shape)\n",
"Y = np.broadcast_to(Y, Z.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Modelling Sedimentary Structures\n",
"\n",
"We base the framework on the original work from Bennett et al (2018), which has written the first version of HyVR. The framework is hierarchical, meaning that structures can be organized in different scales depending on the context, but that they may represent complex sedimentary archictectures.\n",
"\n",
"From the analysis of the facies present in the tufa we came up with an achitecture formed by the following elements:\n",
"\n",
"The tufa is likely transported from upstream. Seepage from supersaturated groundwater from the carbonate formation intersects the ammer river upstream and probably caused the precipitation of tufa particles in association with adequate flora in a wetland-like environment. Then river water would carry this sediment downstream along with organic matter (often preserved as plant remains), and deposit this material in the Ammer floodplain. We see different facies with varying amounts of tufa and organic matter, and varying composition of organic matter. Likely the system stabilized for periods of time, giving structure to different local wetland environments. Depending on the stability and preservation characteristics peat lenses would form, and the river channel shape would be preserved as gravel deposits. During sediment inflow periods, continuous input of sedimentation would make the river shape unstable, meaning it would not preserve its shape. The wetland environments would then receive tufa sedimentation and deposit mixes of tufa clasts and phytoclasts and organic matter. We think of this period as a succession of discontinuous lenses which are elongated in the direction of flow of different facies associated with different local wetland environments reworked by transport and deposition of external tufa particles.\n",
"\n",
"Therefore, we can think of the system as an aggradational sequence of lenses of different mix composition between tufas and phytoclasts, organic particles, which comprises different facies recorded in the sedimentary analysis. At the end of the sequence we would have the reduction of sedimentary the sedimentary load, leading to a stable configuration where peat lenses would be preserved and the preservation of channel features. Upon the next increase of sedimentary load, the channel features would be preferentially filled with gravel particles, while the peat lenses then would be buried, and the sequence then repeats itself.\n",
"\n",
"The algorithm is organized as such:\n",
"\n",
"1. Define the sequence thicknesses (sampled from a distribution)\n",
"2. Over the thickness t:\n",
" 2.1. Iterate over each facies f associated with the aggradation period:\n",
" 2.1.1. generate lens of thickness t at a randomly sampled location and with reasonably chosen dimensions. Assign it to facies f\n",
" 2.1.2. repeat 2.1 until the proportion of facies f is slightly above the calculated proportion (since one object can erode the previous we use slightly bigger proportion in the algorithm).\n",
" 2.3. Generate lens of thickness t or max_peat_thickness at a randomly sampled location and with reasonably chosen dimension of the facies peat\n",
" 2.4. repeat 2.3 until proportion of peat is the same as the calculated proportion.\n",
" 2.5. Generate a channel starting on the left of the grid and on a randomly sampled high y value with thickness t or max_channel_thickness and width $\\approx$ 4 m.\n",
" 2.6. Generate a channel startubg on the left of the grid (x=0) and on a randomly sampled low y values with thickness t or max_channel_thickness and width $\\approx$ 4 m.\n",
"3. Add the base level by thickness t and repeat 2. unil the end of the sequence (7m high).\n"
]
},
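{
"cell_type": "markdown",
"metadata": {},
"source": [
"The cell below sketches step 2.1 as a stand-alone helper. It is a minimal sketch of the logic that the full generation loop further down inlines once per facies; the function name and its `code`/`target` parameters are illustrative, and the domain dimensions (2000 m x 600 m) are assumed from the grid above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def fill_facies_proportion(facies, X, Y, Z, z_0, thick, code, target):\n",
"    \"\"\"Place elongated trough lenses of facies `code` in the layer\n",
"    [z_0, z_0 + thick] until its proportion reaches `target`.\"\"\"\n",
"    in_layer = (Z >= z_0) & (Z <= z_0 + thick)\n",
"    proportion = np.sum(facies[in_layer] == code) / np.sum(in_layer)\n",
"    while proportion < target:\n",
"        center = np.array(\n",
"            [\n",
"                np.random.uniform(0, 2000),  # x center\n",
"                np.random.uniform(0, 600),  # y center\n",
"                z_0 + thick + np.random.uniform(-0.2, 0),  # z near layer top\n",
"            ]\n",
"        )\n",
"        dims = np.array(\n",
"            [\n",
"                np.random.uniform(200, 400),  # a: length (along flow)\n",
"                np.random.uniform(100, 200),  # b: width\n",
"                thick,  # c: lens thickness\n",
"            ]\n",
"        )\n",
"        new_facies, _, _ = trough(\n",
"            X,\n",
"            Y,\n",
"            Z,\n",
"            center_coords=center,\n",
"            dims=dims,\n",
"            azim=np.random.uniform(70, 110),\n",
"            facies=np.array([code]),\n",
"        )\n",
"        # newer lenses erode (overwrite) whatever was there before\n",
"        facies[new_facies != -1] = new_facies[new_facies != -1]\n",
"        proportion = np.sum(facies[in_layer] == code) / np.sum(in_layer)\n",
"    return facies"
]
},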
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Defining the sequence of thicknesses\n",
"\n",
"We will assume an average thickness of 0.7 m. The first layer in the system is modelled deterministically. It is $\\approx$ 0.4 m thickness and composed with light color tufa with fossil and low organic matter content. The remaining layers are modelled probabilistically.\n",
" which means that in the sequence of 7 m, we randomly sample 10 thicknesses:\n",
"\n",
"Below we have calculated a distribution function that randomly sampled thicknesses with the characteristics above:"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"With the number of layers: 9\n",
"The median thickness is:0.5477733148491694\n",
"The mean thickness is:0.7333333333333332\n"
]
}
],
"source": [
"# according to answer in : https://math.stackexchange.com/questions/291174/probability-distribution-of-the-subinterval-lengths-from-a-random-interval-divis\n",
"# The cumulative distribution function for intervals randomly sampled from the interval [0,a] is:\n",
"simulated_thickness = 7 - 0.4\n",
"n = 9\n",
"print(f\"With the number of layers: {n}\")\n",
"F = lambda t: 1 - ((simulated_thickness - t) / simulated_thickness) ** (n - 1)\n",
"# The probability density function is then:\n",
"f = (\n",
" lambda t: (n - 1)\n",
" / simulated_thickness\n",
" * ((simulated_thickness - t) / simulated_thickness) ** (n - 2)\n",
")\n",
"mean_thickness = scipy.integrate.quad(lambda t: t * f(t), 0, simulated_thickness)[0]\n",
"median_thickness = scipy.optimize.fsolve(lambda t: F(t) - 0.5, mean_thickness)[0]\n",
"# The median thickness would be:\n",
"print(f\"The median thickness is:{median_thickness}\")\n",
"# The mean thickness would be:\n",
"print(f\"The mean thickness is:{mean_thickness}\")\n"
]
},
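{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check (a sketch, not part of the original workflow), we can verify the analytical mean and median empirically by sampling many random partitions of the 6.6 m interval into 9 pieces:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Monte Carlo check of the interval-division distribution used above;\n",
"# reuses `simulated_thickness` and `n` from the previous cell.\n",
"rng = np.random.default_rng(42)\n",
"cuts = rng.uniform(0, simulated_thickness, size=(100_000, n - 1))\n",
"cuts.sort(axis=1)\n",
"edges = np.concatenate(\n",
"    [\n",
"        np.zeros((cuts.shape[0], 1)),\n",
"        cuts,\n",
"        np.full((cuts.shape[0], 1), simulated_thickness),\n",
"    ],\n",
"    axis=1,\n",
")\n",
"pieces = np.diff(edges, axis=1)  # all piece lengths, shape (100000, 9)\n",
"print(f\"Empirical mean: {pieces.mean():.4f} (analytical: {mean_thickness:.4f})\")\n",
"print(f\"Empirical median: {np.median(pieces):.4f} (analytical: {median_thickness:.4f})\")"
]
},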
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The code above has calculated that a 9 layer model would on average produce a thickness agreeable with measured average of 0.7 meters.\n",
"To generate the model thicknesses we can just generate random samples on the interval from 0 to the total modelled thickness. Unfortunately, due to cell size restrictions, we cannot model layers that are too small (< 0.3 m), therefore we iterate unil we have a thickness model with layers that are bigger than 0.3:"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [],
"source": [
"min_thick = 0\n",
"while min_thick < 0.3:\n",
" zs = np.random.uniform(0, simulated_thickness, size=n - 1)\n",
" ordered_zs = np.sort(zs)\n",
" ordered_zs = np.append(ordered_zs, simulated_thickness)\n",
" ordered_zs = np.append(0, ordered_zs)\n",
" thicknesses = np.diff(ordered_zs)\n",
" min_thick = np.min(thicknesses)\n"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The thickness array is:[0.4732641 0.73957832 0.91983533 0.94173358 0.56025054 0.46504586\n",
" 0.33035965 1.12023074 1.04970188])\n",
"The minimum thickness is:0.33035965221052077\n",
"The total thickness is:6.6\n",
"The mean thickness is:0.7333333333333333\n",
"THe number of layers is:9\n"
]
}
],
"source": [
"print(f\"The thickness array is:{thicknesses})\")\n",
"print(f\"The minimum thickness is:{min_thick}\")\n",
"print(f\"The total thickness is:{np.sum(thicknesses)}\")\n",
"print(f\"The mean thickness is:{np.mean(thicknesses)}\")\n",
"print(f\"THe number of layers is:{len(thicknesses)}\")\n"
]
},
{
"cell_type": "code",
"execution_count": 38,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"array([0])"
]
},
"execution_count": 38,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"testf = np.empty((nrow, ncol), dtype=np.int32)\n",
"np.unique(testf)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Creating and running the sedimentary structure algorithm"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"facies_tufa = np.array([2, 3, 4, 5], dtype=np.int32)\n",
"facies = np.empty_like(Z, dtype=np.int32)\n",
"z_0 = 0.0\n",
"for thick in thicknesses:\n",
" ## initial tufa sheets: they are modelled as very elongated ellipsoids representing discontinuous layers\n",
" p_t23 = 0\n",
" while p_t23 < 0.3:\n",
" x_c = np.random.uniform(0, 2000)\n",
" y_c = np.random.uniform(0, 600)\n",
" z_c = z_0 + thick + np.random.uniform(-0.2, 0)\n",
" a = np.random.uniform(200, 400)\n",
" b = np.random.uniform(100, 200)\n",
" c = thick\n",
" azim = np.random.uniform(70, 110)\n",
" facies_trough, dip_dir_trough, dip_trough = trough(\n",
" X,\n",
" Y,\n",
" Z,\n",
" center_coords=np.array([x_c, y_c, z_c]),\n",
" dims=np.array([a, b, c]),\n",
" azim=azim,\n",
" facies=np.array([2]),\n",
" )\n",
" facies[facies_trough != -1] = facies_trough[facies_trough != -1]\n",
" # k[facies_trough != -1] = np.random.lognormal(mu_tufa, sigma=sigma_tufa)\n",
" logic_tufa = (Z >= z_0) & (Z <= z_0 + thick)\n",
" p_t23 = np.sum(facies[logic_tufa] == 2) / np.sum(logic_tufa)\n",
"\n",
" p_t5 = 0\n",
" while p_t5 < 0.3:\n",
" x_c = np.random.uniform(0, 2000)\n",
" y_c = np.random.uniform(0, 600)\n",
" z_c = z_0 + thick + np.random.uniform(-0.2, 0)\n",
" a = np.random.uniform(200, 400)\n",
" b = np.random.uniform(100, 200)\n",
" c = thick # thickness until the original base (more or less)\n",
" azim = np.random.uniform(70, 110)\n",
" facies_trough, dip_dir_trough, dip_trough = trough(\n",
" X,\n",
" Y,\n",
" Z,\n",
" center_coords=np.array([x_c, y_c, z_c]),\n",
" dims=np.array([a, b, c]),\n",
" azim=azim,\n",
" facies=np.array([5]),\n",
" )\n",
" facies[facies_trough != -1] = facies_trough[facies_trough != -1]\n",
" # k[facies_trough != -1] = np.random.lognormal(mu_moss, sigma=sigma_moss)\n",
" logic_tufa = (Z >= z_0) & (Z <= z_0 + thick)\n",
" p_t5 = np.sum(facies[logic_tufa] == 5) / np.sum(logic_tufa)\n",
" p_t6 = 0\n",
" while p_t6 < 0.2:\n",
" x_c = np.random.uniform(0, 2000)\n",
" y_c = np.random.uniform(0, 600)\n",
" z_c = z_0 + thick + np.random.uniform(-0.2, 0)\n",
" a = np.random.uniform(200, 400)\n",
" b = np.random.uniform(100, 200)\n",
" c = thick # thickness until the original base (more or less)\n",
" azim = np.random.uniform(70, 110)\n",
" facies_trough, dip_dir_trough, dip_trough = trough(\n",
" X,\n",
" Y,\n",
" Z,\n",
" center_coords=np.array([x_c, y_c, z_c]),\n",
" dims=np.array([a, b, c]),\n",
" azim=azim,\n",
" facies=np.array([6]),\n",
" )\n",
" facies[facies_trough != -1] = facies_trough[facies_trough != -1]\n",
" # k[facies_trough != -1] = np.random.lognormal(mu_moss, sigma=sigma_moss)\n",
" logic_tufa = (Z >= z_0) & (Z <= z_0 + thick)\n",
" p_t6 = np.sum(facies[logic_tufa] == 6) / np.sum(logic_tufa)\n",
" p_t7 = 0\n",
" while p_t7 < 0.3:\n",
" x_c = np.random.uniform(0, 2000)\n",
" y_c = np.random.uniform(0, 600)\n",
" z_c = z_0 + thick + np.random.uniform(-0.2, 0)\n",
" a = np.random.uniform(200, 400)\n",
" b = np.random.uniform(100, 200)\n",
" c = thick # thickness until the original base (more or less)\n",
" azim = np.random.uniform(70, 110)\n",
" facies_trough, dip_dir_trough, dip_trough = trough(\n",
" X,\n",
" Y,\n",
" Z,\n",
" center_coords=np.array([x_c, y_c, z_c]),\n",
" dims=np.array([a, b, c]),\n",
" azim=azim,\n",
" facies=np.array([7]),\n",
" )\n",
" facies[facies_trough != -1] = facies_trough[facies_trough != -1]\n",
" # k[facies_trough != -1] = np.random.lognormal(mu_moss, sigma=sigma_moss)\n",
" logic_tufa = (Z >= z_0) & (Z <= z_0 + thick)\n",
" p_t7 = np.sum(facies[logic_tufa] == 7) / np.sum(logic_tufa)\n",
"\n",
" # facies 4 is the background facies, we make sure it has a minimum presence with more sheets if necessary:\n",
" p_t4 = np.sum(facies[logic_tufa] == 4) / np.sum(logic_tufa) + np.sum(\n",
" facies[logic_tufa] == 0\n",
" ) / np.sum(logic_tufa)\n",
" while p_t4 < 0.4:\n",
" x_c = np.random.uniform(0, 2000)\n",
" y_c = np.random.uniform(0, 600)\n",
" z_c = z_0 + thick + np.random.uniform(-0.2, 0)\n",
" a = np.random.uniform(200, 400)\n",
" b = np.random.uniform(100, 200)\n",
" c = thick\n",
" azim = np.random.uniform(70, 110)\n",
" facies_trough, dip_dir_trough, dip_trough = trough(\n",
" X,\n",
" Y,\n",
" Z,\n",
" center_coords=np.array([x_c, y_c, z_c]),\n",
" dims=np.array([a, b, c]),\n",
" azim=azim,\n",
" facies=np.array([4]),\n",
" )\n",
" facies[facies_trough != -1] = facies_trough[facies_trough != -1]\n",
"\n",
" logic_tufa = (Z >= z_0) & (Z <= z_0 + thick)\n",
" p_t4 = np.sum(facies[logic_tufa] == 4) / np.sum(logic_tufa) + np.sum(\n",
" facies[logic_tufa] == 0\n",
" ) / np.sum(logic_tufa)\n",
"\n",
" # peat lenses:\n",
" peat = 0.0\n",
" # peat lenses\n",
" while peat < 0.15:\n",
" x_c = np.random.uniform(0, 2000)\n",
" y_c = np.random.uniform(0, 600)\n",
" z_c = z_0 + thick + np.random.uniform(-0.2, 0)\n",
" a = np.random.uniform(100, 200)\n",
" b = np.random.uniform(70, 100)\n",
" azim = np.random.uniform(60, 120)\n",
" if thick > 0.7:\n",
" peat_depth = 0.7\n",
" else:\n",
" peat_depth = thick\n",
" c = peat_depth\n",
"\n",
" facies_trough, dip_dir_trough, dip_trough = trough(\n",
" X,\n",
" Y,\n",
" Z,\n",
" center_coords=np.array([x_c, y_c, z_c]),\n",
" dims=np.array([a, b, c]),\n",
" azim=azim,\n",
" facies=np.array([8]),\n",
" )\n",
" facies[facies_trough != -1] = facies_trough[facies_trough != -1]\n",
"\n",
" logic_peat = (Z >= z_0) & (Z <= z_0 + thick)\n",
" peat = np.sum(facies[logic_peat] == 8) / np.sum(logic_peat)\n",
" print(peat)\n",
" # channels\n",
" channel_curve_1 = ferguson_curve(\n",
" h=0.1,\n",
" k=np.pi / 200,\n",
" eps_factor=(np.pi / 1.5) ** 2,\n",
" flow_angle=0.0,\n",
" s_max=4000,\n",
" xstart=0,\n",
" ystart=25,\n",
" )\n",
" y_shift_1 = np.random.uniform(400, 500)\n",
" channel_1 = np.c_[channel_curve_1[0], channel_curve_1[1] + y_shift_1]\n",
" if thick > 0.6:\n",
" depth = 0.6\n",
" else:\n",
" depth = thick\n",
" channel_f, channel_dip_dir, channel_dip = channel(\n",
" X,\n",
" Y,\n",
" Z,\n",
" z_top=z_0 + thick,\n",
" curve=channel_1,\n",
" parabola_pars=np.array([4, depth]),\n",
" facies=np.array([11]),\n",
" )\n",
" facies[channel_f != -1] = channel_f[channel_f != -1]\n",
"\n",
" channel_curve_2 = ferguson_curve(\n",
" h=0.1,\n",
" k=np.pi / 200,\n",
" eps_factor=(np.pi / 1.5) ** 2,\n",
" flow_angle=0.0,\n",
" s_max=4000,\n",
" xstart=0,\n",
" ystart=25,\n",
" )\n",
" y_shift_2 = np.random.uniform(40, 150)\n",
" channel_2 = np.c_[channel_curve_2[0], channel_curve_2[1] + y_shift_2]\n",
" channel_f, channel_dip_dir, channel_dip = channel(\n",
" X,\n",
" Y,\n",
" Z,\n",
" z_top=z_0 + thick,\n",
" curve=channel_2,\n",
" parabola_pars=np.array([4, depth]),\n",
" facies=np.array([11]),\n",
" )\n",
" facies[channel_f != -1] = channel_f[channel_f != -1]\n",
"\n",
" # resetting z_0:\n",
" z_0 += thick\n"
]
}
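,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Exporting the result\n",
"\n",
"As mentioned in the introduction, one reason for using flopy is its VTK export. The cell below is a minimal sketch of writing the simulated facies field to a VTK file for 3D rendering (e.g. in ParaView); the output file name is illustrative."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: export the facies array on the model grid to VTK.\n",
"from flopy.export.vtk import Vtk\n",
"\n",
"vtk = Vtk(model=gwf, binary=False)\n",
"vtk.add_array(facies, \"facies\")\n",
"vtk.write(\"ammer_facies\")"
]
}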
],
"metadata": {
"kernelspec": {
"display_name": "flopy-env-win",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.0"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
98 changes: 57 additions & 41 deletions src/hyvr/objects/channel.py
@@ -1,13 +1,23 @@
import numba
import numpy as np

from hyvr.utils import get_alternating_facies, min_distance


@numba.jit(nopython=True, parallel=True)
def channel(x,y,z,
z_top, curve, parabola_pars,
facies,
internal_layering=False, alternating_facies=False,
dip=0., layer_dist=0.,):
def channel(
x,
y,
z,
z_top,
curve,
parabola_pars,
facies,
internal_layering=False,
alternating_facies=False,
dip=0.0,
layer_dist=0.0,
):
"""
Assigns a channel to the grid points x,y,z.
The channel is defined by a curve, which represents the
@@ -24,90 +34,96 @@ def channel(x,y,z,
dip: dip of the internal dipping layers. Leave the default value for massive structure.
layer_dist: perpendicular to dip distance between layers
facies: np.array(int32) with the facies code(s) (a single value if there is no layering, more if there is)
Returns:
---
facies: ndarray(int32) of the facies values at the coordinates (x,y,z)
dip: ndarray(float32) of the dip (positive value) of the internal structure at (x,y,z)
dip_direction: ndarray(float32) of the dip-direction of the internal structure
"""
#parabola_args:
# parabola_args:
width, depth = parabola_pars

# if the point is above the channel top, don't consider it
dz = z - z_top
dz = dz.ravel()

# first logic: testing if points are in the z range:
logic_z = (dz <= 0) & (dz >= -depth)

# Filter points to the vicinity of the curve:
xmin, ymin = np.min(curve[:,0])-width, np.min(curve[:,1])-width,
xmax, ymax = np.max(curve[:,0])+width, np.max(curve[:,1])+width
xmin, ymin = (
np.min(curve[:, 0]) - width,
np.min(curve[:, 1]) - width,
)
xmax, ymax = np.max(curve[:, 0]) + width, np.max(curve[:, 1]) + width
logic_x = (x >= xmin) & (x <= xmax)
logic_x = logic_x.ravel()
logic_y = (y >= ymin) & (y <= ymax)
logic_y = logic_y.ravel()
#Do heavy calculations only in points that make sense:

# Do heavy calculations only in points that make sense:
filter_zone = logic_z & logic_x & logic_y
filter_zone = filter_zone.ravel()

# Distance in the xy-plane
P = np.column_stack((x.ravel()[filter_zone], y.ravel()[filter_zone]))

xy_dist = np.ones(x.size)*1e10
xy_dist = np.ones(x.size) * 1e10
idx_curve = np.zeros(x.size, dtype=np.int32)

# Not a perfect method, to be improved:
x_curve = curve[:,0]
y_curve = curve[:,1]
x_curve = curve[:, 0]
y_curve = curve[:, 1]
xy_dist[filter_zone], idx_curve[filter_zone] = min_distance(x_curve, y_curve, P)

logic_xy = (xy_dist**2 <= (width**2/4 + width**2*dz/(4*depth))) # From the HyVR documentation.
logic_xy = xy_dist**2 <= (
width**2 / 4 + width**2 * dz / (4 * depth)
) # From the HyVR documentation.

logic_inside = logic_z & logic_xy
facies_output = np.ones_like(logic_inside, dtype=np.int32)*(-1)
facies_output = np.ones_like(logic_inside, dtype=np.int32) * (-1)
if internal_layering:
#distance between neighbouring points:
dif = np.diff(np.ascontiguousarray(curve[:,0:2]))**2
# distance between neighbouring points:
dif = np.diff(np.ascontiguousarray(curve[:, 0:2])) ** 2
srqt_dif = np.empty_like(dif)
for i in numba.prange(dif.shape[0]):
srqt_dif[i] = np.sqrt(np.sum(dif[i]))
srqt_dif = np.ravel(srqt_dif)
dist_curve = np.concatenate((np.zeros(1),srqt_dif))
#gradient values:
vx = curve[:,2]
vy = curve[:,3]
dist_curve = np.concatenate((np.zeros(1), srqt_dif))
# gradient values:
vx = curve[:, 2]
vy = curve[:, 3]
# azimuth from inverse distance weighted velocity
azim = np.where(logic_inside, np.arctan2(vy, vx)/np.pi*180, -1)
azim = np.where(logic_inside, np.arctan2(vy, vx) / np.pi * 180, -1)
dip = np.radians(dip)
#create facies array:
# create facies array:
curve_length = np.sum(dist_curve)
n_layers = int(np.ceil(curve_length*np.sin(dip)/layer_dist)
+ np.ceil(depth/layer_dist))

n_layers = int(
np.ceil(curve_length * np.sin(dip) / layer_dist)
+ np.ceil(depth / layer_dist)
)

n_layers += 10 # just to be sure (why?)
facies_array = get_alternating_facies(facies, n_layers, alternating_facies)
# To correct for the distance in z-direction we subtract |dz| * cos(dip)
# note that dz is negative here
d = dist_curve * np.sin(dip) + dz*np.cos(dip)
ns = np.rint(d/layer_dist).astype(np.int32)
d = dist_curve * np.sin(dip) + dz * np.cos(dip)
ns = np.rint(d / layer_dist).astype(np.int32)
d_grid = np.empty_like(idx_curve)
ns_grid = np.empty_like(idx_curve)
for i in numba.prange(idx_curve.shape[0]):
d_grid[i] = d[idx_curve[i]]
ns_grid[i] = ns[idx_curve[i]]
# print(np.max(ns))
# print(self.object_facies_array.shape)

facies = np.array([facies_array[n] for n in ns_grid])
facies_output[logic_inside] = facies
else:
facies_output[logic_inside] = facies[0]
dip_output = np.where(logic_inside, 0., np.nan)
dip_direction = np.where(logic_inside, 0., np.nan)
facies_output = np.reshape(facies_output,x.shape)

dip_output = np.where(logic_inside, 0.0, np.nan)
dip_direction = np.where(logic_inside, 0.0, np.nan)
facies_output = np.reshape(facies_output, x.shape)
dip_output = np.reshape(dip_output, x.shape)
dip_direction = np.reshape(dip_direction, x.shape)

return facies_output, dip_output, dip_direction
74 changes: 46 additions & 28 deletions src/hyvr/objects/sheet.py
@@ -1,19 +1,31 @@
import numpy as np
import numpy.typing as npt
import numba
from hyvr.utils import normal_plane_from_dip_dip_dir
from hyvr.utils import coterminal_angle
from hyvr.utils import get_alternating_facies
import numpy as np

from hyvr.utils import (
coterminal_angle,
get_alternating_facies,
normal_plane_from_dip_dip_dir,
)


@numba.jit(nopython=True, parallel=True)
def sheet(x,y,z,
xmin,xmax,
ymin,ymax,
bottom_surface,top_surface,
facies,
internal_layering=False, alternating_facies=False,
dip=0.,dip_dir=0.,layer_dist=0.
):
def sheet(
x,
y,
z,
xmin,
xmax,
ymin,
ymax,
bottom_surface,
top_surface,
facies,
internal_layering=False,
alternating_facies=False,
dip=0.0,
dip_dir=0.0,
layer_dist=0.0,
):
"""
Assigns a sheet to the grid points x,y,z.
The sheet is a layer defined by bounding x and y coordinates and top and bottom contacts.
@@ -30,40 +42,46 @@ def sheet(x,y,z,
dip: dip of the internal dipping layers. Leave the default value for massive structure.
layer_dist: perpendicular to dip distance between layers
facies: np.array(int32) with the facies code(s) (a single value if there is no layering, more if there is)
Returns:
---
facies: ndarray(int32) of the facies values at the coordinates (x,y,z)
dip: ndarray(float32) of the dip (positive value) of the internal structure at (x,y,z)
dip_direction: ndarray(float32) of the dip-direction of the internal structure
"""
true_array_x = (x >= xmin) & (x <= xmax)
print(true_array_x.shape)
true_array_y = (y >= ymin) & (y <= ymax)
# if len(bottom_surface.shape) != len(z.shape):
# bottom_surface = np.broadcast_to(bottom_surface, z.shape)
# if len(top_surface.shape) != len(z.shape):
# top_surface = np.broadcast_to(top_surface, z.shape)

true_array_z = (z >= np.broadcast_to(bottom_surface,z.shape)) & (z<= np.broadcast_to(top_surface,z.shape))
print(true_array_z.shape)
true_array_z = (z >= np.broadcast_to(bottom_surface, z.shape)) & (
z <= np.broadcast_to(top_surface, z.shape)
)
true_array = true_array_z & true_array_y & true_array_x
true_array = np.ravel(true_array)
print(true_array.shape)
facies_output = np.ones(x.size, dtype = np.int32)*(-1)
facies_output = np.ones(x.size, dtype=np.int32) * (-1)
if internal_layering:
normal_vector = normal_plane_from_dip_dip_dir(dip, dip_dir)
xcenter = xmin + (xmax -xmin)/2
ycenter = ymin + (ymax - ymin)/2
xcenter = xmin + (xmax - xmin) / 2
ycenter = ymin + (ymax - ymin) / 2
zmax = np.max(top_surface)
shift = normal_vector[0] * xcenter + normal_vector[1] * ycenter + normal_vector[2] * zmax
d = normal_vector[0] * x.ravel()[true_array] + normal_vector[1] * y.ravel()[true_array] + normal_vector[2] * z.ravel()[true_array] - shift
print(d.shape)
class_distances = (np.floor(d/layer_dist)).astype(np.int16)
shift = (
normal_vector[0] * xcenter
+ normal_vector[1] * ycenter
+ normal_vector[2] * zmax
)
d = (
normal_vector[0] * x.ravel()[true_array]
+ normal_vector[1] * y.ravel()[true_array]
+ normal_vector[2] * z.ravel()[true_array]
- shift
)
class_distances = (np.floor(d / layer_dist)).astype(np.int16)
min_value = np.min(class_distances)
facies_indices = class_distances - min_value
n_layers = int(np.max(facies_indices)+1)
print(n_layers)
n_layers = int(np.max(facies_indices) + 1)
facies_array = get_alternating_facies(facies, n_layers, alternating_facies)
facies_ = np.array([facies_array[n] for n in facies_indices])
facies_output[true_array] = facies_
@@ -76,4 +94,4 @@ def sheet(x,y,z,
facies_output = np.reshape(facies_output, x.shape)
dip = np.reshape(dip, x.shape)
dip_direction = np.reshape(dip_direction, x.shape)
return facies_output, dip, dip_direction
9 changes: 4 additions & 5 deletions src/hyvr/objects/trough.py
@@ -50,16 +50,15 @@ def trough(
& (z >= zmin)
& (z <= zmax)
)
print(np.sum(gross_limit_logic))

# copy for later
original_shape = x.shape
# modifying the reference so we calculate on less points:
gross_limit_logic = np.ravel(gross_limit_logic)
x = x.ravel()[gross_limit_logic]
y = y.ravel()[gross_limit_logic]
z = z.ravel()[gross_limit_logic]
print(x.shape)
print(original_shape)

# gross_limit_logic = np.ravel(gross_limit_logic)
# # To decide whether the point is inside, we can just use the normalized
# # distance. Therefore, we first calculate the normalized distance vector
@@ -72,8 +71,8 @@ def trough(
logic = logic & (np.ravel(dz) <= 0)

# print(np.sum(logic))
# if np.sum(logic) == 0:# return empty dataset:
# #(np.sum(logic))
if np.sum(logic) == 0: # return empty dataset:
print("No points inside the ellipsoid")
# return np.ones(original_shape,dtype=np.int64)*-1, np.empty(original_shape,dtype=np.float64), np.empty(original_shape,dtype=np.float64)
# At this point we know that the point is in the domain, so we have to
# assign values
