Commit c47c52d

improving code clarity and cleaning up the code + improvements in CI

vcantarella committed Sep 17, 2024
1 parent 839a02c
Showing 20 changed files with 355 additions and 468 deletions.
16 changes: 7 additions & 9 deletions .github/workflows/python-package-conda.yml
@@ -24,15 +24,13 @@ jobs:
- name: Lint with ruff
run: |
pip install ruff
ruff check # Lint all files in the current directory (and any subdirectories).
ruff format # Format all files in the current directory (and any subdirectories).
ruff check # Lint all files in the current directory (and any subdirectories).
- name: Test with pytest
run: |
conda install pytest
pip install pytest-html
pytest
- name: Coverage tests
run: |
pip install coverage
NUMBA_DISABLE_JIT=1 coverage run -m pytest
coverage report -m
pip install pytest pytest-cov
NUMBA_DISABLE_JIT=1 pytest --cov
- name: Upload results to Codecov
uses: codecov/codecov-action@v4
with:
token: ${{ secrets.CODECOV_TOKEN }}
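The `NUMBA_DISABLE_JIT=1` prefix on the coverage run above is worth a note: coverage.py traces Python bytecode, so once numba compiles a function to machine code its body becomes invisible to the tracer and reports zero coverage. Disabling the JIT makes the pure-Python fallback run under instrumentation. A minimal sketch with an illustrative function (not from this repo):

```python
# Illustrative only: why the CI sets NUMBA_DISABLE_JIT=1 for the coverage run.
import numba
import numpy as np

@numba.njit
def running_sum(values):
    # With NUMBA_DISABLE_JIT=1 in the environment, this loop executes as
    # ordinary Python and every line shows up in the coverage report.
    total = 0.0
    for v in values:
        total += v
    return total

print(running_sum(np.arange(5.0)))  # 10.0 either way; only tracing differs
```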
3 changes: 2 additions & 1 deletion LICENCE
@@ -1,6 +1,7 @@
MIT License

Copyright (c) 2017
Copyright (c) 2023 Vitor Cantarella
Copyright (c) for portions of the HyVR module is held by Jeremy Bennett (2017) and Samuel Schorr (2018), as part of the original hyvr repository

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
17 changes: 4 additions & 13 deletions README.md
@@ -1,18 +1,9 @@
# Status
# HyVR - simple

# Introduction

**HyVR**

This is a fork from the original HyVR package with a minimal implementation of
the core features; the idea is to keep the original ideas alive,
but maintainable in the fast-moving Python development ecosystem.
Thus, most of the non-essential functionality, which is nowadays done much better by other packages,
has been dropped. The functionality now focuses on the creation of geobodies on grids.

Everything that could be done before is still doable, and more. But the user should understand a bit more about
operating on arrays and using Python.
![Build Status](https://github.com/vcantarella/hyvr/actions/workflows/python-package-conda.yml/badge.svg?event=push)
[![codecov](https://codecov.io/github/vcantarella/hyvr/graph/badge.svg?token=QWGCQVEJ3G)](https://codecov.io/github/vcantarella/hyvr)

This is a fork of the original HyVR package with a minimal implementation of the core features; the idea is to keep the original ideas alive, but maintainable in the fast-moving Python development ecosystem.

The Hydrogeological Virtual Reality simulation package (HyVR) is a Python module
that helps researchers and practitioners generate sedimentary subsurface models with
2 changes: 2 additions & 0 deletions examples/.gitingnore
@@ -0,0 +1,2 @@
ammer/*
channels/*
18 changes: 15 additions & 3 deletions examples/ammer/ammer_notebook.ipynb
@@ -22,10 +22,22 @@
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"outputs": [
{
"ename": "ImportError",
"evalue": "cannot import name 'trough' from 'hyvr' (c:\\Users\\vcant\\miniconda3\\envs\\hyvr\\Lib\\site-packages\\hyvr\\__init__.py)",
"output_type": "error",
"traceback": [
"\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[1;31mImportError\u001b[0m Traceback (most recent call last)",
"Cell \u001b[1;32mIn[1], line 4\u001b[0m\n\u001b[0;32m 2\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mhyvr\u001b[39;00m\u001b[38;5;21;01m.\u001b[39;00m\u001b[38;5;21;01mtools\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m specsim_surface \u001b[38;5;66;03m# used for the surface creation\u001b[39;00m\n\u001b[0;32m 3\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mhyvr\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m channel \u001b[38;5;66;03m# channel object creation\u001b[39;00m\n\u001b[1;32m----> 4\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mhyvr\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m trough \u001b[38;5;66;03m# trough creation\u001b[39;00m\n\u001b[0;32m 5\u001b[0m \u001b[38;5;28;01mimport\u001b[39;00m \u001b[38;5;21;01mscipy\u001b[39;00m \u001b[38;5;66;03m# general scientific calculations\u001b[39;00m\n\u001b[0;32m 6\u001b[0m \u001b[38;5;28;01mimport\u001b[39;00m \u001b[38;5;21;01mflopy\u001b[39;00m \u001b[38;5;66;03m# our modelling interface\u001b[39;00m\n",
"\u001b[1;31mImportError\u001b[0m: cannot import name 'trough' from 'hyvr' (c:\\Users\\vcant\\miniconda3\\envs\\hyvr\\Lib\\site-packages\\hyvr\\__init__.py)"
]
}
],
"source": [
"from hyvr.tools import ferguson_curve # used for the channel curve creation\n",
"from hyvr.tools import surface_gauss_regression # used for the surface creation\n",
"from hyvr.tools import specsim_surface # used for the surface creation\n",
"from hyvr import channel # channel object creation\n",
"from hyvr import trough # trough creation\n",
"import scipy # general scientific calculations\n",
@@ -711,7 +723,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.0"
"version": "3.11.7"
}
},
"nbformat": 4,
2 changes: 1 addition & 1 deletion examples/ammer/ammer_notebook_hyvrmodel.ipynb
@@ -1744,7 +1744,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.0"
"version": "3.11.7"
}
},
"nbformat": 4,
40 changes: 30 additions & 10 deletions examples/ammer/ammer_v2606.py
@@ -16,6 +16,7 @@
import flopy # our modelling interface
import numpy as np # general numerical functions and array manipulation
from hyvr.utils import specsim
from hyvr.tools import specsim_surface
import numba

# %% [markdown]
@@ -63,7 +64,7 @@
ncol=ncol,
delr=dely,
delc=delx,
top=7.0,
top=H,
botm=np.arange(H - delz, 0 - delz, -delz),
)

@@ -115,21 +116,40 @@
#
# %%
# Define the top and bottom of the model as the top and bottom of the sequence. Assign unassigned cells to facies 4 (the background facies).

np.random.seed(37893)
mean_top = H - 1.93
var_top = 6.59e-2
corl_top = np.array([2.0e2, 6e2])
mean_top = H - 1.86
var_top = 0.7
corl_top = np.array([70, 792])
surf_top = specsim(X[0, :, :], Y[0, :, :], mean=mean_top, var=var_top, corl=corl_top)

mean_botm = H - 5.82
var_botm = 0.67
corl_botm = np.array([200, 600])
mean_botm = H - 8.3
var_botm = 0.9
corl_botm = np.array([300, 900])
surf_botm = specsim(
X[0, :, :], Y[0, :, :], mean=mean_botm, var=var_botm, corl=corl_botm
X[0, :, :], Y[0, :, :], mean=mean_botm, var=var_botm, corl=corl_top
)


import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(12, 6))
ax.plot(X[0, int(nrow / 2), :], surf_top[int(nrow / 2), :], label="Top")
ax.plot(X[0, int(nrow / 2), :], surf_botm[int(nrow / 2), :], label="Botm")
ax.legend()
plt.show()
# %% [markdown]
surf_top2 = specsim_surface(X[0, :, :], Y[0, :, :], mean=mean_top, var=var_top, corl=corl_top)
surf_botm2 = specsim_surface(X[0, :, :], Y[0, :, :], mean=mean_botm, var=var_botm, corl=corl_top)
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(12, 6))
ax.plot(X[0, int(nrow / 2), :], surf_top[int(nrow / 2), :], label="Top")
ax.plot(X[0, int(nrow / 2), :], surf_top2[int(nrow / 2), :], label="Top2")
ax.plot(X[0, int(nrow / 2), :], surf_botm[int(nrow / 2), :], label="Botm")
ax.plot(X[0, int(nrow / 2), :], surf_botm2[int(nrow / 2), :], label="Botm2")
ax.legend()
plt.show()
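For readers new to the method, `specsim`/`specsim_surface` generate Gaussian random surfaces by spectral (FFT) simulation. A simplified, self-contained sketch of the idea; the library's internals will differ in details such as the spectral normalization:

```python
# Sketch of FFT-based Gaussian field simulation: white noise is filtered by
# the square root of a target spectral density, then rescaled.
import numpy as np

def gaussian_field(nx, ny, dx, dy, mean, var, corl, seed=None):
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(nx, d=dx)
    ky = np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky, indexing="ij")
    # Gaussian-shaped spectral density with anisotropic correlation lengths
    spec = np.exp(-((KX * corl[0]) ** 2 + (KY * corl[1]) ** 2))
    noise = rng.normal(size=(nx, ny))
    field = np.real(np.fft.ifft2(np.sqrt(spec) * np.fft.fft2(noise)))
    field *= np.sqrt(var) / field.std()   # rescale to the target variance
    return mean + field

# Illustrative call, echoing the top-surface parameters above
surf = gaussian_field(200, 100, 2.0, 2.0, mean=6.0, var=0.7, corl=(70.0, 792.0), seed=0)
```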


# %%
# ### Defining the sequence of thicknesses
#
# We will assume an average thickness of 0.7 m. The first layer in the system is modelled deterministically. It is $\approx$ 0.4 m thick and composed of light-coloured tufa with fossils and low organic matter content. The remaining layers are modelled probabilistically.
@@ -140,7 +160,7 @@
# %%
# according to the answer at: https://math.stackexchange.com/questions/291174/probability-distribution-of-the-subinterval-lengths-from-a-random-interval-divis
# The cumulative distribution function for interval lengths randomly sampled from the interval [0, a] is:
simulated_thickness = (mean_top - 0) - 0.4
simulated_thickness = (mean_top) - 1
n = 8
print(f"With the number of layers: {n}")
F = lambda t: 1 - ((simulated_thickness - t) / simulated_thickness) ** (n - 1)
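The linked derivation gives the CDF $F(t) = 1 - ((T - t)/T)^{n-1}$ for one sub-interval length when $[0, T]$ is cut at $n - 1$ uniform random points. Equivalently, a whole set of layer thicknesses can be drawn directly as the gaps between sorted uniform cuts; a sketch (variable names illustrative):

```python
# Draw n layer thicknesses that sum to total_thickness by cutting [0, T]
# at n-1 uniformly random points; each gap follows the CDF F above.
import numpy as np

def sample_thicknesses(total_thickness, n, seed=None):
    rng = np.random.default_rng(seed)
    cuts = np.sort(rng.uniform(0.0, total_thickness, size=n - 1))
    edges = np.concatenate(([0.0], cuts, [total_thickness]))
    return np.diff(edges)

thk = sample_thicknesses(total_thickness=6.0, n=8, seed=42)
print(thk, thk.sum())  # eight thicknesses summing to 6.0
```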
50 changes: 50 additions & 0 deletions examples/ammer/fromto_units.csv
@@ -0,0 +1,50 @@
name,from,to,units
X011,0,2.4,clay1
X011,2.4,9.2,gravel
X012,0,2,clay1
X012,2,10,tufa
X012,10,10.8,clay2
X012,10.8,14,gravel
X013,0,0.8,clay1
X013,0.8,9.6,tufa
X013,9.6,10.7,clay2
X013,10.7,12,gravel
X014,0,2.3,clay1
X014,2.3,4.5,tufa
X014,4.5,12,clay2
X014,12,14,gravel
X015,0,2.3,clay1
X015,2.3,9.95,tufa
X015,10.3,14,gravel
X017,0,2,clay1
X017,2,8.6,tufa
X017,8.6,9.5,clay2
X017,9.5,12,gravel
X018,0,10,clay1
X018,10,11.75,gravel
X018,11.75,12,marge
X019,0,1.5,clay1
X019,1.5,4.55,tufa
X019,4.55,8.1,clay2
X019,8.1,14,gravel
X020,0,2.1,clay1
X020,2.1,9,tufa
X020,9,10.85,clay2
X020,10.85,13.65,gravel
X020,13.65,14,marge
X022,0,1.7,clay1
X022,1.7,9,tufa
X022,9,10.65,clay2
X022,10.65,14,gravel
X024,0,1.9,clay1
X024,1.9,9,tufa
X024,9,10.65,clay2
X024,10.65,14,gravel
X030,0,2,clay1
X030,2,8.8,tufa
X030,8.8,10,clay2
X030,10,16.5,gravel
X037,0,2.5,clay1
X037,2.5,8.2,tufa
X037,8.2,9.4,clay2
X037,8.2,12.5,gravel
52 changes: 29 additions & 23 deletions examples/ammer/gaussian_kernel_regression.py
@@ -1,25 +1,32 @@
import george
import numpy as np
import pandas as pd
from george.kernels import ExpSquaredKernel
import scipy

loc = pd.read_csv("examples/ammer/loc.csv")
fromto = pd.read_csv("examples/ammer/fromto_units.csv")
fromto = fromto[fromto["units"] == "tufa"]
# merge loc and fromto
loc = loc.merge(fromto, on="name")

var = 2**2
corr_lengths = np.array([1000, 2000])
#top_botm = np.loadtxt("examples/ammer/top_botm.csv",delimiter=",", skiprows=1,usecols=(1,2,3,4))
top = loc["z"].to_numpy() - loc["from"].to_numpy()
bottom = loc["z"].to_numpy() - loc["to"].to_numpy()
x_y = loc[["x", "y"]].to_numpy()

kernel = var*ExpSquaredKernel(corr_lengths, ndim=2, )

var = 0.2**2
corr_lengths = np.array([70, 120])

kernel = 10*np.var(top)*ExpSquaredKernel(corr_lengths, ndim=2, )
print(kernel.get_parameter_names())
print(kernel.get_parameter_vector())
np.exp(kernel.get_parameter_vector())

top_botm = np.loadtxt("examples/ammer/top_botm.csv",delimiter=",", skiprows=1,usecols=(1,2,3,4))
top = top_botm[:,2]
bottom = top_botm[:,3]
x_y = top_botm[:,0:2]

gp_top = george.GP(kernel, mean=np.mean(top), fit_mean=False,
white_noise=np.log(0.5**2), fit_white_noise=False)
white_noise=0.2**2, fit_white_noise=False)
gp_top.compute(x_y)
print(gp_top.log_likelihood(top))
print(gp_top.grad_log_likelihood(top))
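The hunk header just below points into `def grad_nll_top(p):`, so the collapsed region presumably defines the objective/gradient pair in george's standard optimization recipe. A sketch, assuming `gp_top`, `top`, and `np` as above:

```python
# Presumed shape of the helpers hidden in the collapsed diff region.
def nll_top(p):
    gp_top.set_parameter_vector(p)
    ll = gp_top.log_likelihood(top, quiet=True)
    return -ll if np.isfinite(ll) else 1e25  # guard against bad parameters

def grad_nll_top(p):
    gp_top.set_parameter_vector(p)
    return -gp_top.grad_log_likelihood(top, quiet=True)
```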
@@ -40,27 +47,28 @@ def grad_nll_top(p):

# Run the optimization routine.
p0 = gp_top.get_parameter_vector()
results = scipy.optimize.minimize(nll_top, p0, jac=grad_nll_top, method="L-BFGS-B")

results = scipy.optimize.minimize(nll_top, p0, jac=grad_nll_top, method="L-BFGS-B", tol = 1e-5)
print(results)
# Update the kernel and print the final log-likelihood.
gp_top.set_parameter_vector(results.x)
gp_top.compute(x_y)
print(gp_top.log_likelihood(top))

gp_top.get_parameter_names()
print(gp_top.get_parameter_names())
params = gp_top.get_parameter_vector()
params = np.concatenate([np.array([params[0]]), np.exp(params[1:])])
params
print(np.exp(params))
print(np.sqrt(np.exp(params)[1:3]))



kernelb = var*ExpSquaredKernel(corr_lengths, ndim=2, )
corr_b = np.array([200, 1000])
kernelb = var*ExpSquaredKernel(corr_b, ndim=2, )

kernelb.get_parameter_names()
kernelb.get_parameter_vector()


gp_bottom = george.GP(kernel, mean=np.mean(bottom), fit_mean=False,
white_noise=np.log(0.5**2), fit_white_noise=False)
gp_bottom = george.GP(kernelb, mean=np.mean(bottom), fit_mean=False,
white_noise=1**2, fit_white_noise=False)
gp_bottom.compute(x_y)
print(gp_bottom.log_likelihood(bottom))
print(gp_bottom.grad_log_likelihood(bottom))
@@ -81,7 +89,7 @@ def grad_nll_bottom(p):

# Run the optimization routine.
p0 = gp_bottom.get_parameter_vector()
results = scipy.optimize.minimize(nll_bottom, p0, jac=grad_nll_bottom, method="L-BFGS-B", tol = 1e-6)
results = scipy.optimize.minimize(nll_bottom, p0, jac=grad_nll_bottom, method="L-BFGS-B", tol = 1e-12)

# Update the kernel and print the final log-likelihood.
gp_bottom.set_parameter_vector(results.x)
@@ -90,8 +98,6 @@ def grad_nll_bottom(p):
gp_bottom.get_parameter_names()
paramsb = gp_bottom.get_parameter_vector()
kern_pars = np.exp(paramsb)
corr_lb = np.sqrt(kern_pars)
corr_lb
paramsb = np.concatenate([np.array([paramsb[0]]), np.exp(paramsb[1:])])
paramsb

print(kern_pars)
print(np.sqrt(kern_pars[1:3]))
mean_top = np.mean(top)
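The script stops at the fitted hyperparameters; the natural next step for surface regression would be conditioning on the data and predicting on a grid. A sketch using george's `predict` (the grid extents are illustrative, roughly spanning the boreholes in loc.csv):

```python
# Predict the tufa top surface on a regular grid from the fitted GP
# (assumes gp_top, top, and x_y from the script above).
import numpy as np

gx = np.linspace(3498000.0, 3500500.0, 50)
gy = np.linspace(5375500.0, 5376200.0, 50)
GX, GY = np.meshgrid(gx, gy)
x_pred = np.column_stack([GX.ravel(), GY.ravel()])

mu, var = gp_top.predict(top, x_pred, return_var=True)
surface = mu.reshape(GX.shape)            # posterior mean surface
sigma = np.sqrt(var).reshape(GX.shape)    # pointwise predictive std
```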
13 changes: 13 additions & 0 deletions examples/ammer/loc.csv
@@ -0,0 +1,13 @@
name,x,y,z
X011,3500466.23,5375816.52,340.83
X012,3499758.76,5375662.68,341.62
X013,3499553.94,5375898.53,342.38
X014,3499121,5376120,344.22
X015,3499092.76,5375723.18,343.3
X017,3498622.48,5375995.81,344.46
X018,3498166.26,5375706.61,347.08
X019,3498078.02,5375935.71,346.44
X020,3499479.44,5375593.82,342.22
X022,3499472.76,5375563.75,342.19
X024,3499466.75,5375533.16,342.23
X030,3498948.341,5375919.338,343.776
39 changes: 0 additions & 39 deletions examples/channels/test_gauss_surface.ipynb

This file was deleted.
