Models and UI Redesign #99

Draft: wants to merge 45 commits into base: master
Commits
a37d350
refactor: update models
annehaley Dec 30, 2024
74e9eb5
fix: Update `admin.py`
annehaley Dec 30, 2024
05116b0
refactor: Update rest API
annehaley Dec 31, 2024
4c295fb
test: Update test suite
annehaley Jan 2, 2025
c2cd3e3
build: Update dockerfile to support conversion of `.nc` files
annehaley Jan 5, 2025
a126080
refactor: Update tasks
annehaley Jan 5, 2025
347056a
feat: Add query param filtering to vector tile endpoint
annehaley Jan 6, 2025
3afbfcd
fix: Update `LayerSerializer`
annehaley Jan 6, 2025
0fe6753
fix: Use `band_ref` JSON field on `LayerFrame`
annehaley Jan 6, 2025
b7ca918
feat: Add migration file
annehaley Jan 6, 2025
114f9bf
fix: Update populate process and sample data
annehaley Jan 9, 2025
7d899d7
refactor: Rename `band_ref` field to `source_filters`
annehaley Jan 15, 2025
161697a
fix: Add combine option for zipped vector datasets
annehaley Jan 15, 2025
d225d4a
fix: Adjust ingest process for New York Energy use case
annehaley Jan 15, 2025
bcaf361
fix: Update list of ignored filetypes
annehaley Jan 15, 2025
1df004d
refactor: Consolidate layer and frame creation logic to single function
annehaley Jan 15, 2025
a6a2a01
fix: Additional dataset conversion adjustments
annehaley Jan 15, 2025
0c8673e
fix: Protect admin page from null source files on data objects
annehaley Jan 15, 2025
7299af9
fix: Include access control logic for RasterData and VectorData
annehaley Jan 30, 2025
750c135
refactor: Rename `SourceRegion` -> `Region`
annehaley Jan 31, 2025
42aec47
fix: Update default layer name generation
annehaley Jan 31, 2025
11f7cde
refactor: Remove `dataset` field from `Network` and make `vector_data…
annehaley Jan 31, 2025
3c24d8c
chore: Remove print statement
annehaley Jan 31, 2025
03ba49b
fix: Update populate test
annehaley Jan 31, 2025
afe2c7d
fix: Update tests with new Network-Dataset relationship
annehaley Jan 31, 2025
9560940
fix: Update API with new Network-Dataset relationship
annehaley Feb 4, 2025
81dffa1
fix: Update vector feature filtering to allow nested properties
annehaley Feb 5, 2025
a15e8da
feat: Allow additional filters specification in layer options
annehaley Feb 5, 2025
48c0a0e
fix: Update `get_filter_string` function
annehaley Feb 5, 2025
22bc1f6
fix: Update `data.py` for lint check
annehaley Feb 5, 2025
005d8fe
feat: Add `UVDATExplorer` Jupyter widget and example usage notebook
annehaley Jan 9, 2025
3a6b4e1
fix: Add token auth to rest API for `UVDATExplorer` to work
annehaley Jan 9, 2025
424d1bd
refactor: Move `jupyter` folder to top level
annehaley Jan 15, 2025
7af52d2
fix: Update references to old `band_ref` field
annehaley Jan 15, 2025
d7ac091
refactor: Rename explorer module
annehaley Jan 15, 2025
d8804c1
chore: Add item to `.gitignore`
annehaley Jan 15, 2025
5349f2e
fix: Only display "Session Authenticated" label if auth is successful
annehaley Jan 15, 2025
d791ef6
feat: Allow user to specify center and zoom
annehaley Jan 15, 2025
189a7d9
fix: Protect against null metadata
annehaley Jan 15, 2025
ed8fb57
fix: Add requirements file
annehaley Jan 15, 2025
e71623d
feat: add full screen toggle to ipyleaflet map
annehaley Jan 23, 2025
52352cf
fix: Update source filter formatting with API changes
annehaley Feb 5, 2025
ea2a57e
fix: Remove debugging statement
annehaley Feb 5, 2025
ebe1369
Merge pull request #94 from OpenGeoscience/models-update
annehaley Feb 10, 2025
f6d42f7
Merge pull request #96 from OpenGeoscience/jupyter-explorer
annehaley Feb 12, 2025
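
Two of the commits above (005d8fe and 3a6b4e1) add the `UVDATExplorer` Jupyter widget and token auth on the REST API so the widget can authenticate. The flow, implemented in `jupyter/uvdat_explorer.py` below, is a plain token exchange against the `token/` endpoint; a minimal standalone sketch, assuming a local server and placeholder credentials:

import requests

API_URL = 'http://localhost:8000/api/v1/'

# Exchange credentials for a token, as UVDATExplorer.authenticate does.
resp = requests.post(API_URL + 'token/', data={
    'username': 'user@example.com',      # placeholder credentials
    'password': 'not-a-real-password',
})
resp.raise_for_status()
token = resp.json()['token']

# Authenticated requests carry the token in the Authorization header.
datasets = requests.get(
    API_URL + 'datasets',
    headers={'Authorization': f'Token {token}'},
).json()['results']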
1 change: 1 addition & 0 deletions .gitignore
@@ -5,6 +5,7 @@ rabbitmq/*
.vscode/
staticfiles/
sample_data/downloads/*
+jupyter/.ipynb_checkpoints

# osmnx data cache folder
cache
2 changes: 1 addition & 1 deletion dev/Dockerfile
@@ -15,7 +15,7 @@ RUN python -m pip install ./tile2net
COPY ./setup.py /opt/uvdat-server/setup.py
COPY ./manage.py /opt/uvdat-server/manage.py
COPY ./uvdat /opt/uvdat-server/uvdat
-RUN pip install large-image[gdal,pil] large-image-converter --find-links https://girder.github.io/large_image_wheels
+RUN pip install large-image[gdal,pil,mapnik] large-image-converter --find-links https://girder.github.io/large_image_wheels
RUN pip install --editable /opt/uvdat-server[dev]

# Use a directory name which will never be an import name, as isort considers this as first-party.
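
The added `mapnik` extra pulls in the prebuilt wheel that `large-image` appears to use for styled raster tiles, and an earlier commit (c2cd3e3) extends the image to support converting `.nc` files. A sketch of reading such a source once the wheels are installed, assuming a hypothetical local file and the documented `large_image.open` entry point:

import large_image

# Hypothetical path; after conversion, .nc sources should be served
# like any other GDAL-readable large-image tile source.
source = large_image.open('sample_data/downloads/example.nc')
print(source.getMetadata())  # levels, tile size, bounds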
3 changes: 3 additions & 0 deletions jupyter/requirements.txt
@@ -0,0 +1,3 @@
ipyleaflet
ipywidgets
ipytree
41 changes: 41 additions & 0 deletions jupyter/uvdat_data_exploration.ipynb
@@ -0,0 +1,41 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "d3435282-d633-4edf-8131-462a038306f1",
"metadata": {},
"outputs": [],
"source": [
"from uvdat_explorer import UVDATExplorer\n",
"\n",
"UVDATExplorer(\n",
" api_url='http://localhost:8000/api/v1',\n",
" # email='myemail',\n",
" # password='mypassword',\n",
")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.16"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
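
The notebook passes only `api_url`; per the constructor in `uvdat_explorer.py` below, `email`, `password`, `center`, and `zoom` are also accepted. A fuller call with placeholder values:

from uvdat_explorer import UVDATExplorer

UVDATExplorer(
    api_url='http://localhost:8000/api/v1',
    email='user@example.com',        # placeholder credentials
    password='not-a-real-password',
    center=[42.36, -71.06],          # module defaults shown below
    zoom=14,
)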
258 changes: 258 additions & 0 deletions jupyter/uvdat_explorer.py
@@ -0,0 +1,258 @@
from urllib.parse import urlencode

from IPython import display
from ipyleaflet import FullScreenControl, Map, TileLayer, VectorTileLayer, basemaps, projections
from ipytree import Node, Tree
import ipywidgets as widgets
import requests

DEFAULT_CENTER = [42.36, -71.06]
DEFAULT_ZOOM = 14


class LayerRepresentation:
def __init__(self, layer, api_url, session, token, center, zoom):
self.layer = layer
self.session = session
self.api_url = api_url
self.token = token
self.center = center
self.zoom = zoom

self.output = widgets.Output()
self.frame_index = 0
self.frames = self.layer.get('frames', [])
self.max_frame = max(frame.get('index') for frame in self.frames) if len(self.frames) else 0
self.play_widget = widgets.Play(
min=0,
max=self.max_frame,
interval=1500,
)
self.frame_slider = widgets.IntSlider(
description='Frame Index:',
min=0,
max=self.max_frame,
)
widgets.jslink((self.play_widget, 'value'), (self.frame_slider, 'value'))
label_text = 'Frame Name: '
if len(self.frames):
label_text += self.frames[0].get('name')
self.frame_name_label = widgets.Label(label_text)
self.frame_slider.observe(self.update_frame)
self.map = Map(
crs=projections.EPSG3857,
basemap=basemaps.OpenStreetMap.Mapnik,
center=self.center,
zoom=self.zoom,
max_zoom=20,
min_zoom=0,
scroll_wheel_zoom=True,
dragging=True,
attribution_control=False,
)
self.map.add(FullScreenControl())
self.map_layers = []
self.update_frame(dict(name='value'))

def get_frame_path_and_metadata(self, frame):
raster = frame.get('raster')
vector = frame.get('vector')
path, metadata = None, None
if raster:
raster_id = raster.get('id')
path = f'rasters/{raster_id}/'
metadata = raster.get('metadata')
elif vector:
vector_id = vector.get('id')
path = f'vectors/{vector_id}/'
metadata = vector.get('metadata')
return path, metadata

def get_flat_filters(self, filters):
flat = {}
for key, value in filters.items():
if isinstance(value, dict):
for k, v in self.get_flat_filters(value).items():
flat[f'{key}.{k}'] = v
else:
flat[key] = value
return flat

def update_frame(self, event):
with self.output:
if event.get('name') == 'value':
for map_layer in self.map_layers:
self.map.remove_layer(map_layer)
self.map_layers = []

self.frame_index = int(event.get('new', 0))
current_frames = [
frame for frame in self.frames if frame.get('index') == self.frame_index
]
for frame in current_frames:
tile_size = 256
frame_name = frame.get('name')
self.frame_name_label.value = f'Frame Name: {frame_name}'
url_path, metadata = self.get_frame_path_and_metadata(frame)
if metadata is not None:
tile_size = metadata.get('tileWidth', 256)
url_suffix = 'tiles/{z}/{x}/{y}'
layer_class = None
layer_kwargs = dict(min_zoom=0, max_zoom=20, tile_size=tile_size)
query = dict(token=self.token)
source_filters = frame.get('source_filters')
if source_filters is not None and source_filters != dict(band=1):
query.update(self.get_flat_filters(source_filters))

if 'raster' in url_path:
url_suffix += '.png'
layer_class = TileLayer
query.update(projection='EPSG:3857')
elif 'vector' in url_path:
layer_class = VectorTileLayer
if layer_class is not None:
query_string = urlencode(query)
map_layer = layer_class(
url=self.api_url + url_path + url_suffix + '?' + query_string,
**layer_kwargs,
)
self.map_layers.append(map_layer)
self.map.add_layer(map_layer)

def get_widget(self):
children = [
self.map,
self.output,
]
if self.max_frame:
children = [self.frame_slider, self.play_widget, self.frame_name_label, *children]
return widgets.VBox(children)


class UVDATExplorer:
def __init__(self, api_url=None, email=None, password=None, center=None, zoom=None):
if api_url is None:
msg = 'UVDATExplorer missing argument: %s must be specified.'
raise ValueError(msg % '`api_url`')
if not api_url.endswith('/'):
api_url += '/'
self.api_url = api_url
self.session = requests.Session()
self.token = None
self.authenticated = False
self.email = email
self.password = password
self.center = center or DEFAULT_CENTER
self.zoom = zoom or DEFAULT_ZOOM

# Widgets
self.tree = None
self.tree_nodes = {}
self.output = widgets.Output()
self.email_input = widgets.Text(description='Email:')
self.password_input = widgets.Password(description='Password:')
self.button = widgets.Button(description='Get Datasets')
self.button.on_click(self.get_datasets)
children = [self.output]

if email is None:
children.append(self.email_input)
if password is None:
children.append(self.password_input)

if email and password:
authenticated = self.authenticate()
if authenticated:
children.append(widgets.Label('Session Authenticated.'))
children.append(self.button)

# Display
self.display = display.display(widgets.VBox(children), display_id=True)
self.update_display(children)

def __del__(self):
self.session.close()

def authenticate(self):
with self.output:
self.output.clear_output()
email = self.email or self.email_input.value
password = self.password or self.password_input.value
self.email_input.value = ''
self.password_input.value = ''

response = requests.post(
self.api_url + 'token/',
dict(
username=email,
password=password,
),
)
if response.status_code == 200:
self.token = response.json().get('token')
self.session.headers['Authorization'] = f'Token {self.token}'
self.authenticated = True
return True
else:
print('Invalid login.')
return False

def get_datasets(self, *args):
with self.output:
if not self.authenticated:
self.authenticate()
response = self.session.get(self.api_url + 'datasets')
response.raise_for_status()
datasets = response.json().get('results')

self.tree = Tree()
for dataset in datasets:
node = Node(dataset.get('name'), icon='database')
node.observe(self.get_dataset_layers, 'selected')
self.tree_nodes[node._id] = dataset
self.tree.add_node(node)

children = [self.tree, self.output]
self.update_display(children)

def get_dataset_layers(self, event):
with self.output:
node = event.get('owner')
for child in node.nodes:
node.remove_node(child)
node_id = node._id
dataset = self.tree_nodes[node_id]
dataset_id = dataset.get('id')

response = self.session.get(self.api_url + f'datasets/{dataset_id}/layers')
response.raise_for_status()
layers = response.json()

for layer in layers:
child_node = Node(layer.get('name'), icon='file')
child_node.observe(self.select_layer, 'selected')
self.tree_nodes[child_node._id] = layer
node.add_node(child_node)

def select_layer(self, event):
with self.output:
node = event.get('owner')
node_id = node._id
layer = self.tree_nodes[node_id]

self.map = LayerRepresentation(
layer,
self.api_url,
self.session,
self.token,
self.center,
self.zoom,
)
children = [self.tree, self.output, self.map.get_widget()]
self.update_display(children)

def update_display(self, children):
self.display.update(widgets.VBox(children))

def _ipython_display_(self):
return self.display
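
One detail worth calling out: `source_filters` can be nested, and `get_flat_filters` collapses the nesting into dotted query keys before `update_frame` appends them (along with the token) to each tile URL, e.g. `rasters/<id>/tiles/{z}/{x}/{y}.png?token=...&projection=EPSG:3857`. The same recursion, extracted standalone with made-up filter values:

def get_flat_filters(filters):
    # Identical logic to LayerRepresentation.get_flat_filters.
    flat = {}
    for key, value in filters.items():
        if isinstance(value, dict):
            for k, v in get_flat_filters(value).items():
                flat[f'{key}.{k}'] = v
        else:
            flat[key] = value
    return flat

print(get_flat_filters({'properties': {'type': 'substation'}, 'band': 1}))
# {'properties.type': 'substation', 'band': 1}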
17 changes: 8 additions & 9 deletions sample_data/ingest_use_case.py
@@ -28,16 +28,16 @@ def ingest_file(file_info, index=0, dataset=None, chart=None):
file_location = Path(DOWNLOADS_FOLDER, file_path)
file_type = file_path.split('.')[-1]
if not file_location.exists():
-print(f'\t Downloading data file {file_name}.')
+print(f'\t\t Downloading data file {file_name}.')
file_location.parent.mkdir(parents=True, exist_ok=True)
with open(file_location, 'wb') as f:
r = requests.get(file_url)
r.raise_for_status()
f.write(r.content)

-existing = FileItem.objects.filter(name=file_name)
+existing = FileItem.objects.filter(dataset=dataset, name=file_name)
if existing.count():
-print('\t', f'FileItem {file_name} already exists.')
+print('\t\t', f'FileItem {file_name} already exists.')
else:
new_file_item = FileItem.objects.create(
name=file_name,
@@ -51,7 +51,7 @@ def ingest_file(file_info, index=0, dataset=None, chart=None):
),
index=index,
)
-print('\t', f'FileItem {new_file_item.name} created.')
+print('\t\t', f'FileItem {new_file_item.name} created.')
with file_location.open('rb') as f:
new_file_item.file.save(file_path, ContentFile(f.read()))

@@ -74,7 +74,7 @@ def ingest_projects(use_case):
},
)
if created:
-print('\t', f'Project {project_for_setting.name} created.')
+print('\t\t', f'Project {project_for_setting.name} created.')

project_for_setting.datasets.set(Dataset.objects.filter(name__in=project['datasets']))
project_for_setting.set_permissions(owner=User.objects.filter(is_superuser=True).first())
@@ -100,7 +100,7 @@ def ingest_charts(use_case):
metadata=chart.get('metadata'),
editable=chart.get('editable', False),
)
-print('\t', f'Chart {new_chart.name} created.')
+print('\t\t', f'Chart {new_chart.name} created.')
for index, file_info in enumerate(chart.get('files', [])):
ingest_file(
file_info,
@@ -109,7 +109,7 @@ def ingest_charts(use_case):
)
chart_for_conversion = new_chart

-print('\t', f'Converting data for {chart_for_conversion.name}...')
+print('\t\t', f'Converting data for {chart_for_conversion.name}.')
chart_for_conversion.spawn_conversion_task(
conversion_options=chart.get('conversion_options'),
asynchronous=False,
@@ -124,6 +124,7 @@ def ingest_datasets(use_case, include_large=False, dataset_indexes=None):
data = json.load(datasets_json)
for index, dataset in enumerate(data):
if dataset_indexes is None or index in dataset_indexes:
+print('\t- ', dataset['name'])
existing = Dataset.objects.filter(name=dataset['name'])
if existing.count():
dataset_for_conversion = existing.first()
@@ -133,10 +134,8 @@
name=dataset['name'],
description=dataset['description'],
category=dataset['category'],
-dataset_type=dataset.get('type', 'vector').upper(),
metadata=dataset.get('metadata', {}),
)
-print('\t', f'Dataset {new_dataset.name} created.')
for index, file_info in enumerate(dataset.get('files', [])):
ingest_file(
file_info,
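
The ingest changes are small but behavioral: the `FileItem` duplicate check is now scoped per dataset, log output gains an indent level, and the removed `dataset_type` field is dropped from `Dataset.objects.create`. A sketch of why the scoping matters, assuming the field names implied by the diff:

# Before: a file name already ingested by *any* dataset was skipped.
existing = FileItem.objects.filter(name=file_name)

# After: only a duplicate within the same dataset is skipped, so two
# datasets can each own a FileItem named, say, 'boundaries.zip'.
existing = FileItem.objects.filter(dataset=dataset, name=file_name)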