
Conversation

louispt1

louispt1 and others added 14 commits August 6, 2025 10:03
* Custom curves to and from dataframe

* Upload custom curves runner, scenario methods and validation on the model

* Custom curves: tests for runner, scenario and validation
* Start from excel bulk

* Improved styling for packer

* Unpack for inputs and metadata first implementation no tests

* Unpack for sortables and custom curves early commit before playing around

* From dataset for sortables

* Parse options from main sheet to enable repeating submodels across all scenarios listed in main

* Added gqueries

* WIP refactoring packer to suit new input format - partially working

* New excel format working pre-clean up

* Split packer into individual packs

* Simplify query processing

* Tests for query pack and custom curves pack

* Fix tests for multi-index

* Simplifying normalisation

* Simplifying inputs pack

* Improved tests

* refining notebooks

---------

Co-authored-by: Nora Schinkel <ncschinkel@gmail.com>
* Export output curves to a separate file by default on to_excel

* Swap from pipenv to poetry

* Clean up Jupyter notebooks
* Updating how you handle title in meta
* Clearing notebooks, updating poetry and handling setting environment not base url

* Update readme and input excel

* Tidying output format in Jupyter Notebooks

* PARAMETERS to SLIDER_SETTINGS

* Min max once at the start of slider_settings

* Fix metadata output and refine scenario to_dataframe

* Fix custom curve handling and avoid double caching issue
* Updated tests
* Added fetching a scenario directly in the Jupyter notebook to the example
* Convert from yml settings to .env
* Separated inputs and outputs into their own folders at root
@louispt1 force-pushed the concurrency branch 2 times, most recently from 5d9c1a2 to d7b3401 on August 18, 2025 at 12:13
Comment on lines +71 to +96
try:
    df = pd.read_csv(io.StringIO(csv_text), index_col=0)
except Exception as e:
    return ServiceResult.fail([f"Failed to parse bulk CSV: {e}"])

results: Dict[str, io.StringIO] = {}
warnings: list[str] = []

# Group columns by the prefix before the first ":" or "/"
groups: Dict[str, list[str]] = {}
for col in df.columns:
    base = str(col)
    for sep in (":", "/"):
        if sep in base:
            base = base.split(sep, 1)[0]
            break
    groups.setdefault(base, []).append(col)

# Write each group to its own in-memory CSV buffer
for base, cols in groups.items():
    try:
        sub = df[cols].dropna(how="all")
        buf = io.StringIO()
        sub.to_csv(buf, index=True)
        buf.seek(0)
        results[base] = buf
    except Exception as e:
        warnings.append(f"{base}: Failed to prepare CSV: {e}")

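For context, the grouping step above splits each column name on the first ":" or "/" and collects the columns sharing a prefix. A minimal standalone sketch of that behavior (the function name and sample data are illustrative, not from the PR):

```python
import io
from typing import Dict, List

import pandas as pd


def split_bulk_csv(csv_text: str) -> Dict[str, str]:
    """Group columns by the prefix before the first ':' or '/'
    and return one CSV string per group."""
    df = pd.read_csv(io.StringIO(csv_text), index_col=0)
    groups: Dict[str, List[str]] = {}
    for col in df.columns:
        base = str(col)
        for sep in (":", "/"):
            if sep in base:
                base = base.split(sep, 1)[0]
                break
        groups.setdefault(base, []).append(col)
    # Drop rows that are empty for a whole group, then serialize each group
    return {
        base: df[cols].dropna(how="all").to_csv()
        for base, cols in groups.items()
    }
```

With columns `a:x`, `a:y`, and `b/z`, this yields two CSVs keyed `a` and `b`, each keeping the original index.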

Member

This logic does not belong here unfortunately. But we can fix it later.

@louispt1
Author

@noracato I think with the new async approach this is an unnecessary change - it is 5 seconds slower than the async approach. If you agree, I would close this PR and the Engine one, as they are unnecessary.

Base automatically changed from louis to version-2 on September 17, 2025 at 13:18