Releases · openlayer-ai/openlayer-python
v0.0.0a01
Shift back to PyPI because setuptools is not available on Test PyPI.
v0.3.0
Added
- A `Project` helper class.
- A convenience method, `create_or_load_project`, which loads a project if it already exists (a usage sketch follows this entry).
- Accepts AZURE as a `DeploymentType`.
Changed
- Compatibility with the Unbox API OpenAPI refactor.
- Models and datasets must be added to projects.
- Deprecates `categorical_features_map` in favor of `categorical_feature_names` for model and dataset uploads.
- Moved the `TaskType` attribute from the `Model` level to the `Project` level. Creating a `Project` now requires specifying the `TaskType`.
- Removed `name` from `add_dataset`.
- Changed `description` to `commit_message` in `add_dataset`, `add_dataframe`, and `add_model`.
- `requirements_txt_file` is no longer optional for model uploads.
- NLP dataset character limit is now 1000 characters.
Fixed
- More comprehensive model and dataset upload validation.
- Bug with duplicate feature names for NLP datasets when uploading the same dataset twice.
- Added `protobuf==3.2.0` to requirements to fix a bug with model deployment.
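The following is a minimal, hypothetical sketch of the project-centric flow introduced in v0.3.0. Only `create_or_load_project`, the project-level `TaskType`, `commit_message`, and `categorical_feature_names` come from these notes; the import path, client constructor, enum members, and the remaining parameter names and values are assumptions.

```python
# Hypothetical sketch of the v0.3.0 project-centric flow; the import path,
# client constructor, and enum members are assumptions, not the documented API.
from unboxapi import UnboxClient, TaskType  # assumed import path

client = UnboxClient(api_key="YOUR_API_KEY")  # assumed constructor

# Creating a project now requires a TaskType (moved from the Model level).
project = client.create_or_load_project(
    name="churn-prediction",                   # hypothetical project name
    task_type=TaskType.TabularClassification,  # assumed enum member
)

# Datasets are now added to the project; `description` became `commit_message`,
# and `categorical_features_map` is deprecated for `categorical_feature_names`.
project.add_dataset(
    file_path="training.csv",                  # assumed parameter name
    commit_message="Initial training data",
    categorical_feature_names=["plan", "region"],
)
```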
0.3.0rc1
Added
- A `Project` helper class.
- A convenience method, `create_or_load_project`, which loads a project if it already exists.
- Accepts AZURE as a `DeploymentType`.
Changed
- Compatibility with the Unbox API OpenAPI refactor.
- Models and datasets must be added to projects.
- Deprecates `categorical_features_map` in favor of `categorical_feature_names` for model and dataset uploads.
- Moved the `TaskType` attribute from the `Model` level to the `Project` level. Creating a `Project` now requires specifying the `TaskType`.
- Removed `name` from `add_dataset`.
- Changed `description` to `commit_message` in `add_dataset`, `add_dataframe`, and `add_model`.
- `requirements_txt_file` is no longer optional for model uploads.
- NLP dataset character limit is now 1000 characters.
Fixed
- More comprehensive model and dataset upload validation.
- Bug with duplicate feature names for NLP datasets when uploading the same dataset twice.
- Added `protobuf==3.2.0` to requirements to fix a bug with model deployment.
0.3.0a5
Changed
- NLP dataset character limit to 1000 characters.
Fixed
- Issue with duplicate feature names for NLP datasets.
v0.3.0a4
Fixed
- `explainability_tokenizer` validation.
v0.3.0a3
Added
- `explainability_tokenizer` as an optional argument to the `add_model` method, to be used by the explainability techniques (see the sketch after this entry).
Changed
- `requirements_txt_file` is no longer optional for model uploads.
- Removed `id` from POST params sent to the API server.
Fixed
- Added `protobuf==3.2.0` to requirements to fix a bug with model deployment.
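A hedged sketch of the new `explainability_tokenizer` argument. The notes confirm only that `add_model` accepts it optionally and that `requirements_txt_file` is now required; the client setup, predict wrapper, tokenizer source, and the other parameter names are assumptions.

```python
# Illustrative only: client setup, predict wrapper, and tokenizer source are
# assumptions; explainability_tokenizer and the now-required
# requirements_txt_file are the add_model changes described in this release.
from unboxapi import UnboxClient, TaskType  # assumed import path
from transformers import AutoTokenizer      # hypothetical tokenizer source

client = UnboxClient(api_key="YOUR_API_KEY")  # assumed constructor
project = client.create_or_load_project(
    name="sentiment",
    task_type=TaskType.TextClassification,    # assumed enum member
)

def predict_proba(model, texts):
    """Hypothetical predict wrapper; signature assumed."""
    ...

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

project.add_model(
    function=predict_proba,                    # assumed parameter name
    requirements_txt_file="requirements.txt",  # no longer optional as of this release
    explainability_tokenizer=tokenizer,        # optional; used by explainability techniques
)
```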
v0.3.0a2
Fixed
- Link to the project page when loading / creating a project.
- Presigned URL endpoint when using AWS / GCP / Azure.
Changed
- Removed links when uploading datasets and models; just the project link is appropriate.
v0.3.0a1
Changed
- Default Unbox server URL (https://api-staging.unbox.ai/).
v0.3.0a0
Added
- A `Project` helper class.
- A convenience method, `create_or_load_project`, which loads a project if it already exists.
Changed
- Models and datasets must be added to projects.
- Deprecates `categorical_features_map` in favor of `categorical_feature_names` for model and dataset uploads.
- Moved the `TaskType` attribute from the `Model` level to the `Project` level. Creating a `Project` now requires specifying the `TaskType`.
- Removed `name` from `add_dataset`.
- Changed `description` to `commit_message` in `add_dataset`, `add_dataframe`, and `add_model`.
v0.2.0a1
Fixed
- Fail early if `custom_model_code`, `dependent_dir`, or `requirements_txt_file` are `None` when the model type is `ModelType.custom` (see the sketch below).
- Fail early if `model` is not `None` when the model type is `ModelType.custom`.
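A minimal sketch of a custom-model upload that passes these fail-early checks. `ModelType.custom`, `custom_model_code`, `dependent_dir`, `requirements_txt_file`, and the `model=None` requirement come from the notes above; the client setup, the `model_type` parameter name, and the argument values are assumptions.

```python
# Sketch only: client setup and the model_type parameter name are assumptions.
# The v0.2.0a1 fail-early rules: for ModelType.custom, custom_model_code,
# dependent_dir, and requirements_txt_file must all be set, and model must be None.
from unboxapi import UnboxClient, ModelType  # assumed import path

client = UnboxClient(api_key="YOUR_API_KEY")  # assumed constructor

client.add_model(
    model_type=ModelType.custom,               # assumed parameter name
    model=None,                                # must be None for custom models
    custom_model_code="custom_model.py",       # required; value shape (path vs. code string) assumed
    dependent_dir="model_assets/",             # required; path assumed
    requirements_txt_file="requirements.txt",  # required
)
```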