feat(datasets): make dataset arguments keyword-only (kedro-org#358)
* feat(datasets): make `APIDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `BioSequenceDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `ParquetDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `EmailMessageDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `GeoJSONDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `HoloviewsWriter.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `JSONDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `MatplotlibWriter.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `GMLDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `GraphMLDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make NetworkX `JSONDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `PickleDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `ImageDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make plotly `JSONDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `PlotlyDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make polars `CSVDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make polars `GenericDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make redis `PickleDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `SnowparkTableDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `SVMLightDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `TensorFlowModelDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `TextDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `YAMLDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `ManagedTableDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `VideoDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `CSVDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `DeltaTableDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `ExcelDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `FeatherDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `GBQTableDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `GenericDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make pandas `JSONDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make pandas `ParquetDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `SQLTableDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `XMLDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `HDFDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `DeltaTableDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `SparkDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `SparkHiveDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `SparkJDBCDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `SparkStreamingDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `IncrementalDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* feat(datasets): make `LazyPolarsDataset.__init__` keyword only

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* docs(datasets): update doctests for HoloviewsWriter

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>

* Update release notes

Signed-off-by: Merel Theisen <merel.theisen@quantumblack.com>

---------

Signed-off-by: Felix Scherz <felixwscherz@gmail.com>
Signed-off-by: Merel Theisen <merel.theisen@quantumblack.com>
Co-authored-by: Felix Scherz <felixwscherz@gmail.com>
Co-authored-by: Merel Theisen <49397448+merelcht@users.noreply.github.com>
Co-authored-by: Merel Theisen <merel.theisen@quantumblack.com>
Signed-off-by: tgoelles <thomas.goelles@gmail.com>
4 people authored and tgoelles committed Jun 6, 2024
1 parent 677d0a2 commit 79f58a9
Showing 56 changed files with 109 additions and 43 deletions.
2 changes: 2 additions & 0 deletions kedro-datasets/RELEASE.md
@@ -6,11 +6,13 @@

## Bug fixes and other changes
* Fixed bug with loading models saved with `TensorFlowModelDataset`.
* Make dataset parameters keyword-only.

## Community contributions
Many thanks to the following Kedroids for contributing PRs to this release:
* [Edouard59](https://github.com/Edouard59)
* [Miguel Rodriguez Gutierrez](https://github.com/MigQ2)
* [felixscherz](https://github.com/felixscherz)

# Release 1.8.0
## Major features and improvements
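For readers of the release notes, the user-facing effect of the new keyword-only signatures is sketched below. This is a minimal illustration, assuming `kedro-datasets` at a version that includes this change; `CSVDataset` stands in for any of the datasets touched by this PR.

```python
from kedro_datasets.pandas import CSVDataset

# Keyword construction is unchanged and remains the recommended style.
dataset = CSVDataset(filepath="data/01_raw/cars.csv", load_args={"sep": ","})

# Positional construction now fails, because every parameter after the
# bare ``*`` in ``__init__`` is keyword-only.
try:
    CSVDataset("data/01_raw/cars.csv")
except TypeError as err:
    print(f"Positional call rejected: {err}")
```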
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/api/api_dataset.py
@@ -90,6 +90,7 @@ class APIDataset(AbstractDataset[None, requests.Response]):

def __init__( # noqa: PLR0913
self,
*,
url: str,
method: str = "GET",
load_args: Dict[str, Any] = None,
@@ -47,6 +47,7 @@ class BioSequenceDataset(AbstractDataset[List, List]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
load_args: Dict[str, Any] = None,
save_args: Dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/dask/parquet_dataset.py
@@ -83,6 +83,7 @@ class ParquetDataset(AbstractDataset[dd.DataFrame, dd.DataFrame]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
load_args: Dict[str, Any] = None,
save_args: Dict[str, Any] = None,
@@ -197,14 +197,14 @@ class ManagedTableDataset(AbstractVersionedDataset):

def __init__( # noqa: PLR0913
self,
+ *,
table: str,
catalog: str = None,
database: str = "default",
write_mode: Union[str, None] = None,
dataframe_type: str = "spark",
primary_key: Optional[Union[str, List[str]]] = None,
version: Version = None,
- *,
# the following parameters are used by project hooks
# to create or update table properties
schema: Dict[str, Any] = None,
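The `ManagedTableDataset` hunk above differs slightly from the others: the signature already contained a `*` marker, but it sat in the middle of the parameter list, so only the trailing project-hook parameters were keyword-only. Moving it to directly after `self` extends the rule to every parameter. A small illustration of that language behaviour, using hypothetical functions rather than the real dataset class:

```python
def before(table, catalog=None, *, schema=None):
    """Only ``schema`` is keyword-only (the old layout)."""
    return table, catalog, schema


def after(*, table, catalog=None, schema=None):
    """Every parameter is keyword-only (the new layout)."""
    return table, catalog, schema


before("trips", "main")               # still legal with the old layout
after(table="trips", catalog="main")  # the only legal call style with the new layout

try:
    after("trips", "main")
except TypeError as err:
    print(f"Rejected: {err}")
```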
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/email/message_dataset.py
@@ -52,6 +52,7 @@ class EmailMessageDataset(AbstractVersionedDataset[Message, Message]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
load_args: Dict[str, Any] = None,
save_args: Dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/geopandas/geojson_dataset.py
@@ -48,6 +48,7 @@ class GeoJSONDataset(

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
load_args: Dict[str, Any] = None,
save_args: Dict[str, Any] = None,
3 changes: 2 additions & 1 deletion kedro-datasets/kedro_datasets/holoviews/holoviews_writer.py
@@ -28,7 +28,7 @@ class HoloviewsWriter(AbstractVersionedDataset[HoloViews, NoReturn]):
>>> from kedro_datasets.holoviews import HoloviewsWriter
>>>
>>> curve = hv.Curve(range(10))
- >>> holoviews_writer = HoloviewsWriter("/tmp/holoviews")
+ >>> holoviews_writer = HoloviewsWriter(filepath="/tmp/holoviews")
>>>
>>> holoviews_writer.save(curve)
@@ -38,6 +38,7 @@ class HoloviewsWriter(AbstractVersionedDataset[HoloViews, NoReturn]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
fs_args: Dict[str, Any] = None,
credentials: Dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/json/json_dataset.py
@@ -50,6 +50,7 @@ class JSONDataset(AbstractVersionedDataset[Any, Any]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
save_args: Dict[str, Any] = None,
version: Version = None,
@@ -103,6 +103,7 @@ class MatplotlibWriter(

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
fs_args: Dict[str, Any] = None,
credentials: Dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/networkx/gml_dataset.py
@@ -38,6 +38,7 @@ class GMLDataset(AbstractVersionedDataset[networkx.Graph, networkx.Graph]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
load_args: Dict[str, Any] = None,
save_args: Dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/networkx/graphml_dataset.py
@@ -37,6 +37,7 @@ class GraphMLDataset(AbstractVersionedDataset[networkx.Graph, networkx.Graph]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
load_args: Dict[str, Any] = None,
save_args: Dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/networkx/json_dataset.py
@@ -38,6 +38,7 @@ class JSONDataset(AbstractVersionedDataset[networkx.Graph, networkx.Graph]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
load_args: Dict[str, Any] = None,
save_args: Dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/pandas/csv_dataset.py
@@ -70,6 +70,7 @@ class CSVDataset(AbstractVersionedDataset[pd.DataFrame, pd.DataFrame]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
load_args: Dict[str, Any] = None,
save_args: Dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/pandas/deltatable_dataset.py
@@ -86,6 +86,7 @@ class DeltaTableDataset(AbstractDataset):

def __init__( # noqa: PLR0913
self,
*,
filepath: Optional[str] = None,
catalog_type: Optional[DataCatalog] = None,
catalog_name: Optional[str] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/pandas/excel_dataset.py
@@ -110,6 +110,7 @@ class ExcelDataset(

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
engine: str = "openpyxl",
load_args: Dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/pandas/feather_dataset.py
@@ -71,6 +71,7 @@ class FeatherDataset(AbstractVersionedDataset[pd.DataFrame, pd.DataFrame]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
load_args: Dict[str, Any] = None,
save_args: Dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/pandas/gbq_dataset.py
@@ -64,6 +64,7 @@ class GBQTableDataset(AbstractDataset[None, pd.DataFrame]):

def __init__( # noqa: PLR0913
self,
*,
dataset: str,
table_name: str,
project: str = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/pandas/generic_dataset.py
@@ -84,6 +84,7 @@ class GenericDataset(AbstractVersionedDataset[pd.DataFrame, pd.DataFrame]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
file_format: str,
load_args: Dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/pandas/hdf_dataset.py
@@ -57,6 +57,7 @@ class HDFDataset(AbstractVersionedDataset[pd.DataFrame, pd.DataFrame]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
key: str,
load_args: Dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/pandas/json_dataset.py
@@ -65,6 +65,7 @@ class JSONDataset(AbstractVersionedDataset[pd.DataFrame, pd.DataFrame]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
load_args: Dict[str, Any] = None,
save_args: Dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/pandas/parquet_dataset.py
@@ -76,6 +76,7 @@ class ParquetDataset(AbstractVersionedDataset[pd.DataFrame, pd.DataFrame]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
load_args: Dict[str, Any] = None,
save_args: Dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/pandas/sql_dataset.py
@@ -153,6 +153,7 @@ class SQLTableDataset(AbstractDataset[pd.DataFrame, pd.DataFrame]):

def __init__( # noqa: PLR0913
self,
*,
table_name: str,
credentials: dict[str, Any],
load_args: dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/pandas/xml_dataset.py
@@ -48,6 +48,7 @@ class XMLDataset(AbstractVersionedDataset[pd.DataFrame, pd.DataFrame]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
load_args: Dict[str, Any] = None,
save_args: Dict[str, Any] = None,
@@ -67,6 +67,7 @@ class IncrementalDataset(PartitionedDataset):

def __init__( # noqa: PLR0913
self,
*,
path: str,
dataset: str | type[AbstractDataset] | dict[str, Any],
checkpoint: str | dict[str, Any] | None = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/pickle/pickle_dataset.py
@@ -71,6 +71,7 @@ class PickleDataset(AbstractVersionedDataset[Any, Any]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
backend: str = "pickle",
load_args: Dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/pillow/image_dataset.py
@@ -34,6 +34,7 @@ class ImageDataset(AbstractVersionedDataset[Image.Image, Image.Image]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
save_args: Dict[str, Any] = None,
version: Version = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/plotly/json_dataset.py
@@ -52,6 +52,7 @@ class JSONDataset(

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
load_args: Dict[str, Any] = None,
save_args: Dict[str, Any] = None,
10 changes: 9 additions & 1 deletion kedro-datasets/kedro_datasets/plotly/plotly_dataset.py
@@ -68,6 +68,7 @@ class PlotlyDataset(JSONDataset):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
plotly_args: Dict[str, Any],
load_args: Dict[str, Any] = None,
@@ -113,7 +114,14 @@ def __init__( # noqa: PLR0913
metadata: Any arbitrary metadata.
This is ignored by Kedro, but may be consumed by users or external plugins.
"""
- super().__init__(filepath, load_args, save_args, version, credentials, fs_args)
+ super().__init__(
+     filepath=filepath,
+     load_args=load_args,
+     save_args=save_args,
+     version=version,
+     credentials=credentials,
+     fs_args=fs_args,
+ )
self._plotly_args = plotly_args

_fs_args = deepcopy(fs_args) or {}
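The `PlotlyDataset` hunk above also shows the knock-on effect inside the package: once the parent `JSONDataset.__init__` is keyword-only, the subclass can no longer forward its arguments positionally and has to spell out each keyword in the `super().__init__` call. A minimal sketch of that pattern with hypothetical classes:

```python
from typing import Any, Dict, Optional


class Parent:
    def __init__(self, *, filepath: str, load_args: Optional[Dict[str, Any]] = None) -> None:
        self.filepath = filepath
        self.load_args = load_args or {}


class Child(Parent):
    def __init__(self, *, filepath: str, plot_args: Optional[Dict[str, Any]] = None) -> None:
        # ``super().__init__(filepath)`` would raise TypeError here;
        # the keyword-only parent forces keyword forwarding.
        super().__init__(filepath=filepath)
        self.plot_args = plot_args or {}


child = Child(filepath="plot.json", plot_args={"theme": "dark"})
print(child.filepath, child.plot_args)
```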
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/polars/csv_dataset.py
@@ -68,6 +68,7 @@ class CSVDataset(AbstractVersionedDataset[pl.DataFrame, pl.DataFrame]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
load_args: Dict[str, Any] = None,
save_args: Dict[str, Any] = None,
@@ -54,6 +54,7 @@ class EagerPolarsDataset(AbstractVersionedDataset[pl.DataFrame, pl.DataFrame]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
file_format: str,
load_args: Dict[str, Any] = None,
@@ -75,6 +75,7 @@ class LazyPolarsDataset(AbstractVersionedDataset[pl.LazyFrame, PolarsFrame]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
file_format: str,
load_args: Optional[Dict[str, Any]] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/redis/redis_dataset.py
@@ -61,6 +61,7 @@ class PickleDataset(AbstractDataset[Any, Any]):

def __init__( # noqa: PLR0913
self,
*,
key: str,
backend: str = "pickle",
load_args: Dict[str, Any] = None,
@@ -103,6 +103,7 @@ class SnowparkTableDataset(AbstractDataset):

def __init__( # noqa: PLR0913
self,
*,
table_name: str,
schema: str = None,
database: str = None,
2 changes: 1 addition & 1 deletion kedro-datasets/kedro_datasets/spark/deltatable_dataset.py
@@ -65,7 +65,7 @@ class DeltaTableDataset(AbstractDataset[None, DeltaTable]):
# using ``ThreadRunner`` instead
_SINGLE_PROCESS = True

- def __init__(self, filepath: str, metadata: Dict[str, Any] = None) -> None:
+ def __init__(self, *, filepath: str, metadata: Dict[str, Any] = None) -> None:
"""Creates a new instance of ``DeltaTableDataset``.
Args:
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/spark/spark_dataset.py
@@ -262,6 +262,7 @@ class SparkDataset(AbstractVersionedDataset[DataFrame, DataFrame]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
file_format: str = "parquet",
load_args: Dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/spark/spark_hive_dataset.py
@@ -70,6 +70,7 @@ class SparkHiveDataset(AbstractDataset[DataFrame, DataFrame]):

def __init__( # noqa: PLR0913
self,
*,
database: str,
table: str,
write_mode: str = "errorifexists",
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/spark/spark_jdbc_dataset.py
@@ -70,6 +70,7 @@ class SparkJDBCDataset(AbstractDataset[DataFrame, DataFrame]):

def __init__( # noqa: PLR0913
self,
*,
url: str,
table: str,
credentials: Dict[str, Any] = None,
@@ -42,6 +42,7 @@ class SparkStreamingDataset(AbstractDataset):

def __init__(
self,
*,
filepath: str = "",
file_format: str = "",
save_args: Dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/svmlight/svmlight_dataset.py
@@ -88,6 +88,7 @@ class SVMLightDataset(AbstractVersionedDataset[_DI, _DO]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
load_args: Dict[str, Any] = None,
save_args: Dict[str, Any] = None,
@@ -62,6 +62,7 @@ class TensorFlowModelDataset(AbstractVersionedDataset[tf.keras.Model, tf.keras.M

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
load_args: Dict[str, Any] = None,
save_args: Dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/text/text_dataset.py
@@ -44,6 +44,7 @@ class TextDataset(AbstractVersionedDataset[str, str]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
version: Version = None,
credentials: Dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/video/video_dataset.py
@@ -268,6 +268,7 @@ class VideoDataset(AbstractDataset[AbstractVideo, AbstractVideo]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
fourcc: Optional[str] = "mp4v",
credentials: Dict[str, Any] = None,
1 change: 1 addition & 0 deletions kedro-datasets/kedro_datasets/yaml/yaml_dataset.py
@@ -47,6 +47,7 @@ class YAMLDataset(AbstractVersionedDataset[Dict, Dict]):

def __init__( # noqa: PLR0913
self,
*,
filepath: str,
save_args: Dict[str, Any] = None,
version: Version = None,