Commit 750daab

Deploying to gh-pages from @ 50541d0 🚀
github-merge-queue[bot] committed Aug 28, 2024
1 parent fe13433 commit 750daab
Showing 7 changed files with 136 additions and 207 deletions.
141 changes: 65 additions & 76 deletions _modules/arkouda/io.html

Large diffs are not rendered by default.

46 changes: 17 additions & 29 deletions _sources/autoapi/arkouda/index.rst.txt
@@ -37785,7 +37785,7 @@ Package Contents
:raises RuntimeError: Raised if there's a server-side error thrown


-.. py:function:: load(path_prefix: str, file_format: str = 'INFER', dataset: str = 'array', calc_string_offsets: bool = False, column_delim: str = ',') -> Union[Mapping[str, Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray, arkouda.array_view.ArrayView, arkouda.categorical.Categorical, arkouda.dataframe.DataFrame, arkouda.client_dtypes.IPv4, arkouda.timeclass.Datetime, arkouda.timeclass.Timedelta, arkouda.index.Index]]]
+.. py:function:: load(path_prefix: str, file_format: str = 'INFER', dataset: str = 'array', calc_string_offsets: bool = False, column_delim: str = ',') -> Union[Mapping[str, Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray, arkouda.categorical.Categorical, arkouda.dataframe.DataFrame, arkouda.client_dtypes.IPv4, arkouda.timeclass.Datetime, arkouda.timeclass.Timedelta, arkouda.index.Index]]]

Load a pdarray previously saved with ``pdarray.save()``.
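
A rough usage sketch based on the signature above; the path prefix is a placeholder for files previously written with the ``pdarray.save()`` call the docstring names, and a running Arkouda server is assumed:

>>> import arkouda as ak
>>> ak.connect()  # assumes a reachable arkouda server
>>> # 'path/prefix' is a hypothetical prefix for previously saved files
>>> x = ak.load('path/prefix', dataset='array', file_format='INFER')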

@@ -45193,7 +45193,7 @@ Package Contents
array(['+5"f', '-P]3', '4k', '~HFF', 'F'])


-.. py:function:: read(filenames: Union[str, List[str]], datasets: Optional[Union[str, List[str]]] = None, iterative: bool = False, strictTypes: bool = True, allow_errors: bool = False, calc_string_offsets=False, column_delim: str = ',', read_nested: bool = True, has_non_float_nulls: bool = False, fixed_len: int = -1) -> Union[Mapping[str, Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray, arkouda.array_view.ArrayView, arkouda.categorical.Categorical, arkouda.dataframe.DataFrame, arkouda.client_dtypes.IPv4, arkouda.timeclass.Datetime, arkouda.timeclass.Timedelta, arkouda.index.Index]]]
+.. py:function:: read(filenames: Union[str, List[str]], datasets: Optional[Union[str, List[str]]] = None, iterative: bool = False, strictTypes: bool = True, allow_errors: bool = False, calc_string_offsets=False, column_delim: str = ',', read_nested: bool = True, has_non_float_nulls: bool = False, fixed_len: int = -1) -> Union[Mapping[str, Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray, arkouda.categorical.Categorical, arkouda.dataframe.DataFrame, arkouda.client_dtypes.IPv4, arkouda.timeclass.Datetime, arkouda.timeclass.Timedelta, arkouda.index.Index]]]

Read datasets from files.
File Type is determined automatically.
@@ -45234,11 +45234,8 @@ Package Contents
calculation, which can have an impact on performance.
:type fixed_len: int

-:returns: Dictionary of {datasetName: pdarray, String, SegArray, or ArrayView}
-:rtype: Returns a dictionary of Arkouda pdarrays, Arkouda Strings, Arkouda Segarrays,
-    or Arkouda ArrayViews.
+:returns: Dictionary of {datasetName: pdarray, String, or SegArray}
+:rtype: Returns a dictionary of Arkouda pdarrays, Arkouda Strings, or Arkouda Segarrays.

:raises RuntimeError: If invalid filetype is detected

@@ -45271,7 +45268,7 @@ Package Contents
>>> x = ak.read('path/name_prefix*') # Reads HDF5
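
A sketch of consuming the dictionary return described above; the glob and dataset names are hypothetical:

>>> import arkouda as ak
>>> ak.connect()
>>> # with more than one dataset found, a dictionary comes back
>>> data = ak.read('path/name_prefix*', datasets=['ints', 'strs'])
>>> for name, obj in data.items():
...     print(name, type(obj))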


-.. py:function:: read_csv(filenames: Union[str, List[str]], datasets: Optional[Union[str, List[str]]] = None, column_delim: str = ',', allow_errors: bool = False) -> Union[Mapping[str, Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray, arkouda.array_view.ArrayView, arkouda.categorical.Categorical, arkouda.dataframe.DataFrame, arkouda.client_dtypes.IPv4, arkouda.timeclass.Datetime, arkouda.timeclass.Timedelta, arkouda.index.Index]]]
+.. py:function:: read_csv(filenames: Union[str, List[str]], datasets: Optional[Union[str, List[str]]] = None, column_delim: str = ',', allow_errors: bool = False) -> Union[Mapping[str, Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray, arkouda.categorical.Categorical, arkouda.dataframe.DataFrame, arkouda.client_dtypes.IPv4, arkouda.timeclass.Datetime, arkouda.timeclass.Timedelta, arkouda.index.Index]]]

Read CSV file(s) into Arkouda objects. If more than one dataset is found, the objects
will be returned in a dictionary mapping the dataset name to the Arkouda object
@@ -45289,11 +45286,8 @@
the total number of files skipped due to failure and up to 10 filenames.
:type allow_errors: bool

-:returns: Dictionary of {datasetName: pdarray, String, SegArray, or ArrayView}
-:rtype: Returns a dictionary of Arkouda pdarrays, Arkouda Strings, Arkouda Segarrays,
-    or Arkouda ArrayViews.
+:returns: Dictionary of {datasetName: pdarray, String, or SegArray}
+:rtype: Returns a dictionary of Arkouda pdarrays, Arkouda Strings, or Arkouda Segarrays.

:raises ValueError: Raised if all datasets are not present in all parquet files or if one or
more of the specified files do not exist
@@ -45314,7 +45308,7 @@
bytes as uint(8).
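
A minimal sketch of the call, assuming CSV files matching a hypothetical glob exist on the server's filesystem:

>>> import arkouda as ak
>>> ak.connect()
>>> # 'data*.csv' is a placeholder; with allow_errors, unreadable files are skipped
>>> objs = ak.read_csv('data*.csv', column_delim=',', allow_errors=True)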


-.. py:function:: read_hdf(filenames: Union[str, List[str]], datasets: Optional[Union[str, List[str]]] = None, iterative: bool = False, strict_types: bool = True, allow_errors: bool = False, calc_string_offsets: bool = False, tag_data=False) -> Union[Mapping[str, Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray, arkouda.array_view.ArrayView, arkouda.categorical.Categorical, arkouda.dataframe.DataFrame, arkouda.client_dtypes.IPv4, arkouda.timeclass.Datetime, arkouda.timeclass.Timedelta, arkouda.index.Index]]]
+.. py:function:: read_hdf(filenames: Union[str, List[str]], datasets: Optional[Union[str, List[str]]] = None, iterative: bool = False, strict_types: bool = True, allow_errors: bool = False, calc_string_offsets: bool = False, tag_data=False) -> Union[Mapping[str, Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray, arkouda.categorical.Categorical, arkouda.dataframe.DataFrame, arkouda.client_dtypes.IPv4, arkouda.timeclass.Datetime, arkouda.timeclass.Timedelta, arkouda.index.Index]]]

Read Arkouda objects from HDF5 file/s

@@ -45343,11 +45337,8 @@ Package Contents
that the data was pulled from.
:type tagData: bool

-:returns: Dictionary of {datasetName: pdarray, String, SegArray, or ArrayView}
-:rtype: Returns a dictionary of Arkouda pdarrays, Arkouda Strings, Arkouda Segarrays,
-    or Arkouda ArrayViews.
+:returns: Dictionary of {datasetName: pdarray, String, SegArray}
+:rtype: Returns a dictionary of Arkouda pdarrays, Arkouda Strings, or Arkouda Segarrays.

:raises ValueError: Raised if all datasets are not present in all hdf5 files or if one or
more of the specified files do not exist
@@ -45382,7 +45373,7 @@ Package Contents
>>> x = ak.read_hdf('path/name_prefix*') # Reads HDF5
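
Building on the example above, a hedged sketch that exercises the optional parameters (the glob and dataset name are placeholders):

>>> import arkouda as ak
>>> ak.connect()
>>> data = ak.read_hdf('path/name_prefix*', datasets=['ints'],
...                    allow_errors=True, calc_string_offsets=True)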


-.. py:function:: read_parquet(filenames: Union[str, List[str]], datasets: Optional[Union[str, List[str]]] = None, iterative: bool = False, strict_types: bool = True, allow_errors: bool = False, tag_data: bool = False, read_nested: bool = True, has_non_float_nulls: bool = False, fixed_len: int = -1) -> Union[Mapping[str, Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray, arkouda.array_view.ArrayView, arkouda.categorical.Categorical, arkouda.dataframe.DataFrame, arkouda.client_dtypes.IPv4, arkouda.timeclass.Datetime, arkouda.timeclass.Timedelta, arkouda.index.Index]]]
+.. py:function:: read_parquet(filenames: Union[str, List[str]], datasets: Optional[Union[str, List[str]]] = None, iterative: bool = False, strict_types: bool = True, allow_errors: bool = False, tag_data: bool = False, read_nested: bool = True, has_non_float_nulls: bool = False, fixed_len: int = -1) -> Union[Mapping[str, Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray, arkouda.categorical.Categorical, arkouda.dataframe.DataFrame, arkouda.client_dtypes.IPv4, arkouda.timeclass.Datetime, arkouda.timeclass.Timedelta, arkouda.index.Index]]]

Read Arkouda objects from Parquet file/s

@@ -45418,11 +45409,8 @@ Package Contents
calculation, which can have an impact on performance.
:type fixed_len: int

-:returns: Dictionary of {datasetName: pdarray, String, SegArray, or ArrayView}
-:rtype: Returns a dictionary of Arkouda pdarrays, Arkouda Strings, Arkouda Segarrays,
-    or Arkouda ArrayViews.
+:returns: Dictionary of {datasetName: pdarray, String, or SegArray}
+:rtype: Returns a dictionary of Arkouda pdarrays, Arkouda Strings, or Arkouda Segarrays.

:raises ValueError: Raised if all datasets are not present in all parquet files or if one or
more of the specified files do not exist
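
A minimal sketch, assuming Parquet files matching a hypothetical glob exist; the flags shown are the ones documented above:

>>> import arkouda as ak
>>> ak.connect()
>>> data = ak.read_parquet('path/name_prefix*', read_nested=True,
...                        has_non_float_nulls=True)
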
@@ -45683,7 +45671,7 @@ Package Contents
array([1, 3, 3])


-.. py:function:: save_all(columns: Union[Mapping[str, Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray, arkouda.array_view.ArrayView]], List[Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray, arkouda.array_view.ArrayView]]], prefix_path: str, names: Optional[List[str]] = None, file_format='HDF5', mode: str = 'truncate', file_type: str = 'distribute', compression: Optional[str] = None) -> None
+.. py:function:: save_all(columns: Union[Mapping[str, Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray]], List[Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray]]], prefix_path: str, names: Optional[List[str]] = None, file_format='HDF5', mode: str = 'truncate', file_type: str = 'distribute', compression: Optional[str] = None) -> None

DEPRECATED
Save multiple named pdarrays to HDF5/Parquet files.
@@ -47946,7 +47934,7 @@ Package Contents
bytes as uint(8).


-.. py:function:: to_hdf(columns: Union[Mapping[str, Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray, arkouda.array_view.ArrayView]], List[Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray, arkouda.array_view.ArrayView]]], prefix_path: str, names: Optional[List[str]] = None, mode: str = 'truncate', file_type: str = 'distribute') -> None
+.. py:function:: to_hdf(columns: Union[Mapping[str, Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray]], List[Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray]]], prefix_path: str, names: Optional[List[str]] = None, mode: str = 'truncate', file_type: str = 'distribute') -> None

Save multiple named pdarrays to HDF5 files.

@@ -47994,7 +47982,7 @@ Package Contents
>>> ak.to_hdf([a, b], 'path/name_prefix', names=['a', 'b'])
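
The signature also accepts a mapping of names to objects; a short sketch of that form:

>>> import arkouda as ak
>>> ak.connect()
>>> a = ak.arange(5)
>>> b = ak.array(['a', 'b', 'c', 'd', 'e'])
>>> # mapping keys become the dataset names, so names= is not needed
>>> ak.to_hdf({'a': a, 'b': b}, 'path/name_prefix', mode='truncate')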


-.. py:function:: to_parquet(columns: Union[Mapping[str, Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray, arkouda.array_view.ArrayView]], List[Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray, arkouda.array_view.ArrayView]]], prefix_path: str, names: Optional[List[str]] = None, mode: str = 'truncate', compression: Optional[str] = None, convert_categoricals: bool = False) -> None
+.. py:function:: to_parquet(columns: Union[Mapping[str, Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray]], List[Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray]]], prefix_path: str, names: Optional[List[str]] = None, mode: str = 'truncate', compression: Optional[str] = None, convert_categoricals: bool = False) -> None

Save multiple named pdarrays to Parquet files.
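
A hedged sketch of the mapping form; 'snappy' is assumed to be among the accepted compression values:

>>> import arkouda as ak
>>> ak.connect()
>>> a = ak.arange(5)
>>> # 'snappy' assumed valid; omit compression for uncompressed output
>>> ak.to_parquet({'a': a}, 'path/name_prefix', compression='snappy')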

@@ -48743,7 +48731,7 @@ Package Contents

.. py:function:: unsqueeze(p)

-.. py:function:: update_hdf(columns: Union[Mapping[str, Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray, arkouda.array_view.ArrayView]], List[Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray, arkouda.array_view.ArrayView]]], prefix_path: str, names: Optional[List[str]] = None, repack: bool = True)
+.. py:function:: update_hdf(columns: Union[Mapping[str, Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray]], List[Union[arkouda.pdarrayclass.pdarray, arkouda.strings.Strings, arkouda.segarray.SegArray]]], prefix_path: str, names: Optional[List[str]] = None, repack: bool = True)

Overwrite the datasets with name appearing in names or keys in columns if columns
is a dictionary
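
A rough sketch of an overwrite-in-place round trip, with hypothetical dataset names:

>>> import arkouda as ak
>>> ak.connect()
>>> a = ak.arange(5)
>>> ak.to_hdf({'a': a}, 'path/name_prefix')
>>> # overwrite dataset 'a' in place; repack reclaims the freed space
>>> ak.update_hdf({'a': a + 1}, 'path/name_prefix', repack=True)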
