argopy is a Python library that aims to ease Argo data access, visualisation and manipulation for regular users as well as Argo experts and operators. Documentation is available at https://argopy.readthedocs.io/en/latest/
Several Python packages for Argo data already exist: we continuously try to build on these libraries to provide you with a single powerful tool. List your tool here!
By default, argopy relies on online services to fetch data; you can check the status of these web API services here.
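If you want to check availability yourself before fetching, here is a minimal sketch that pings the Ifremer erddap server with requests; the endpoint URL is an assumption here, so refer to the documentation for the actual services used by each data source.
# Minimal availability check for an online data source
# (the erddap URL below is an assumption; see the documentation
# for the actual endpoints used by argopy).
import requests

try:
    r = requests.get("https://www.ifremer.fr/erddap/info/index.html", timeout=10)
    print("erddap server reachable:", r.ok)
except requests.RequestException as err:
    print("erddap server unreachable:", err)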
Install the latest release with conda:
conda install -c conda-forge argopy
or pip:
pip install argopy
Since this is a young library in active development, you can also install directly from this repo to benefit from the latest version:
pip install git+http://github.com/euroargodev/argopy.git@master
The argopy library should work under all operating systems (Linux, Mac and Windows) and with Python versions 3.6, 3.7 and 3.8.
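As a quick sanity check after installation, you can import the library and print its version (this assumes the usual __version__ attribute is exposed):
import argopy
print(argopy.__version__)  # the release or development version you just installed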
Init the default data fetcher like:
from argopy import DataFetcher as ArgoDataFetcher
argo_loader = ArgoDataFetcher()
and then, request data for a specific space/time domain:
# Box format: [lon_min, lon_max, lat_min, lat_max, pres_min, pres_max]
ds = argo_loader.region([-85, -45, 10., 20., 0, 10.]).to_xarray()
# with optional time bounds appended: [..., date_start, date_end]
ds = argo_loader.region([-85, -45, 10., 20., 0, 1000., '2012-01', '2012-12']).to_xarray()
for profiles of a given float:
import numpy as np  # only needed for the np.arange example below

ds = argo_loader.profile(6902746, 34).to_xarray()
ds = argo_loader.profile(6902746, np.arange(12, 45)).to_xarray()
ds = argo_loader.profile(6902746, [1, 12]).to_xarray()
or for one or a collection of floats:
ds = argo_loader.float(6902746).to_xarray()
ds = argo_loader.float([6902746, 6902747, 6902757, 6902766]).to_xarray()
By default, fetched data are returned in memory as an xarray.Dataset. From there, it is easy to convert them to other formats, like a pandas DataFrame:
ds = ArgoDataFetcher().profile(6902746, 34).to_xarray()
df = ds.to_dataframe()
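From there, standard pandas tools apply; the sketch below inspects the converted profile, assuming the usual Argo variable names ('TEMP', 'PSAL') are present as columns:
# Inspect the converted profile with plain pandas
# ('TEMP' and 'PSAL' follow the Argo naming convention; adjust
# to the columns actually present in df).
print(df.head())
print(df[['TEMP', 'PSAL']].describe())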
or to export it to files:
ds = argo_loader.region([-85,-45,10.,20.,0,100.]).to_xarray()
ds.to_netcdf('my_selection.nc')
# or by profiles:
ds.argo.point2profile().to_netcdf('my_selection.nc')
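Beyond netCDF, the same selection can be written to any format pandas supports; a small example with CSV (the file name is arbitrary):
# Export the same selection to CSV through pandas
ds.to_dataframe().to_csv('my_selection.csv')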
The index object is returned as a pandas DataFrame.
Init the fetcher:
from argopy import IndexFetcher as ArgoIndexFetcher
index_loader = ArgoIndexFetcher()
index_loader = ArgoIndexFetcher(backend='erddap')
# Local ftp backend:
# index_loader = ArgoIndexFetcher(backend='localftp', path_ftp='/path/to/your/argo/ftp/', index_file='ar_index_global_prof.txt')
and then, request an index for a specific space/time domain:
idx = index_loader.region([-85, -45, 10., 20.])
idx = index_loader.region([-85, -45, 10., 20., '2012-01', '2014-12'])
or for a collection of floats:
idx = index_loader.float(6902746)
idx = index_loader.float([6902746, 6902747, 6902757, 6902766])
then you can view your index as a pandas DataFrame or an xarray Dataset:
idx.to_dataframe()
idx.to_xarray()
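Since the index is a plain pandas DataFrame, you can filter or group it with the usual pandas machinery; the sketch below assumes columns from the standard Argo index file header ('date', 'latitude', 'longitude', 'institution'), so adjust to what is actually returned:
# Work with the index as a regular pandas DataFrame
# (column names are assumed from the standard Argo index file header).
df_idx = idx.to_dataframe()
print(df_idx.shape)                          # number of indexed profiles
print(df_idx['institution'].value_counts())  # profiles per institution/DAC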
For plotting methods, you'll need matplotlib, cartopy and seaborn installed (they're not in the requirements).
For plotting the map of your query:
idx.plot('trajectory')
For plotting the distribution of DACs or profiler types of the indexed profiles:
idx.plot('dac')
idx.plot('profiler')
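If you prefer to build a map yourself, here is a minimal cartopy sketch of the indexed profile positions; it assumes the index DataFrame exposes 'longitude' and 'latitude' columns:
# Hand-made map of indexed profile positions
# (assumes 'longitude'/'latitude' columns in the index DataFrame).
import matplotlib.pyplot as plt
import cartopy.crs as ccrs

df_idx = idx.to_dataframe()

ax = plt.axes(projection=ccrs.PlateCarree())
ax.coastlines()
ax.scatter(df_idx['longitude'], df_idx['latitude'], s=5, transform=ccrs.PlateCarree())
ax.set_title('Indexed Argo profile positions')
plt.show()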
Our next big steps:
- To provide Bio-geochemical variables
We aim to provide high-level helper methods to load Argo data and meta-data from:
- Ifremer erddap
- local copy of the GDAC ftp folder
- Index files (local and online)
- Argovis
- Online GDAC ftp
- any other useful access point to Argo data?
We also aim to provide tutorials and high-level helper methods to visualise and plot Argo data and meta-data:
- Map with trajectories
- Waterfall plots
- T/S diagram
- etc !