Fix typos #193

Merged · 1 commit · Dec 21, 2021
2 changes: 1 addition & 1 deletion CMakeLists.txt
@@ -105,7 +105,7 @@ option(BUILD_Z5PY "Build z5 python bindings" ON)
# Include gtest submodule
###############################

# NOTE I tried to replace thhis with the conda package and
# NOTE I tried to replace this with the conda package and
# use find_pacakge(GTest), but although CMake could find the
# libraries, this led to horrible linker errors
if(BUILD_TESTS)
2 changes: 1 addition & 1 deletion README.md
@@ -165,7 +165,7 @@ int main() {
// get handle to a File on the filesystem
z5::filesystem::handle::File f("data.zr");
// if you wanted to use a different backend, for example AWS, you
// would need to use this insetead:
// would need to use this instead:
// z5::s3::handle::File f;

// create the file in zarr format
4 changes: 2 additions & 2 deletions include/z5/common.hxx
@@ -8,8 +8,8 @@
// include boost::filesystem or std::filesystem header
// and define the namespace fs
#ifdef WITH_BOOST_FS
#ifndef BOOST_FILESYSTEM_NO_DEPERECATED
#define BOOST_FILESYSTEM_NO_DEPERECATED
#ifndef BOOST_FILESYSTEM_NO_DEPRECATED
#define BOOST_FILESYSTEM_NO_DEPRECATED
@DimitriPapadopoulos (Contributor · Author) commented on Dec 21, 2021:

Is BOOST_FILESYSTEM_NO_DEPRECATED still required? I'm asking because BOOST_FILESYSTEM_NO_DEPRECATED was clearly not defined until now, and yet this has been working.

BTW, it is clearly a z5 typo, not a Boost typo, because as far as I can see, BOOST_FILESYSTEM_NO_DEPERECATED has never been used in Boost:
https://github.com/search?q=org%3Aboostorg+BOOST_FILESYSTEM_NO_DEPERECATED
unlike BOOST_FILESYSTEM_NO_DEPRECATED:
https://github.com/search?q=org%3Aboostorg+BOOST_FILESYSTEM_NO_DEPRECATED&type=commits

Owner replied:

This is probably not required anymore, since all major builds are using C++17 and std::filesystem now.
Anyway, good to fix this in case some legacy builds with boost filesystem are still around.

@DimitriPapadopoulos (Contributor · Author) replied on Dec 21, 2021:

Actually BOOST_FILESYSTEM_NO_DEPRECATED will disable deprecated parts of the Boost API. I understand this is an internal z5 issue that will not result in z5 API changes. Since all tests still pass, I guess it's OK.

#endif
#include <boost/filesystem.hpp>
#include <boost/filesystem/fstream.hpp>
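
For context on the thread above, here is a minimal sketch of the header-selection pattern used in common.hxx. The WITH_BOOST_FS guard, the fs namespace alias, and the (now correctly spelled) BOOST_FILESYSTEM_NO_DEPRECATED define come from the diff; the std::filesystem branch is an assumption for illustration, not the verbatim z5 header.

// Hypothetical sketch, not the verbatim z5 code: pick a filesystem
// implementation and expose it under a single namespace alias `fs`.
#ifdef WITH_BOOST_FS
    // Defining BOOST_FILESYSTEM_NO_DEPRECATED before the include hides the
    // deprecated parts of the Boost.Filesystem API. With the old misspelling
    // the macro was never actually defined, which is why builds kept working.
    #ifndef BOOST_FILESYSTEM_NO_DEPRECATED
    #define BOOST_FILESYSTEM_NO_DEPRECATED
    #endif
    #include <boost/filesystem.hpp>
    namespace fs = boost::filesystem;
#else
    // C++17 builds rely on the standard library instead.
    #include <filesystem>
    namespace fs = std::filesystem;
#endif
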
2 changes: 1 addition & 1 deletion include/z5/filesystem/metadata.hxx
@@ -55,7 +55,7 @@ namespace metadata_detail {
j["zarr_format"] = metadata.zarrFormat;
} else {
// n5 stores attributes and metadata in the same file,
// so we need to make sure that we don't ovewrite attributes
// so we need to make sure that we don't overwrite attributes
try {
readAttributes(handle, j);
} catch(std::runtime_error) {} // read attributes throws RE if there are no attributes, we can just ignore this
2 changes: 1 addition & 1 deletion include/z5/handle.hxx
@@ -8,7 +8,7 @@
namespace z5 {
namespace handle {

// TODO this should go into docstrings and then a documnetation should be generated from them
// TODO this should go into docstrings and then a documentation should be generated from them
// (using doxygen?!)
/*
* This header includes the base classes for the z5 handle objects.
2 changes: 1 addition & 1 deletion include/z5/metadata.hxx
@@ -220,7 +220,7 @@ namespace z5 {
types::ShapeType shape;
types::ShapeType chunkShape;

// compressor name and opyions
// compressor name and options
types::Compressor compressor;
types::CompressionOptions compressionOptions;

2 changes: 1 addition & 1 deletion include/z5/multiarray/xtensor_access.hxx
@@ -68,7 +68,7 @@ namespace multiarray {
chunkSize = std::accumulate(chunkShape.begin(), chunkShape.end(),
1, std::multiplies<std::size_t>());

// read the data from storge
// read the data from storage
std::vector<char> dataBuffer;
ds.readRawChunk(chunkId, dataBuffer);

24 changes: 12 additions & 12 deletions include/z5/multiarray/xtensor_util.hxx
@@ -43,10 +43,10 @@ namespace multiarray {

template<typename T, typename VIEW, typename SHAPE_TYPE>
inline void copyBufferToViewND(const std::vector<T> & buffer,
xt::xexpression<VIEW> & viewExperession,
xt::xexpression<VIEW> & viewExpression,
const SHAPE_TYPE & arrayStrides) {
// get the view into the out array and the number of dimension
auto & view = viewExperession.derived_cast();
auto & view = viewExpression.derived_cast();
const std::size_t dim = view.dimension();
// buffer size and view shape
const std::size_t bufSize = buffer.size();
@@ -66,7 +66,7 @@ namespace multiarray {
// we start the outer loop at the second from last dimension
// (last dimension is the fastest moving and consecutive in memory)
for(int d = dim - 2; d >= 0;) {
// copy the piece of buffer that is consectuve to our view
// copy the piece of buffer that is consecutive to our view
std::copy(buffer.begin() + bufferOffset,
buffer.begin() + bufferOffset + memLen,
&view(0) + viewOffset);
@@ -127,26 +127,26 @@ namespace multiarray {
// TODO this only works for row-major (C) memory layout
template<typename T, typename VIEW, typename SHAPE_TYPE>
inline void copyBufferToView(const std::vector<T> & buffer,
xt::xexpression<VIEW> & viewExperession,
xt::xexpression<VIEW> & viewExpression,
const SHAPE_TYPE & arrayStrides) {
auto & view = viewExperession.derived_cast();
auto & view = viewExpression.derived_cast();
// ND impl doesn't work for 1D
if(view.dimension() == 1) {
// std::copy(buffer.begin(), buffer.end(), view.begin());
const auto bufferView = xt::adapt(buffer, view.shape());
view = bufferView;
} else {
copyBufferToViewND(buffer, viewExperession, arrayStrides);
copyBufferToViewND(buffer, viewExpression, arrayStrides);
}
}


template<typename T, typename VIEW, typename SHAPE_TYPE>
inline void copyViewToBufferND(const xt::xexpression<VIEW> & viewExperession,
inline void copyViewToBufferND(const xt::xexpression<VIEW> & viewExpression,
std::vector<T> & buffer,
const SHAPE_TYPE & arrayStrides) {
// get the view into the out array and the number of dimension
const auto & view = viewExperession.derived_cast();
const auto & view = viewExpression.derived_cast();
const std::size_t dim = view.dimension();
// buffer size and view shape
const std::size_t bufSize = buffer.size();
@@ -166,7 +166,7 @@ namespace multiarray {
// we start the outer loop at the second from last dimension
// (last dimension is the fastest moving and consecutive in memory)
for(int d = dim - 2; d >= 0;) {
// copy the piece of buffer that is consectuve to our view
// copy the piece of buffer that is consecutive to our view
std::copy(&view(0) + viewOffset,
&view(0) + viewOffset + memLen,
buffer.begin() + bufferOffset);
@@ -226,18 +226,18 @@ namespace multiarray {

// TODO this only works for row-major (C) memory layout
template<typename T, typename VIEW, typename SHAPE_TYPE>
inline void copyViewToBuffer(const xt::xexpression<VIEW> & viewExperession,
inline void copyViewToBuffer(const xt::xexpression<VIEW> & viewExpression,
std::vector<T> & buffer,
const SHAPE_TYPE & arrayStrides) {
const auto & view = viewExperession.derived_cast();
const auto & view = viewExpression.derived_cast();
// can't use the ND implementation in 1d, hence we resort to xtensor
// which should be fine in 1D
if(view.dimension() == 1) {
// std::copy(view.begin(), view.end(), buffer.begin());
auto bufferView = xt::adapt(buffer, view.shape());
bufferView = view;
} else {
copyViewToBufferND(viewExperession, buffer, arrayStrides);
copyViewToBufferND(viewExpression, buffer, arrayStrides);
}
}
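
An aside on the 1D special case in this hunk: instead of the strided N-D copy, the flat buffer is adapted to an xtensor expression and assigned directly. A small self-contained sketch of that pattern with made-up values, assuming only xt::adapt and plain assignment (this mirrors the dimension() == 1 branch of copyBufferToView, not the exact z5 calling code):

#include <array>
#include <cstddef>
#include <vector>
#include <xtensor/xadapt.hpp>
#include <xtensor/xtensor.hpp>

int main() {
    // flat chunk buffer, as it would come back from a raw chunk read
    std::vector<float> buffer = {1.f, 2.f, 3.f, 4.f};
    // 1D destination with the same number of elements
    std::array<std::size_t, 1> shape = {buffer.size()};
    xt::xtensor<float, 1> view(shape);
    // adapt the buffer to the view's shape and assign it in one go
    auto bufferView = xt::adapt(buffer, view.shape());
    view = bufferView;
    return 0;
}
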

2 changes: 1 addition & 1 deletion include/z5/util/file_mode.hxx
@@ -7,7 +7,7 @@ namespace z5 {

class FileMode {
public:
// the inidividual options
// the individual options
static const char can_write = 1;
static const char can_create = 2;
static const char must_not_exist = 4;
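
The individual options above are bit flags, so a concrete file mode is built by OR-ing them together and queried with bitwise AND. A hypothetical standalone illustration (the flag values are copied from the hunk; the combination is made up and not necessarily a real z5 mode):

#include <cstdio>

static const char can_write      = 1;
static const char can_create     = 2;
static const char must_not_exist = 4;

int main() {
    // e.g. an exclusive-create style mode: writable, may create the target,
    // but the target must not exist yet
    const char mode = can_write | can_create | must_not_exist;
    // checking a single option is a bitwise AND against its flag
    std::printf("writable: %d\n", (mode & can_write) != 0);
    return 0;
}
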
4 changes: 2 additions & 2 deletions include/z5/util/functions.hxx
@@ -67,7 +67,7 @@ namespace util {


// TODO add option to ignore fill value
// TODO maybe it would be benefitial to have intermediate unordered sets
// TODO maybe it would be beneficial to have intermediate unordered sets
template<class T>
void unique(const Dataset & dataset, const int nThreads, std::set<T> & uniques) {

@@ -101,7 +101,7 @@


// TODO add option to ignore fill value
// TODO maybe it would be benefitial to have intermediate unordered maps
// TODO maybe it would be beneficial to have intermediate unordered maps
template<class T>
void uniqueWithCounts(const Dataset & dataset, const int nThreads, std::map<T, size_t> & uniques) {
// allocate the per thread data
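
On the TODO just above: one way to realize it is to let every thread collect values into its own unordered set and merge into the ordered result only once at the end. A rough self-contained sketch of that pattern with plain std::thread and made-up data (not the z5 thread pool or its Dataset API):

#include <cstddef>
#include <set>
#include <thread>
#include <unordered_set>
#include <vector>

int main() {
    const int nThreads = 4;
    std::vector<int> data(1000);
    for (std::size_t i = 0; i < data.size(); ++i) {
        data[i] = static_cast<int>(i % 37);  // made-up values
    }

    // one unordered_set per thread, so no locking is needed while scanning
    std::vector<std::unordered_set<int>> perThread(nThreads);
    std::vector<std::thread> threads;
    for (int t = 0; t < nThreads; ++t) {
        threads.emplace_back([t, nThreads, &data, &perThread]() {
            for (std::size_t i = t; i < data.size(); i += nThreads) {
                perThread[t].insert(data[i]);
            }
        });
    }
    for (auto & th : threads) {
        th.join();
    }

    // merge the per-thread sets into the final ordered std::set once
    std::set<int> uniques;
    for (const auto & s : perThread) {
        uniques.insert(s.begin(), s.end());
    }
    return 0;
}
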
2 changes: 1 addition & 1 deletion include/z5/util/threadpool.hxx
@@ -459,7 +459,7 @@ inline void parallel_foreach_impl(
}

// Runs foreach on a single thread.
// Used for API compatibility when the numbe of threads is 0.
// Used for API compatibility when the number of threads is 0.
template<class ITER, class F>
inline void parallel_foreach_single_thread(
ITER begin,
2 changes: 1 addition & 1 deletion src/bench/bench_python/bench.py
@@ -317,4 +317,4 @@ def main(path, name=None, save_folder='./tmp_files', iterations=5, compressors=N
parser.add_argument('--iterations', '-i', default=5)

args = parser.parse_args()
main(args.path, args.name, args.save_folder, args.iterstions)
main(args.path, args.name, args.save_folder, args.iterations)
4 changes: 2 additions & 2 deletions src/python/CMakeLists.txt
@@ -60,7 +60,7 @@ macro(addPythonModule)

set(options "")
set(oneValueArgs NESTED_NAME)
set(multiValueArgs SOURCES LIBRRARIES)
set(multiValueArgs SOURCES LIBRARIES)
@DimitriPapadopoulos (Contributor · Author) commented on Dec 21, 2021:

I hope this wasn't on purpose...

Owner replied:

No, not on purpose ;)

cmake_parse_arguments(ADD_PY_MOD "${options}" "${oneValueArgs}" "${multiValueArgs}" ${ARGN} )

# get name of the module
@@ -84,7 +84,7 @@ macro(addPythonModule)
# link additional libraries
target_link_libraries(_${MODULE_NAME}
PUBLIC
${ADD_PY_MOD_LIBRRARIES}
${ADD_PY_MOD_LIBRARIES}
)

IF(${CMAKE_SYSTEM_NAME} MATCHES "Darwin")
2 changes: 1 addition & 1 deletion src/python/lib/CMakeLists.txt
@@ -8,7 +8,7 @@ addPythonModule(
factory.cxx
handles.cxx
util.cxx
LIBRRARIES
LIBRARIES
${COMPRESSION_LIBRARIES}
${FILESYSTEM_LIBRARIES}
${CLOUD_LIBRARIES}
2 changes: 1 addition & 1 deletion src/python/lib/dataset.cxx
@@ -196,7 +196,7 @@ namespace z5 {
try {
path = ds.path().string();
} catch(...) {
throw std::runtime_error("Can only picke filesystem datasets");
throw std::runtime_error("Can only pick filesystem datasets");
}
return py::make_tuple(path, ds.mode().mode());
},
2 changes: 1 addition & 1 deletion src/python/module/z5py/converter.py
@@ -29,7 +29,7 @@ def convert_to_h5(in_path, out_path,
fit_to_roi=False, **h5_kwargs):
""" Convert n5 ot zarr dataset to hdf5 dataset.

The chunks of the output dataset must be spcified.
The chunks of the output dataset must be specified.
The dataset is converted to hdf5 in parallel over the chunks.
Note that hdf5 does not support parallel write access, so more threads
may not speed up the conversion.
6 changes: 3 additions & 3 deletions src/python/module/z5py/dataset.py
@@ -54,7 +54,7 @@ def __init__(self, dset_impl, handle, parent, name, n_threads=1):
def __array__(self, dtype=None):
""" Create a numpy array containing the whole dataset.

NOTE: Datasets are not interchangeble with arrays!
NOTE: Datasets are not interchangeable with arrays!
Every time this method is called the whole dataset is loaded into memory!
"""
arr = self[...]
@@ -409,7 +409,7 @@ def read_direct(self, dest, source_sel=None, dest_sel=None):
dest (array) destination object into which the read data is written to.
dest_sel (slice array) selection of data to write to ``dest``. Defaults to the whole range of ``dest``.
source_sel (slice array) selection in dataset to read from. Defaults to the whole range of the dataset.
Spaces, defined by ``source_sel`` and ``dest_sel`` must be in the same size but dont need to have the same
Spaces, defined by ``source_sel`` and ``dest_sel`` must be in the same size but don't need to have the same
offset
"""
if source_sel is None:
@@ -428,7 +428,7 @@ def write_direct(self, source, source_sel=None, dest_sel=None):
source_sel (slice array) selection of data to write from ``source``. Defaults to the whole range of
``source``.
dest_sel (slice array) selection in dataset to write to. Defaults to the whole range of the dataset.
Spaces, defined by ``source_sel`` and ``dest_sel`` must be in the same size but dont need to have the same
Spaces, defined by ``source_sel`` and ``dest_sel`` must be in the same size but don't need to have the same
offset
"""
if dest_sel is None:
2 changes: 1 addition & 1 deletion src/python/module/z5py/shape_utils.py
@@ -69,7 +69,7 @@ def rectify_shape(arr, required_shape):
important_shape = tuple(important_shape)

msg = ('could not broadcast input array from shape {} into shape {}; '
'complicated broacasting not supported').format(arr.shape, required_shape)
'complicated broadcasting not supported').format(arr.shape, required_shape)

if len(important_shape) > len(required_shape):
raise ValueError(msg)
2 changes: 1 addition & 1 deletion src/python/module/z5py/util.py
@@ -127,7 +127,7 @@ def copy_dataset_impl(f_in, f_out, in_path_in_file, out_path_in_file,
else:
compression_opts = {'compression_opts': None} if compression_opts is None else compression_opts

# if we don't have block-shape explitictly given, use chunk size
# if we don't have block-shape explicitly given, use chunk size
# otherwise check that it's a multiple of chunks
if block_shape is None:
block_shape = chunks
2 changes: 1 addition & 1 deletion src/python/test/test_s3.py
@@ -36,7 +36,7 @@ def make_test_data(bucket_name=None):
if bucket_name is None:
bucket_name = TestS3.bucket_name

# access the s3 filesysyem
# access the s3 filesystem
fs = s3fs.S3FileSystem(anon=False)
store = s3fs.S3Map(root=bucket_name, s3=fs)

2 changes: 1 addition & 1 deletion src/test/multiarray/test_broadcast.cxx
@@ -47,7 +47,7 @@ namespace multiarray {
ArrayShape arrayShape(shape.begin(), shape.end());
const T val = 42;

// write scalcar and load for completely overlapping array consisting of 8 chunks
// write scalar and load for completely overlapping array consisting of 8 chunks
{
ArrayShape offset({0, 0, 0});
ArrayShape subShape({20, 20, 20});
@@ -102,7 +102,7 @@
// assert floatData[i] == 42.;
// }
//
// // test the doubl block
// // test the double block
// final DataBlock<?> loadedDouble = n5.readBlock(doubleSetRaw, attrsDouble, new long[]{x, y, z});
// final double[] doubleData = (double[]) loadedDouble.getData();
// for(int i = 0; i < doubleData.length; i++) {