[DOC] Fix typos (#1290)
Found via `codespell -L
nam,ans,bage,te,mapp,zar,caf,fro,som,tha,tje,yot,bu,fo,ressources,onl,regon,licens,variabl`
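The `-L` flag above is codespell's ignore-words list: words that look like misspellings (e.g. `te`, `fro`, `som` appearing in data or identifiers) are skipped so they don't surface as false positives. As a rough illustration of that skip-list idea — not codespell itself, and using a tiny hypothetical corrections table rather than codespell's real dictionary — a minimal sketch might look like:

```python
import re

# Hypothetical corrections table; codespell ships a far larger one.
CORRECTIONS = {
    "infact": "in fact",
    "supress": "suppress",
    "optinal": "optional",
    "reccomended": "recommended",
}

def find_typos(text, ignore_words=()):
    """Return (word, suggestion) pairs found in text, skipping ignored words."""
    ignore = {w.lower() for w in ignore_words}
    hits = []
    for word in re.findall(r"[A-Za-z]+", text):
        key = word.lower()
        if key in CORRECTIONS and key not in ignore:
            hits.append((word, CORRECTIONS[key]))
    return hits

# "optinal" is suppressed via the ignore list, mirroring `codespell -L ...`
print(find_typos("This is optinal; infact it is reccomended.",
                 ignore_words=["optinal"]))
```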
kianmeng authored Sep 23, 2023
1 parent 6bb6a2a commit 43a606a
Showing 14 changed files with 23 additions and 23 deletions.
2 changes: 1 addition & 1 deletion examples/notebooks/Pivoting Data from Wide to Long.ipynb
@@ -3564,7 +3564,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"The data above requires extracting `a`, `ab` and `ac` from `1` and `2`. This is another example of a paired column. We could solve this using [pd.wide_to_long](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.wide_to_long.html); infact there is a very good solution from [Stack Overflow](https://stackoverflow.com/a/45124775/7175713)"
+"The data above requires extracting `a`, `ab` and `ac` from `1` and `2`. This is another example of a paired column. We could solve this using [pd.wide_to_long](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.wide_to_long.html); in fact there is a very good solution from [Stack Overflow](https://stackoverflow.com/a/45124775/7175713)"
]
},
{
4 changes: 2 additions & 2 deletions examples/notebooks/anime.ipynb
@@ -55,7 +55,7 @@
"metadata": {},
"outputs": [],
"source": [
-"# Supress user warnings when we try overwriting our custom pandas flavor functions\n",
+"# Suppress user warnings when we try overwriting our custom pandas flavor functions\n",
"import warnings\n",
"warnings.filterwarnings('ignore')"
]
@@ -1316,7 +1316,7 @@
" :param df: A pandas DataFrame.\n",
" :param column_name: A `str` indicating which column the split action is to be made.\n",
" :param start: optional An `int` for the start index of the slice\n",
-" :param stop: optinal An `int` for the end index of the slice\n",
+" :param stop: optional An `int` for the end index of the slice\n",
" :param pat: String or regular expression to split on. If not specified, split on whitespace.\n",
"\n",
" \"\"\"\n",
2 changes: 1 addition & 1 deletion examples/notebooks/board_games.ipynb
@@ -938,7 +938,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
-"### What is the relationship between games' player numbers, reccomended minimum age, and the game's estimated length?"
+"### What is the relationship between games' player numbers, recommended minimum age, and the game's estimated length?"
]
},
{
2 changes: 1 addition & 1 deletion examples/notebooks/dirty_data.ipynb
@@ -708,7 +708,7 @@
{
"cell_type": "markdown",
"source": [
-"Note how now we have really nice column names! You might be wondering why I'm not modifying the two certifiation columns -- that is the next thing we'll tackle."
+"Note how now we have really nice column names! You might be wondering why I'm not modifying the two certification columns -- that is the next thing we'll tackle."
],
"metadata": {}
},
8 changes: 4 additions & 4 deletions examples/notebooks/medium_franchise.ipynb
@@ -15,7 +15,7 @@
"\n",
"* String operations with regular expressions (with `pandas-favor`)\n",
"* Data type changes (with `pyjanitor`)\n",
-"* Split strings in cells into seperate rows (with `pandas-flavor`)\n",
+"* Split strings in cells into separate rows (with `pandas-flavor`)\n",
"* Split strings in cells into separate columns (with `pyjanitor` + `pandas-flavor`)\n",
"* Filter dataframe values based on substring pattern (with `pyjanitor`)\n",
"* Column value remapping with fuzzy substring matching (with `pyjanitor` + `pandas-flavor`)\n",
@@ -66,7 +66,7 @@
"metadata": {},
"outputs": [],
"source": [
-"# Supress user warnings when we try overwriting our custom pandas flavor functions\n",
+"# Suppress user warnings when we try overwriting our custom pandas flavor functions\n",
"import warnings\n",
"warnings.filterwarnings('ignore')"
]
@@ -220,7 +220,7 @@
"# [pandas-flavor]\n",
"@pf.register_dataframe_method\n",
"def str_remove(df, column_name: str, pattern: str = ''):\n",
-" \"\"\"Remove string patten from a column\n",
+" \"\"\"Remove string pattern from a column\n",
"\n",
" Wrapper around df.str.replace()\n",
"\n",
@@ -595,7 +595,7 @@
" column_name: str\n",
" Name of the column to be operated on\n",
" into: List[str], default to None\n",
-" New column names for the splitted columns\n",
+" New column names for the split columns\n",
" sep: str, default to ''\n",
" Separator at which to split the column\n",
"\n",
2 changes: 1 addition & 1 deletion janitor/functions/complete.py
@@ -326,7 +326,7 @@ def _computations_complete(
# instead of assign (which is also a for loop),
# to cater for scenarios where the column_name is not a string
# assign only works with keys that are strings
-# Also, the output wil be floats (for numeric types),
+# Also, the output will be floats (for numeric types),
# even if all the columns could be integers
# user can always convert to int if required
for column_name, value in fill_value.items():
2 changes: 1 addition & 1 deletion janitor/functions/conditional_join.py
@@ -798,7 +798,7 @@ def _multiple_conditional_join_le_lt(
# and then build the remaining indices,
# using _generate_indices function
# the aim of this for loop is to see if there is
-# the possiblity of a range join, and if there is,
+# the possibility of a range join, and if there is,
# then use the optimised path
le_lt = None
ge_gt = None
2 changes: 1 addition & 1 deletion janitor/functions/factorize_columns.py
@@ -17,7 +17,7 @@ def factorize_columns(
This method will create a new column with the string `_enc` appended
after the original column's name.
-This can be overriden with the suffix parameter.
+This can be overridden with the suffix parameter.
Internally, this method uses pandas `factorize` method.
It takes in an optional suffix and keyword arguments also.
8 changes: 4 additions & 4 deletions mkdocs/devguide.md
@@ -35,7 +35,7 @@ and mount the repository directory inside your Docker container.
Follow best practices to submit a pull request by making a feature branch.
Now, hack away, and submit in your pull request!

-You shouln't be able to access the cloned repo
+You shouldn't be able to access the cloned repo
on your local hard drive.
If you do want local access, then clone the repo locally first
before selecting "Remote Containers: Open Folder In Container".
@@ -153,7 +153,7 @@ Now you can make your changes locally.

### Check your environment

-To ensure that your environemnt is properly set up, run the following command:
+To ensure that your environment is properly set up, run the following command:

```bash
python -m pytest -m "not turtle"
@@ -165,7 +165,7 @@ development and you are ready to contribute 🥳.
### Check your code

When you're done making changes, commit your staged files with a meaningful message.
-While we have automated checks that run before code is commited via pre-commit and GitHub Actions
+While we have automated checks that run before code is committed via pre-commit and GitHub Actions
to run tests before code can be merged,
you can still manually run the following commands to check that your changes are properly
formatted and that all tests still pass.
@@ -188,7 +188,7 @@ To do so:
the optional dependencies (e.g. `rdkit`) installed.

!!! info
-* pre-commit **does not run** your tests locally rather all tests are run in continous integration (CI).
+* pre-commit **does not run** your tests locally rather all tests are run in continuous integration (CI).
* All tests must pass in CI before the pull request is accepted,
and the continuous integration system up on GitHub Actions
will help run all of the tests before they are committed to the repository.
4 changes: 2 additions & 2 deletions nbconvert_config.py
@@ -269,7 +269,7 @@
#
# you can overwrite :meth:`preprocess_cell` to apply a transformation
# independently on each cell or :meth:`preprocess` if you prefer your own logic.
-# See corresponding docstring for informations.
+# See corresponding docstring for information.
#
# Disabled by default and can be enabled via the config by
# 'c.YourPreprocessorName.enabled = True'
@@ -430,7 +430,7 @@
# DebugWriter(WriterBase) configuration
# ------------------------------------------------------------------------------

-## Consumes output from nbconvert export...() methods and writes usefull
+## Consumes output from nbconvert export...() methods and writes useful
# debugging information to the stdout. The information includes a list of
# resources that were extracted from the notebook(s) during export.

2 changes: 1 addition & 1 deletion tests/functions/test_coalesce.py
@@ -56,7 +56,7 @@ def test_coalesce_without_target(df):

@pytest.mark.functions
def test_coalesce_without_delete():
-"""Test ouptut if nulls remain and `default_value` is provided."""
+"""Test output if nulls remain and `default_value` is provided."""
df = pd.DataFrame(
{"s1": [np.nan, np.nan, 6, 9, 9], "s2": [np.nan, 8, 7, 9, 9]}
)
2 changes: 1 addition & 1 deletion tests/functions/test_pivot_wider.py
@@ -217,7 +217,7 @@ def test_pivot_long_wide_long():
assert_frame_equal(result, df_in)


-@pytest.mark.xfail(reason="doesnt match, since pivot implicitly sorts")
+@pytest.mark.xfail(reason="doesn't match, since pivot implicitly sorts")
def test_pivot_wide_long_wide():
"""
Test that transformation from pivot_longer to wider and
2 changes: 1 addition & 1 deletion tests/functions/test_select_columns.py
@@ -382,7 +382,7 @@ def test_callable_length(numbers):

@pytest.mark.functions
def test_callable_dtype(dataframe):
-"""Test output when selecting columnns based on dtype"""
+"""Test output when selecting columns based on dtype"""
expected = dataframe.select_dtypes("number")
actual = dataframe.select_columns(is_numeric_dtype)
assert_frame_equal(expected, actual)
4 changes: 2 additions & 2 deletions tests/functions/test_sort_column_value_order.py
@@ -2,7 +2,7 @@
Below, company_sales and company_sales_2 are both dfs.
company_sales_2 is inverted, April is the first month
-where in comapny_sales Jan is the first month
+where in company_sales Jan is the first month
The values found in each row are the same
company_sales's Jan row contains the
@@ -14,7 +14,7 @@
Test 3 asserts that company_sales_2 and
company_sales with columns sorted
-will become equivilent, meaning
+will become equivalent, meaning
the columns have been successfully ordered.
"""

