
Commit: update notebooks
Sushobhan Parajuli committed Nov 21, 2024
1 parent 8ad62b8 commit f908dd1
Showing 7 changed files with 22 additions and 19 deletions.
12 changes: 6 additions & 6 deletions notebooks/dvc.yaml
@@ -2,26 +2,26 @@ stages:
results-mind-val:
cmd: jupytext --to notebook --execute mind-val.md
deps:
-- mind-val.md
+- mind-val/mind-val.md
- ../outputs/mind-val/profile-metrics.csv.gz
outs:
- mind-val.ipynb:
- mind-val/mind-val.ipynb:
cache: false

results-mind-small:
cmd: jupytext --to notebook --execute mind-small.md
deps:
-- mind-small.md
+- mind-small/mind-small.md
- ../outputs/mind-small/profile-metrics.csv.gz
outs:
- mind-small.ipynb:
- mind-small/mind-small.ipynb:
cache: false

results-mind-subset:
cmd: jupytext --to notebook --execute mind-subset.md
deps:
-- mind-subset.md
+- mind-subset/mind-subset.md
- ../outputs/mind-subset/profile-metrics.csv.gz
outs:
- mind-subset.ipynb:
- mind-subset/mind-subset.ipynb:
cache: false
@@ -7,7 +7,7 @@
"# Offline Evaluation Metrics Visualizations\n",
"This notebook visualizes user-specific performance metrics of various recommenders in the mind-small dataset to assess effectiveness and ranking overlap. We explore two metric groups:\n",
"1. **Effectiveness Metrics**: We use ranking-based metrics, Normalized Discounted Cumulative Gain (NDCG) and Reciprocal Rank (RR), to evaluate recommender effectiveness.\n",
-"2. **Ranking Overlap Metrics**: We use Rank-Based Overlap (RBO) to assess consistency in top-k recommendations relative to fianl rankings."
+"2. **Ranking Overlap Metrics**: We use Rank-Based Overlap (RBO) to assess consistency in top-k recommendations relative to final rankings."
]
},
{
@@ -15,7 +15,7 @@ kernelspec:
# Offline Evaluation Metrics Visualizations
This notebook visualizes user-specific performance metrics of various recommenders in the mind-small dataset to assess effectiveness and ranking overlap. We explore two metric groups:
1. **Effectiveness Metrics**: We use ranking-based metrics, Normalized Discounted Cumulative Gain (NDCG) and Reciprocal Rank (RR), to evaluate recommender effectiveness.
-2. **Ranking Overlap Metrics**: We use Rank-Based Overlap (RBO) to assess consistency in top-k recommendations relative to fianl rankings.
+2. **Ranking Overlap Metrics**: We use Rank-Based Overlap (RBO) to assess consistency in top-k recommendations relative to final rankings.
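As a quick reference for the effectiveness metrics named above, here is a minimal NumPy sketch of per-ranking RR and NDCG. These follow the standard textbook definitions with binary relevance assumed; they are illustrative, not the repository's own implementation.

```python
import numpy as np

def reciprocal_rank(rels):
    """Inverse rank of the first relevant item; 0.0 if nothing is relevant."""
    for rank, rel in enumerate(rels, start=1):
        if rel > 0:
            return 1.0 / rank
    return 0.0

def ndcg(rels, k=None):
    """NDCG@k: DCG of the given ranking divided by DCG of the ideal ranking."""
    rels = np.asarray(rels, dtype=float)[:k]
    if rels.sum() == 0.0:
        return 0.0
    # Log-based position discounts: 1/log2(rank + 1) for ranks 1..n.
    discounts = 1.0 / np.log2(np.arange(2, rels.size + 2))
    dcg = float((rels * discounts).sum())
    ideal = np.sort(rels)[::-1]  # best possible ordering of the same items
    return dcg / float((ideal * discounts).sum())
```

For example, with relevance vector `[0, 1, 0, 1]` the first hit is at rank 2, so RR is 0.5; a ranking that places all relevant items first scores an NDCG of 1.0.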

+++

@@ -30,17 +30,17 @@ This notebook visualizes user-specific performance metrics of various recommende
PyData packages:

```{code-cell} ipython3
-import numpy as np
import matplotlib.pyplot as plt
+import numpy as np
import pandas as pd
import seaborn as sns
```

Local code:

```{code-cell} ipython3
-from poprox_recommender.eval_tables import EvalTable
from IPython.display import HTML
+from poprox_recommender.eval_tables import EvalTable
```

Set up progress:
@@ -7,7 +7,7 @@
"# Offline Evaluation Metrics Visualizations\n",
"This notebook visualizes user-specific performance metrics of various recommenders in the mind-subset dataset to assess effectiveness and ranking overlap. We explore two metric groups:\n",
"1. **Effectiveness Metrics**: We use ranking-based metrics, Normalized Discounted Cumulative Gain (NDCG) and Reciprocal Rank (RR), to evaluate recommender effectiveness.\n",
-"2. **Ranking Overlap Metrics**: We use Rank-Based Overlap (RBO) to assess consistency in top-k recommendations relative to fianl rankings."
+"2. **Ranking Overlap Metrics**: We use Rank-Based Overlap (RBO) to assess consistency in top-k recommendations relative to final rankings."
]
},
{
@@ -814,6 +814,9 @@
}
],
"metadata": {
+"jupytext": {
+"formats": "ipynb,md:myst"
+},
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
@@ -13,9 +13,9 @@ kernelspec:
---

# Offline Evaluation Metrics Visualizations
-This notebook visualizes user-specific performance metrics of various recommenders in the MIND-Subset dataset to assess effectiveness and ranking overlap. We explore two metric groups:
+This notebook visualizes user-specific performance metrics of various recommenders in the mind-subset dataset to assess effectiveness and ranking overlap. We explore two metric groups:
1. **Effectiveness Metrics**: We use ranking-based metrics, Normalized Discounted Cumulative Gain (NDCG) and Reciprocal Rank (RR), to evaluate recommender effectiveness.
-2. **Ranking Overlap Metrics**: We use Rank-Based Overlap (RBO) to assess consistency in top-k recommendations relative to fianl rankings.
+2. **Ranking Overlap Metrics**: We use Rank-Based Overlap (RBO) to assess consistency in top-k recommendations relative to final rankings.
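The RBO metric named above can be sketched as a truncated prefix-overlap sum. This is a simplified form of the measure (assumed here for illustration, not taken from the repository): because the sum is truncated at a fixed depth rather than extrapolated, even identical rankings top out at 1 − p^depth rather than 1.

```python
def rbo(list_a, list_b, p=0.9, depth=10):
    """Truncated Rank-Biased Overlap: a geometrically weighted (weight p)
    average of the set overlap between the two rankings' prefixes."""
    score = 0.0
    for d in range(1, depth + 1):
        # Agreement of the top-d prefixes, weighted by p^(d-1).
        overlap = len(set(list_a[:d]) & set(list_b[:d]))
        score += p ** (d - 1) * (overlap / d)
    return (1 - p) * score
```

With the defaults, two identical length-10 rankings score 1 − 0.9¹⁰ ≈ 0.65, while fully disjoint rankings score 0; larger p puts more weight on agreement deeper in the lists.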

+++

@@ -30,17 +30,17 @@ This notebook visualizes user-specific performance metrics of various recommende
PyData packages:

```{code-cell} ipython3
-import numpy as np
import matplotlib.pyplot as plt
+import numpy as np
import pandas as pd
import seaborn as sns
```

Local code:

```{code-cell} ipython3
-from poprox_recommender.eval_tables import EvalTable
from IPython.display import HTML
+from poprox_recommender.eval_tables import EvalTable
```

Set up progress:
@@ -7,7 +7,7 @@
"# Offline Evaluation Metrics Visualizations\n",
"This notebook visualizes user-specific performance metrics of various recommenders in the mind-val dataset to assess effectiveness and ranking overlap. We explore two metric groups:\n",
"1. **Effectiveness Metrics**: We use ranking-based metrics, Normalized Discounted Cumulative Gain (NDCG) and Reciprocal Rank (RR), to evaluate recommender effectiveness.\n",
-"2. **Ranking Overlap Metrics**: We use Rank-Based Overlap (RBO) to assess consistency in top-k recommendations relative to fianl rankings."
+"2. **Ranking Overlap Metrics**: We use Rank-Based Overlap (RBO) to assess consistency in top-k recommendations relative to final rankings."
]
},
{
6 changes: 3 additions & 3 deletions notebooks/mind-val.md → notebooks/mind-val/mind-val.md
@@ -15,7 +15,7 @@ kernelspec:
# Offline Evaluation Metrics Visualizations
This notebook visualizes user-specific performance metrics of various recommenders in the mind-val dataset to assess effectiveness and ranking overlap. We explore two metric groups:
1. **Effectiveness Metrics**: We use ranking-based metrics, Normalized Discounted Cumulative Gain (NDCG) and Reciprocal Rank (RR), to evaluate recommender effectiveness.
-2. **Ranking Overlap Metrics**: We use Rank-Based Overlap (RBO) to assess consistency in top-k recommendations relative to fianl rankings.
+2. **Ranking Overlap Metrics**: We use Rank-Based Overlap (RBO) to assess consistency in top-k recommendations relative to final rankings.

+++

@@ -30,17 +30,17 @@ This notebook visualizes user-specific performance metrics of various recommende
PyData packages:

```{code-cell} ipython3
-import numpy as np
import matplotlib.pyplot as plt
+import numpy as np
import pandas as pd
import seaborn as sns
```

Local code:

```{code-cell} ipython3
-from poprox_recommender.eval_tables import EvalTable
from IPython.display import HTML
+from poprox_recommender.eval_tables import EvalTable
```

Set up progress:
