Commit 31b0f3d
Merge pull request #507 from neurodata/staging
v0.0.6
jdey4 authored Oct 21, 2021
2 parents 894d259 + 634d4d1
Showing 73 changed files with 53,979 additions and 4,695 deletions.
24 changes: 17 additions & 7 deletions .circleci/config.yml
@@ -1,7 +1,7 @@
version: 2.1

orbs:
codecov: codecov/codecov@1.0.2
codecov: codecov/codecov@3.1.1

jobs:
build:
@@ -34,6 +34,12 @@ jobs:
parameters:
module:
type: string
benchmarks:
type: string
experiments:
type: string
tutorials:
type: string
docker:
- image: cimg/python:3.8
steps:
@@ -56,6 +62,9 @@ jobs:
command: |
. venv/bin/activate
black --check --diff ./<< parameters.module >>
black --check --diff ./<< parameters.benchmarks >>
black --check --diff ./<< parameters.experiments >>
black --check --diff ./<< parameters.tutorials >>
- run:
name: run tests and coverage
command: |
@@ -95,12 +104,12 @@ jobs:
name: init .pypirc
command: |
echo -e "[pypi]" >> ~/.pypirc
echo -e "username = $PYPI_USERNAME" >> ~/.pypirc
echo -e "password = $PYPI_PASSWORD" >> ~/.pypirc
echo -e "username = __token__" >> ~/.pypirc
echo -e "password = $PIP_TOKEN" >> ~/.pypirc
- run:
name: create packages
command: |
python setup.py sdist
python setup.py sdist bdist_wheel
- run:
name: upload to pypi
command: |
@@ -121,13 +130,14 @@ workflows:
- test-module:
name: "proglearn"
module: "proglearn"
benchmarks: "benchmarks/"
experiments: "docs/experiments/"
tutorials: "docs/tutorials/"
requires:
- "v3.8"
- deploy:
requires:
- "proglearn"
filters:
tags:
only: /[0-9]+(\.[0-9]+)*/
only: /v[0-9]+(\.[0-9]+)*/
branches:
ignore: /.*/
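The tightened tag filter above means deploys now trigger only on tags with a leading `v` (such as this release's `v0.0.6`). CircleCI evaluates `only:` tag filters as anchored, full-string regular expressions; a small Python sketch of the before/after behavior (the two patterns are taken from the diff, the helper function is illustrative):

```python
import re

# Tag filters from the deploy workflow: the old pattern accepted bare
# version tags, the new one requires a leading "v" (e.g. v0.0.6).
OLD_FILTER = r"[0-9]+(\.[0-9]+)*"
NEW_FILTER = r"v[0-9]+(\.[0-9]+)*"


def tag_triggers_deploy(pattern, tag):
    # CircleCI requires the filter regex to match the entire tag string.
    return re.fullmatch(pattern, tag) is not None


print(tag_triggers_deploy(OLD_FILTER, "0.0.6"))   # True
print(tag_triggers_deploy(NEW_FILTER, "0.0.6"))   # False: bare tags no longer deploy
print(tag_triggers_deploy(NEW_FILTER, "v0.0.6"))  # True
```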
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/bug_report.md
@@ -1,7 +1,7 @@
---
name: Bug Report
about: Create a report to help us improve ProgLearn
label: bug
label:

---

2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/documentation_fix.md
@@ -1,7 +1,7 @@
---
name: Documentation Fix
about: Create a report to help us improve the documentation of ProgLearn
label: documentation
label:

---

18 changes: 13 additions & 5 deletions .github/ISSUE_TEMPLATE/feature_request.md
@@ -1,17 +1,25 @@
---
name: Feature Request
about: Suggest an idea for ProgLearn
label: enhancement
label:

---

**Is your feature request related to a problem? Please describe.**
<!--
Thank you for taking the time to file a bug report.
Please fill in the fields below, deleting the sections that
don't apply to your issue. You can view the final output
by clicking the preview button above.
Note: This is a comment, and won't appear in the output.
-->

#### Is your feature request related to a problem? Please describe.

**Describe the solution you'd like**

#### Describe the solution you'd like

**Describe alternatives you've considered**

#### Describe alternatives you've considered

**Additional context (e.g. screenshots)**

#### Additional context (e.g. screenshots)
10 changes: 7 additions & 3 deletions CITATION.cff
@@ -63,12 +63,15 @@ authors:
affiliation: "Johns Hopkins University, Baltimore, MD"
family-names: Priebe
given-names: Carey
cff-version: "1.1.0"
cff-version: "1.2.0"
date-released: 2021-09-18
identifiers:
-
type: url
value: "https://arxiv.org/pdf/2004.12908.pdf"
-
type: doi
value: 10.5281/zenodo.4060264
keywords:
- Python
- classification
@@ -77,8 +80,9 @@ keywords:
- "transfer learning"
- "domain adaptation"
license: MIT
message: "If you use this software, please cite it using these metadata."
doi: 10.5281/zenodo.4060264
message: "If you use ProgLearn, please cite it using these metadata."
repository-code: "https://github.com/neurodata/ProgLearn"
title: "Omnidirectional Transfer for Quasilinear Lifelong Learning"
version: "0.0.5"
version: "0.0.6"
...
2 changes: 1 addition & 1 deletion LICENSE
@@ -1,6 +1,6 @@
MIT License

Copyright (c) 2020 Dr. Joshua T. Vogelstein
Copyright (c) 2020 Neurodata

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
2 changes: 1 addition & 1 deletion README.md
@@ -22,4 +22,4 @@
Some system/package requirements:
- **Python**: 3.6+
- **OS**: All major platforms (Linux, macOS, Windows)
- **Dependencies**: keras, scikit-learn, scipy, numpy, joblib
- **Dependencies**: tensorflow, scikit-learn, scipy, numpy, joblib
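The README change above swaps the `keras` dependency for `tensorflow`. A quick way to check an environment against the updated list is to probe for each import; a sketch (`missing_packages` is an illustrative helper, and note that scikit-learn imports as `sklearn`):

```python
import importlib.util


def missing_packages(import_names):
    """Return the import names that cannot be found in the current environment."""
    return [n for n in import_names if importlib.util.find_spec(n) is None]


# Import names corresponding to the README's updated dependency list.
deps = ["tensorflow", "sklearn", "scipy", "numpy", "joblib"]
print(missing_packages(deps))  # empty list when every dependency is installed
```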
148 changes: 88 additions & 60 deletions benchmarks/cifar_exp/appendix_tables.ipynb
@@ -23,7 +23,8 @@
"import pickle\n",
"import matplotlib.pyplot as plt\n",
"from matplotlib import rcParams\n",
"rcParams.update({'figure.autolayout': True})\n",
"\n",
"rcParams.update({\"figure.autolayout\": True})\n",
"import numpy as np\n",
"from itertools import product\n",
"import seaborn as sns\n",
@@ -39,77 +40,80 @@
"\n",
"#%%\n",
"def unpickle(file):\n",
" with open(file, 'rb') as fo:\n",
" dict = pickle.load(fo, encoding='bytes')\n",
" with open(file, \"rb\") as fo:\n",
" dict = pickle.load(fo, encoding=\"bytes\")\n",
" return dict\n",
"\n",
"\n",
"def get_fte_bte(err, single_err, ntrees):\n",
" bte = [[] for i in range(10)]\n",
" te = [[] for i in range(10)]\n",
" fte = []\n",
" \n",
"\n",
" for i in range(10):\n",
" for j in range(i,10):\n",
" #print(err[j][i],j,i)\n",
" bte[i].append(err[i][i]/err[j][i])\n",
" te[i].append(single_err[i]/err[j][i])\n",
" \n",
" for j in range(i, 10):\n",
" # print(err[j][i],j,i)\n",
" bte[i].append(err[i][i] / err[j][i])\n",
" te[i].append(single_err[i] / err[j][i])\n",
"\n",
" for i in range(10):\n",
" #print(single_err[i],err[i][i])\n",
" fte.append(single_err[i]/err[i][i])\n",
" \n",
" \n",
" return fte,bte,te\n",
" # print(single_err[i],err[i][i])\n",
" fte.append(single_err[i] / err[i][i])\n",
"\n",
"def calc_mean_bte(btes,task_num=10,reps=6):\n",
" mean_bte = [[] for i in range(task_num)]\n",
" return fte, bte, te\n",
"\n",
"\n",
"def calc_mean_bte(btes, task_num=10, reps=6):\n",
" mean_bte = [[] for i in range(task_num)]\n",
"\n",
" for j in range(task_num):\n",
" tmp = 0\n",
" for i in range(reps):\n",
" tmp += np.array(btes[i][j])\n",
" \n",
" tmp=tmp/reps\n",
"\n",
" tmp = tmp / reps\n",
" mean_bte[j].extend(tmp)\n",
" \n",
" return mean_bte \n",
"\n",
"def calc_mean_te(tes,task_num=10,reps=6):\n",
" return mean_bte\n",
"\n",
"\n",
"def calc_mean_te(tes, task_num=10, reps=6):\n",
" mean_te = [[] for i in range(task_num)]\n",
"\n",
" for j in range(task_num):\n",
" tmp = 0\n",
" for i in range(reps):\n",
" tmp += np.array(tes[i][j])\n",
" \n",
" tmp=tmp/reps\n",
"\n",
" tmp = tmp / reps\n",
" mean_te[j].extend(tmp)\n",
" \n",
" return mean_te \n",
"\n",
"def calc_mean_fte(ftes,task_num=10,reps=6):\n",
" return mean_te\n",
"\n",
"\n",
"def calc_mean_fte(ftes, task_num=10, reps=6):\n",
" fte = np.asarray(ftes)\n",
" \n",
" return list(np.mean(np.asarray(fte_tmp),axis=0))\n",
"\n",
"def calc_mean_err(err,task_num=10,reps=6):\n",
" mean_err = [[] for i in range(task_num)]\n",
" return list(np.mean(np.asarray(fte_tmp), axis=0))\n",
"\n",
"\n",
"def calc_mean_err(err, task_num=10, reps=6):\n",
" mean_err = [[] for i in range(task_num)]\n",
"\n",
" for j in range(task_num):\n",
" tmp = 0\n",
" for i in range(reps):\n",
" tmp += np.array(err[i][j])\n",
" \n",
" tmp=tmp/reps\n",
" #print(tmp)\n",
"\n",
" tmp = tmp / reps\n",
" # print(tmp)\n",
" mean_err[j].extend([tmp])\n",
" \n",
" return mean_err \n",
"\n",
" return mean_err\n",
"\n",
"\n",
"#%%\n",
"reps = slots*shifts\n",
"reps = slots * shifts\n",
"\n",
"btes = [[] for i in range(task_num)]\n",
"ftes = [[] for i in range(task_num)]\n",
@@ -121,39 +125,49 @@
"fte_tmp = [[] for _ in range(reps)]\n",
"err_tmp = [[] for _ in range(reps)]\n",
"\n",
"count = 0 \n",
"count = 0\n",
"for slot in range(slots):\n",
" for shift in range(shifts):\n",
" filename = 'result/'+model+str(ntrees)+'_'+str(shift+1)+'_'+str(slot)+'.pickle'\n",
" filename = (\n",
" \"result/\"\n",
" + model\n",
" + str(ntrees)\n",
" + \"_\"\n",
" + str(shift + 1)\n",
" + \"_\"\n",
" + str(slot)\n",
" + \".pickle\"\n",
" )\n",
" multitask_df, single_task_df = unpickle(filename)\n",
"\n",
" err = [[] for _ in range(10)]\n",
"\n",
" for ii in range(10):\n",
" err[ii].extend(\n",
" 1 - np.array(\n",
" multitask_df[multitask_df['base_task']==ii+1]['accuracy']\n",
" )\n",
" 1\n",
" - np.array(\n",
" multitask_df[multitask_df[\"base_task\"] == ii + 1][\"accuracy\"]\n",
" )\n",
" )\n",
" single_err = 1 - np.array(single_task_df['accuracy'])\n",
" fte, bte, te = get_fte_bte(err,single_err,ntrees)\n",
" \n",
" single_err = 1 - np.array(single_task_df[\"accuracy\"])\n",
" fte, bte, te = get_fte_bte(err, single_err, ntrees)\n",
"\n",
" err_ = [[] for i in range(task_num)]\n",
" for i in range(task_num):\n",
" for j in range(task_num-i):\n",
" #print(err[i+j][i])\n",
" err_[i].append(err[i+j][i])\n",
" \n",
" for j in range(task_num - i):\n",
" # print(err[i+j][i])\n",
" err_[i].append(err[i + j][i])\n",
"\n",
" te_tmp[count].extend(te)\n",
" bte_tmp[count].extend(bte)\n",
" fte_tmp[count].extend(fte)\n",
" err_tmp[count].extend(err_)\n",
" count+=1\n",
" \n",
"te = calc_mean_te(te_tmp,reps=reps)\n",
"bte = calc_mean_bte(bte_tmp,reps=reps)\n",
"fte = calc_mean_fte(fte_tmp,reps=reps)\n",
"error = calc_mean_err(err_tmp,reps=reps)"
" count += 1\n",
"\n",
"te = calc_mean_te(te_tmp, reps=reps)\n",
"bte = calc_mean_bte(bte_tmp, reps=reps)\n",
"fte = calc_mean_fte(fte_tmp, reps=reps)\n",
"error = calc_mean_err(err_tmp, reps=reps)"
]
},
{
@@ -162,9 +176,11 @@
"metadata": {},
"outputs": [],
"source": [
"flat_te_per_rep = [[np.mean(te_tmp[rep][i]) for i in range(len(te))] for rep in range(reps)]\n",
"mean_te_per_rep = np.mean(flat_te_per_rep, axis = 1)\n",
"min_te_per_rep = np.min(flat_te_per_rep, axis = 1)"
"flat_te_per_rep = [\n",
" [np.mean(te_tmp[rep][i]) for i in range(len(te))] for rep in range(reps)\n",
"]\n",
"mean_te_per_rep = np.mean(flat_te_per_rep, axis=1)\n",
"min_te_per_rep = np.min(flat_te_per_rep, axis=1)"
]
},
{
@@ -211,8 +227,16 @@
}
],
"source": [
"print(\"Mean FTE(Task 10): ({} +- {})\".format(np.mean(task_fte_task_10_per_rep), np.std(task_fte_task_10_per_rep)))\n",
"print(\"Mean BTE(Task 1): ({} +- {})\".format(np.mean(task_bte_task_1_per_rep), np.std(task_bte_task_1_per_rep)))"
"print(\n",
" \"Mean FTE(Task 10): ({} +- {})\".format(\n",
" np.mean(task_fte_task_10_per_rep), np.std(task_fte_task_10_per_rep)\n",
" )\n",
")\n",
"print(\n",
" \"Mean BTE(Task 1): ({} +- {})\".format(\n",
" np.mean(task_bte_task_1_per_rep), np.std(task_bte_task_1_per_rep)\n",
" )\n",
")"
]
},
{
@@ -240,7 +264,11 @@
"source": [
"for task in range(task_num):\n",
" final_te_of_task_per_rep = [te_tmp[rep][task][-1] for rep in range(reps)]\n",
" print(\"Final TE of Task {}: ({} +- {})\".format(task, np.mean(final_te_of_task_per_rep), np.std(final_te_of_task_per_rep)))"
" print(\n",
" \"Final TE of Task {}: ({} +- {})\".format(\n",
" task, np.mean(final_te_of_task_per_rep), np.std(final_te_of_task_per_rep)\n",
" )\n",
" )"
]
},
{
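The reformatted notebook cells above center on `get_fte_bte`, which turns per-task error rates into forward and backward transfer efficiencies (single-task error over multitask error, and first-seen error over later error, respectively). A standalone sketch of those ratios, generalized from the notebook's hardcoded 10 tasks to an `n_tasks` parameter and run on toy error values rather than the CIFAR results:

```python
import numpy as np


def get_fte_bte(err, single_err, n_tasks=10):
    """Forward/backward transfer efficiency from error matrices.

    err[j][i] is the error on task i after training through task j;
    single_err[i] is the single-task error on task i.
    """
    bte = [[] for _ in range(n_tasks)]
    te = [[] for _ in range(n_tasks)]
    fte = []

    for i in range(n_tasks):
        for j in range(i, n_tasks):
            bte[i].append(err[i][i] / err[j][i])
            te[i].append(single_err[i] / err[j][i])

    for i in range(n_tasks):
        fte.append(single_err[i] / err[i][i])

    return fte, bte, te


# Toy example with 2 tasks: error on task 0 drops from 0.4 to 0.2
# after seeing task 1, so its backward transfer efficiency is 2.0.
err = [[0.4], [0.2, 0.3]]
single_err = [0.5, 0.3]
fte, bte, te = get_fte_bte(err, single_err, n_tasks=2)
print(fte)     # [1.25, 1.0]
print(bte[0])  # [1.0, 2.0]
```

Values above 1.0 mean the lifelong learner beat the single-task baseline (FTE/TE) or improved on an old task after seeing new ones (BTE), which is the quantity the appendix tables report.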