Modeling the Drosophila larva connectome
If you are new to Python development, my recommended practices are here.
Currently, the recommended setup is to use conda or miniconda to create a virtual environment.
The following will assume some familiarity with the command line, git, GitHub, and conda.
- Hit "Fork" in the upper left corner on Github
- We'll be downloading a few repos, so I'd recommend having a dedicated folder to store them all. I'll refer to this folder as the "top-level directory" throughout.
- Click "Clone or download" button in the top right
- Copy the provided
- From the top-level directory, clone your fork using one of the following:
  - (Recommended) Clone just the most recent version of master:
    ```
    git clone --depth 1 -b master <link>
    ```
  - OR, to clone the whole repo and all of its history (large), do:
    ```
    git clone <link>
    ```
- Create a conda environment for this project and install the requirements:
  ```
  conda config --append channels conda-forge
  conda config --set channel_priority strict
  conda create -n {insert name} python=3.7
  conda info --envs                     # check that the new environment was created
  conda activate {insert name}
  conda install --file requirements.txt
  conda install conda-build             # provides the `conda develop` command used below
  ```
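As a quick sanity check (not part of the setup steps themselves), you can start `python` inside the activated environment and confirm that the interpreter is the one the environment provides:

```python
# Sanity-check sketch: run inside the activated conda environment.
# The reported version should be 3.7.x if the environment was created as above.
import sys

print(sys.version)
```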
GraSPy and Hyppo, while both available on PyPI, are still undergoing development. For now, I recommend installing these two packages by cloning them and installing locally rather than via `pip` or similar.
From the directory where you would like to store GraSPy and Hyppo, do:

```
git clone https://github.com/neurodata/graspy.git
cd graspy
conda develop .
```
Rather than installing a "static" version of GraSPy, the command above will install the package located in the current folder (in this case, GraSPy) while tracking any changes made to those files.
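One optional way to confirm the development install worked is to import GraSPy from any directory and check where Python found it; the printed path should point into the clone you just made rather than into the environment's `site-packages`:

```python
# Optional check (a sketch, not part of the install steps): with `conda develop`,
# the imported package lives in your local clone, so edits to those files take
# effect without reinstalling.
import graspy

print(graspy.__file__)  # should point into the folder where you cloned graspy
```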
Similarly for Hyppo, navigate to the top-level directory for this project, and do:

```
git clone https://github.com/neurodata/hyppo.git
cd hyppo
conda develop .
```
Now you should have GraSPy and Hyppo installed. If you need to get the latest version of either, this is as simple as:

```
cd graspy
git pull
```

(and likewise for `hyppo`).
Then do the same for this project's own code so that `src` can be imported. From the top-level directory for this project, do:

```
cd maggot_models
conda develop .
```
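As with the packages above, an optional check that the project code is importable from anywhere:

```python
# Optional check: after `conda develop .`, the project's `src` package should be
# importable regardless of the current working directory.
import src

print(src.__file__)  # should point into your local maggot_models clone
```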
The data is not yet public; talk to @bdpedigo about how to find and store the data. Place the `.graphml` files in `maggot_models/data/processed/<data>/`.
From the top-level directory, start Python in the terminal (`python`) and run:

```python
from src.data import load_metagraph  # import a function to load data

mg = load_metagraph("G")  # load the "sum" graph
print(mg.adj)             # access and print the adjacency matrix
print(mg.meta.head())     # access and print the first few rows of metadata
```
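As one possible next step beyond printing the arrays, the adjacency matrix can be handed directly to GraSPy's plotting utilities. The snippet below is just a sketch (it assumes the environment above is active; `heatmap` is GraSPy's adjacency plotting function, and matplotlib is pulled in as a GraSPy dependency):

```python
# A sketch of one possible next step (not part of the quickstart above):
# visualize the "sum" graph's adjacency matrix as a heatmap using GraSPy.
import matplotlib.pyplot as plt
from graspy.plot import heatmap

from src.data import load_metagraph

mg = load_metagraph("G")
print(f"Graph has {mg.adj.shape[0]} nodes")  # adjacency matrix is nodes x nodes

heatmap(mg.adj)  # plot the adjacency matrix as a heatmap
plt.show()
```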
```
├── LICENSE
├── Makefile           <- Makefile with commands like `make data` or `make train`
├── README.md          <- The top-level README for developers using this project.
├── data
│   ├── external       <- Data from third party sources.
│   ├── interim        <- Intermediate data that has been transformed.
│   ├── processed      <- The final, canonical data sets for modeling.
│   └── raw            <- The original, immutable data dump.
│
├── docs               <- A default Sphinx project; see sphinx-doc.org for details
│
├── models             <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks          <- Jupyter notebooks. Naming convention is a number (for ordering),
│   │                     the creator's initials, and a short `-` delimited description, e.g.
│   │                     `1.0-jqp-initial-data-exploration`.
│   │
│   └── outs           <- figures and intermediate results labeled by notebook that generated them.
│
├── references         <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports            <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures        <- Generated graphics and figures to be used in reporting
│
├── simulations        <- Synthetic data experiments and outputs
│   └── runs           <- Sacred output for individual experiment runs
│
├── requirements.txt   <- The requirements file for reproducing the analysis environment, e.g.
│                         generated with `pip freeze > requirements.txt`
│
├── setup.py           <- makes project pip installable (`pip install -e .`) so src can be imported
├── src                <- Source code for use in this project.
│   ├── __init__.py    <- Makes src a Python module
│   │
│   ├── data           <- Scripts to download or generate data
│   │   └── make_dataset.py
│   │
│   ├── features       <- Scripts to turn raw data into features for modeling
│   │   └── build_features.py
│   │
│   ├── models         <- Scripts to train models and then use trained models to make
│   │   │                 predictions
│   │   ├── predict_model.py
│   │   └── train_model.py
│   │
│   └── visualization  <- Scripts to create exploratory and results oriented visualizations
│       └── visualize.py
│
└── tox.ini            <- tox file with settings for running tox; see tox.testrun.org
```
Project based on the cookiecutter data science project template. #cookiecutterdatascience