A neural network framework for animal acoustic communication and bioacoustics


🚧 vak version 1.0.0 is in development! 🚧 πŸ“£ Test out the alpha release: pip install vak==1.0.0a3. πŸ“£ For more info, please see this forum post.

vak is a Python framework for neural network models, designed for researchers studying animal acoustic communication and bioacoustics. Many people will be familiar with work in this area on animal vocalizations such as birdsong, bat calls, and even human speech. Neural network models have provided a powerful new tool for researchers in this area, as in many other fields.

The library has two main goals:

  1. Make it easier for researchers studying animal vocalizations to apply neural network algorithms to their data
  2. Provide a common framework that will facilitate benchmarking neural network algorithms on tasks related to animal vocalizations

Currently, the main use is automated annotation of vocalizations and other animal sounds. By annotation, we mean something like the example of annotated birdsong shown below:

spectrogram of birdsong with syllables annotated

You give vak training data in the form of audio or spectrogram files with annotations, and then vak helps you train neural network models and use the trained models to predict annotations for new files.
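As a minimal sketch, this workflow runs as a few command-line steps. The configuration file names below are placeholders, and the tutorial linked under "Usage" walks through the real steps in detail:

$ # build a dataset from the audio/spectrogram files and annotations named in the config
$ vak prep train_config.toml
$ # train a neural network model on that dataset
$ vak train train_config.toml
$ # use the trained model to predict annotations for new, unannotated files
$ vak predict predict_config.toml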

We developed vak to benchmark a neural network model we call TweetyNet.
Please see the eLife article here: https://elifesciences.org/articles/63853

To learn more about the goals and design of vak, please see this talk from the SciPy 2023 conference, and the associated Proceedings paper here.

Thumbnail of SciPy 2023 talk on vak

For more background on animal acoustic communication and deep learning, and how these intersect with related fields like computational ethology and neuroscience, please see the "About" section below.

Installation

Short version:

with pip

$ pip install vak

with conda

$ conda install vak -c pytorch -c conda-forge
$ #                  ^ notice additional channel!

Notice that for conda you specify two channels, and that the pytorch channel should come first, so it takes priority when installing the dependencies pytorch and torchvision.

For more details, please see:
https://vak.readthedocs.io/en/latest/get_started/installation.html

We test vak on Ubuntu and macOS. We have run it on Windows and know of other users successfully running vak on that operating system, but installation on Windows may require some troubleshooting. A good place to start is by searching the issues.

Usage

Tutorial

Currently the easiest way to work with vak is through the command line.

terminal showing vak help command output

You run it with configuration files, using one of a handful of commands.
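As a rough sketch, a configuration is a TOML file along the lines below. Treat the section and option names here as illustrative assumptions rather than the exact schema; the tutorial linked below documents the real options.

[PREP]
data_dir = "./data/bird1"       # directory containing audio files and their annotation files
audio_format = "wav"
annot_format = "simple-seq"     # an annotation format understood by the crowsetta library
output_dir = "./prepared"

[TRAIN]
models = "TweetyNet"            # name of the built-in model to train
root_results_dir = "./results"
batch_size = 8
num_epochs = 50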

For more details, please see the "autoannotate" tutorial here:
https://vak.readthedocs.io/en/latest/get_started/autoannotate.html

How can I use my data with vak?

Please see the How-To Guides in the documentation here:
https://vak.readthedocs.io/en/latest/howto/index.html

Support / Contributing

For help, please begin by checking out the Frequently Asked Questions:
https://vak.readthedocs.io/en/latest/faq.html.

To ask a question about vak, discuss its development, or share how you are using it, please start a new "Q&A" topic on the VocalPy forum with the vak tag:
https://forum.vocalpy.org/

To report a bug, or to request a feature, please use the issue tracker on GitHub:
https://github.com/vocalpy/vak/issues

For a guide on how you can contribute to vak, please see: https://vak.readthedocs.io/en/latest/development/index.html

Citation

If you use vak for a publication, please cite both the Proceedings paper and the software.

Proceedings paper (BibTeX)

@inproceedings{nicholson2023vak,
  title={vak: a neural network framework for researchers studying animal acoustic communication},
  author={Nicholson, David and Cohen, Yarden},
  booktitle={Python in Science Conference},
  pages={59--67},
  year={2023}
}

Software

(DOI badge for the software release)

License

License is here.

About

Are humans unique among animals? We speak languages, but is speech somehow like other animal behaviors, such as birdsong? Questions like these are answered by studying how animals communicate with sound. This research requires cutting-edge computational methods and big team science across a wide range of disciplines, including ecology, ethology, bioacoustics, psychology, neuroscience, linguistics, and genomics [1][2][3]. As in many other domains, this research is being revolutionized by deep learning algorithms [1][2][3]. Deep neural network models enable answering questions that were previously impossible to address, in part because these models automate analysis of very large datasets.

Within the study of animal acoustic communication, multiple models have been proposed for similar tasks, often implemented as research code with different libraries, such as Keras and PyTorch. This situation has created a real need for a framework that allows researchers to easily benchmark models and apply trained models to their own data. To address this need, we developed vak. We originally developed vak to benchmark a neural network model, TweetyNet [4][5], that automates annotation of birdsong by segmenting spectrograms. TweetyNet and vak have been used in both neuroscience [6][7][8] and bioacoustics [9]. For additional background and papers that have used vak, please see: https://vak.readthedocs.io/en/latest/reference/about.html

"Why this name, vak?"

It has only three letters, so it is quick to type, and it wasn't taken on PyPI yet. Also I guess it has something to do with speech. "vak" rhymes with "squawk" and "talk".

Does your library have any poems?

Yes.

Contributors ✨

Thanks goes to these wonderful people (emoji key):

  <td align="center" valign="top" width="14.28%"><a href="http://marisbasha.com"><img src="https://avatars.githubusercontent.com/u/41847328?v=4?s=100" width="100px;" alt="Maris Basha"/><br /><sub><b>Maris Basha</b></sub></a><br /><a href="#ideas-marisbasha" title="Ideas, Planning, & Feedback">πŸ€”</a> <a href="https://github.com/vocalpy/vak/commits?author=marisbasha" title="Code">πŸ’»</a></td>
</tr>
avanikop
avanikop

πŸ›
Luke Poeppel
Luke Poeppel

πŸ“–
yardencsGitHub
yardencsGitHub

πŸ’» πŸ€” πŸ“’ πŸ““ πŸ’¬
David Nicholson
David Nicholson

πŸ› πŸ’» πŸ”£ πŸ“– πŸ’‘ πŸ€” πŸš‡ 🚧 πŸ§‘β€πŸ« πŸ“† πŸ‘€ πŸ’¬ πŸ“’ ⚠️ βœ…
marichard123
marichard123

πŸ“–
Therese Koch
Therese Koch

πŸ“– πŸ›
alyndanoel
alyndanoel

πŸ€”
adamfishbein
adamfishbein

πŸ“–
vivinastase
vivinastase

πŸ› πŸ““
kaiyaprovost
kaiyaprovost

πŸ’» πŸ€”
ymk12345
ymk12345

πŸ› πŸ“–
neuronalX
neuronalX

πŸ› πŸ“–
Khoa
Khoa

πŸ“–
sthaar
sthaar

πŸ“– πŸ› πŸ€”
yangzheng-121
yangzheng-121

πŸ› πŸ€”
lmpascual
lmpascual

πŸ“–
ItamarFruchter
ItamarFruchter

πŸ“–
Hjalmar K. Turesson
Hjalmar K. Turesson

πŸ› πŸ€”
nhoglen
nhoglen

πŸ›
Ja-sonYun
Ja-sonYun

πŸ’»
Jacqueline
Jacqueline

πŸ›
Mark Muldoon
Mark Muldoon

πŸ›
zhileiz1992
zhileiz1992

πŸ› πŸ’»

This project follows the all-contributors specification. Contributions of any kind welcome!

Footnotes

  1. https://www.frontiersin.org/articles/10.3389/fnbeh.2021.811737/full

  2. https://peerj.com/articles/13152/

  3. https://www.jneurosci.org/content/42/45/8514

  4. https://elifesciences.org/articles/63853

  5. https://github.com/yardencsGitHub/tweetynet

  6. https://www.nature.com/articles/s41586-020-2397-3

  7. https://elifesciences.org/articles/67855

  8. https://elifesciences.org/articles/75691

  9. https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0278522
