Presubmission inquiry for plenoptic #97

Closed
Description

Submitting Author: @billbrod

Package Name: plenoptic

One-Line Description of Package: a Python library for model-based image synthesis.

Repository Link (if existing): https://github.com/LabForComputationalVision/plenoptic/ (the docs build from this PR may be more useful as an introduction; it hasn't been merged into main yet)


Code of Conduct & Commitment to Maintain Package

Description

  • Include a brief paragraph describing what your package does: plenoptic provides tools to help researchers understand their models by synthesizing novel, informative stimuli, which build intuition for which features a model is sensitive to and which it ignores.

Community Partnerships

We partner with communities to support peer review with an additional layer of
checks that satisfy community requirements. If your package fits into an
existing community please check below:

Scope

  • Please indicate which category or categories your package fits under.
    Check out our package scope page to learn more about our
    scope. (If you are unsure which category your package fits, we suggest you make a pre-submission inquiry):

    - [ ] Data retrieval
    - [ ] Data extraction
    - [ ] Data processing/munging
    - [ ] Data deposition
    - [ ] Data validation and testing
    - [ ] Data visualization
    - [ ] Workflow automation
    - [ ] Citation management and bibliometrics
    - [ ] Scientific software wrappers
    - [ ] Database interoperability

Domain Specific & Community Partnerships

- [ ] Geospatial
- [ ] Education
- [ ] Pangeo
- [X] Unsure/Other (explain below)
  • Explain how and why the package falls under these categories (briefly, 1-2 sentences). Please note any areas you are unsure of:

I think plenoptic is actually out of scope, but I wanted to check, because pyopensci looks cool. This package is intended for use by the vision science, machine learning, and neuroscience communities, but could be used by any researcher who builds models that take something image-, video-, or audio-like as input. The package generates new stimuli (for use in further experiments) rather than facilitating the visualization of existing data.

  • Who is the target audience and what are the scientific applications of this package?

Largely researchers in vision science, machine learning, and neuroscience. The goal is to generate novel stimuli (images, videos, audio) that researchers can use in new experiments to better understand their computational models.
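The kind of synthesis described above can be sketched as a small optimization problem. This is a minimal, hypothetical illustration of synthesizing a "metamer" for a toy linear model — it does not use plenoptic's actual API, and the model, names, and step size are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear "model": maps a 16-pixel "image" to 4 features.
# Illustrative only -- not plenoptic's actual API.
W = rng.standard_normal((4, 16))

def model(x):
    return W @ x

target = rng.standard_normal(16)  # reference stimulus
x = rng.standard_normal(16)       # random initial stimulus

# Gradient descent on ||model(x) - model(target)||^2. The result is a
# "metamer": a new stimulus the model cannot distinguish from the
# target, even though the pixels differ (the component of x lying in
# the null space of W is never touched by the gradient).
lr = 0.01
for _ in range(2000):
    err = model(x) - model(target)
    x -= lr * (2 * W.T @ err)
```

Comparing the synthesized `x` against `target` afterwards makes the point: the model responses match to high precision while the pixel values do not, which is exactly the kind of stimulus that reveals what a model ignores.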

  • Are there other Python packages that accomplish similar things? If so, how does yours differ?

Not that I'm aware of.

  • Any other questions or issues we should be aware of:

P.S. Have feedback/comments about our review process? Leave a comment here
