
Support save contexts for generated views #17

Open · wants to merge 2 commits into base: cloned_release_v0.25.0_11757
Conversation

@korbit-ai korbit-ai bot commented Aug 29, 2024

Resolves #4619

Prior to fiftyone==0.23.8, methods like compute_embeddings() and apply_model() called sample.save() in a loop, which was inefficient, so in voxel51/fiftyone#4244 we updated these methods to use save contexts.
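
For context, here is a rough sketch of the two patterns on the quickstart dataset; the processed field and the per-sample work are placeholders, not the actual implementation from voxel51/fiftyone#4244:

import fiftyone.zoo as foz

dataset = foz.load_zoo_dataset("quickstart")

# Old pattern: one database write per sample
for sample in dataset:
    sample["processed"] = True  # placeholder for the real per-sample work
    sample.save()

# New pattern: writes are batched through a save context
with dataset.save_context() as ctx:
    for sample in dataset:
        sample["processed"] = True
        ctx.save(sample)

# Equivalent shorthand
for sample in dataset.iter_samples(autosave=True):
    sample["processed"] = True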

However, save contexts were not yet implemented for generated views, so the above methods stopped working when invoked on generated views such as to_patches() 😭

This PR resolves this by fully implementing save contexts for generated views 🥂

Tested by

Added unit tests 🪄

Example usage

import fiftyone as fo
import fiftyone.zoo as foz

dataset = foz.load_zoo_dataset("quickstart")
model = foz.load_zoo_model("resnet50-imagenet-torch")

patches = dataset.limit(10).to_patches("ground_truth")

# Started failing in 0.23.8, now succeeds again
patches.compute_patch_embeddings(model, "ground_truth", embeddings_field="resnet")
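
Save contexts can now also be opened explicitly on the generated view itself. A minimal sketch continuing the snippet above (the foo attribute is illustrative, not part of the quickstart schema):

# Batch label edits on the patches view via an explicit save context
with patches.save_context() as ctx:
    for patch in patches:
        patch.ground_truth.foo = "bar"  # illustrative dynamic attribute
        ctx.save(patch)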

Summary by CodeRabbit

  • New Features

    • Improved handling of deferred saves in sample management, allowing for more flexible saving options.
    • Enhanced tracking and synchronization of generated samples within collections.
    • Added functionality for managing source synchronization in patch operations.
  • Bug Fixes

    • Resolved constraints on deferred saves that previously restricted functionality.
  • Tests

    • Introduced new tests to validate context saving for patches, frames, and clips, ensuring robust dataset handling.
  • Refactor

    • Streamlined synchronization logic in various components to improve maintainability and functionality.

Description by Korbit AI

What change is being made?

Enable save contexts for generated views in the FiftyOne library, including support for Clips, Patches, and Frames views, and add corresponding unit tests.

Why are these changes being made?

Previously, generated views such as Clips, Patches, and Frames did not support save contexts, which limited their functionality. This change allows these views to save changes back to their source datasets, enhancing the usability and flexibility of the library. The added unit tests ensure that the new functionality works as expected and maintains data integrity.
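
As a rough illustration of that round trip, reusing the quickstart example from the PR description (the foo attribute and the printed result are hypothetical):

patches = dataset.limit(10).to_patches("ground_truth")

# Edits made through the generated view...
for patch in patches.iter_samples(autosave=True):
    patch.ground_truth.foo = "bar"

# ...are synced back to the source dataset
print(dataset.count_values("ground_truth.detections.foo"))
# e.g. {'bar': <number of edited patches>}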

korbit-ai bot commented Aug 29, 2024

Clone of the PR voxel51/fiftyone#4636


👋 I'm here to help you review your pull request. When you're ready for me to perform a review, you can comment anywhere on this pull request with this command: /korbit-review.

As a reminder, here are some helpful tips on how we can collaborate together:

  • To have me re-scan your pull request, simply re-invoke the /korbit-review command in a new comment.
  • You can interact with me by tagging @development-korbit-ai-mentor in any conversation in your pull requests.
  • On any comment I make on your code, please leave a 👍 if it is helpful and a 👎 if it is unhelpful. This will help me learn and improve as we work together.
  • Lastly, to learn more, check out our Docs.

korbit-ai bot commented Aug 29, 2024

My review is in progress 📖 - I will have feedback for you in a few minutes!

coderabbitai bot commented Aug 29, 2024

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Base branches to auto review (4)
  • develop
  • main
  • release/.*
  • feat/.*

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.



@korbit-ai korbit-ai bot left a comment


Review Summary by Korbit AI

Code Execution Comments

  • Implement error handling to check for ground_truth presence before setting attributes to avoid potential AttributeError.


Comment on lines +751 to +752
for sample in view.iter_samples(autosave=True):
sample.ground_truth.foo = "bar"

Category: Error Handling

The code iterates through the samples in the view and sets sample.ground_truth.foo = "bar", but it does not handle the case when a sample may not have a ground_truth field, such as sample2 in this test. This can lead to an AttributeError if ground_truth does not exist. Consider adding error handling, such as a try/except block or an if statement, to check for the presence of ground_truth before accessing and setting the foo attribute.



@korbit-ai How would I fix this issue?


Hi @julianser, it's great that you're working on making save contexts available for more interfaces. One way to prevent the possible AttributeError when the ground_truth field may not exist is to add a guard such as if 'ground_truth' in sample: before accessing sample.ground_truth and setting the foo attribute. This checks for the presence of ground_truth before attempting to access it, thereby preventing the AttributeError. Keep up the great work!
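
For reference, a minimal sketch of how such a guard could look in the quoted test loop, here using a None check since the field may exist in the view's schema but be unset for a particular sample:

for sample in view.iter_samples(autosave=True):
    # Skip samples whose ground_truth field is unset (e.g. sample2 in this test)
    if sample.ground_truth is None:
        continue

    sample.ground_truth.foo = "bar"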
