
dev: move storage metadata collection to background job #5818

Merged: 3 commits into preview on Oct 16, 2024

Conversation

@pablohashescobar (Collaborator) commented Oct 14, 2024

  • Fix the storage metadata logic to check for an empty dict instead of None.
  • Move the metadata collection logic to a background job.

Summary by CodeRabbit

  • New Features

    • Introduced asynchronous metadata retrieval for asset management.
    • Added file type and size validation for attachments before upload.
    • Enhanced data retrieval for cycle archive with detailed estimates and distributions.
  • Bug Fixes

    • Enhanced asset deletion logic to mark assets as deleted instead of removing them outright.
  • Documentation

    • Updated method signatures for clarity and consistency across various endpoints.
  • Chores

    • Modified Nginx configuration for improved request handling to dynamic buckets.
    • Updated environment variable declaration for AWS region in Docker configuration.

coderabbitai bot (Contributor) commented Oct 14, 2024

Walkthrough

The changes introduced in this pull request involve modifications across multiple endpoints to implement asynchronous retrieval of asset metadata using the get_asset_object_metadata function. This function is designed as a Celery task and is invoked in several patch methods across different classes. Additionally, method signatures in the ProjectAssetEndpoint and IssueAttachmentV2Endpoint classes are updated to accommodate new parameters and functionalities, while the S3Storage class sees enhancements for improved flexibility in client construction.
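The fire-and-forget shape of these calls (queue the task, return the response immediately) can be illustrated without a running Celery broker. The thread-pool `.delay` stand-in below is a hypothetical sketch of the pattern, not Plane's actual task wiring:

```python
from concurrent.futures import ThreadPoolExecutor

_pool = ThreadPoolExecutor(max_workers=2)

def background_task(fn):
    # Attach a Celery-style .delay() that runs fn off the calling thread.
    fn.delay = lambda *args, **kwargs: _pool.submit(fn, *args, **kwargs)
    return fn

@background_task
def get_asset_object_metadata(asset_id):
    # The real task looks up the FileAsset and queries S3 for its
    # object metadata; this stand-in returns a placeholder dict.
    return {"asset_id": asset_id, "size": 0}

# The view queues the work and returns at once; the result materializes
# later, analogous to a Celery .delay() call.
future = get_asset_object_metadata.delay("some-asset-id")
print(future.result())  # {'asset_id': 'some-asset-id', 'size': 0}
```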

Changes

apiserver/plane/app/views/asset/v2.py
  • Added an import for get_asset_object_metadata.
  • Updated the patch method in ProjectAssetEndpoint to call get_asset_object_metadata.delay(asset_id=str(asset_id)).
  • Updated the method signature to def patch(self, request, slug, project_id, pk).

apiserver/plane/app/views/issue/attachment.py
  • Added an import for get_asset_object_metadata.
  • Updated the post, delete, get, and patch methods with new signatures and functionality, including validation and asynchronous metadata retrieval.

apiserver/plane/bgtasks/storage_metadata_task.py
  • Introduced the get_asset_object_metadata function as a Celery task for asynchronous metadata retrieval.

apiserver/plane/settings/storage.py
  • Modified the S3Storage __init__ method for more flexible S3 client construction. No changes to method signatures.

apiserver/plane/space/views/asset.py
  • Added an import for get_asset_object_metadata.
  • Updated the patch method in EntityAssetEndpoint to call get_asset_object_metadata.delay.
  • Corrected the indentation of the asset variable assignment.

deploy/selfhost/docker-compose.yml
  • Simplified the AWS_REGION declaration from AWS_REGION=${AWS_REGION:-""} to AWS_REGION=${AWS_REGION:-}.

nginx/nginx.conf.dev
  • Modified the bucket location block: removed the trailing slash and updated the proxy_pass directive to target the bucket directly.

nginx/nginx.conf.template
  • Applied the same trailing-slash and proxy_pass change to the dynamic bucket location block.

apiserver/plane/app/views/cycle/archive.py
  • Updated the avatar_url logic in CycleArchiveUnarchiveEndpoint, enhancing data retrieval for estimates and distributions.

Possibly related PRs

Suggested labels

🔄migrations

Suggested reviewers

  • NarayanBavisetti
  • sriramveeraghanta

Poem

In the burrow where changes bloom,
Assets gather, dispelling gloom.
With tasks that hop and swiftly glide,
Metadata waits, no need to hide.
A patch here, a patch there,
Asynchronous magic fills the air! 🐇✨


@pablohashescobar pablohashescobar added 🐛bug Something isn't working ⚙️backend labels Oct 14, 2024
coderabbitai bot (Contributor) left a comment


Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (4)
apiserver/plane/bgtasks/storage_metadata_task.py (1)

1-28: Overall assessment: Good implementation with room for improvement

The new background task for retrieving asset object metadata is well-structured and appropriately uses Celery for asynchronous processing. The error handling and use of S3Storage are good practices. However, there are opportunities to enhance error handling, improve debugging capabilities, and add return values for better observability.

The suggested improvements will make the function more robust and easier to debug. Additionally, verifying the usage of this function in other parts of the codebase is crucial to ensure it's being called correctly and used as intended.

apiserver/plane/settings/storage.py (2)

42-46: Approve with suggestions: Dynamic endpoint URL setting

The introduction of dynamic endpoint URL setting based on the request context is a good improvement. However, there are a few points to consider:

  1. Type hinting and validation:
    Add type hinting for the request parameter and implement validation to ensure it has the required attributes.

  2. Consistency:
    Apply the same logic to the regular S3 client creation for consistent behavior across different storage backends.

  3. Security considerations:
    Ensure that the use of request.scheme and request.get_host() is safe in your context. Consider implementing additional validation or using a whitelist of allowed hosts to prevent potential security issues.

Here's a suggested implementation addressing these points:

from typing import Optional
from django.http import HttpRequest

class S3Storage(S3Boto3Storage):
    def __init__(self, request: Optional[HttpRequest] = None):
        # ... (existing code)

        def get_endpoint_url():
            if request and hasattr(request, 'scheme') and hasattr(request, 'get_host'):
                return f"{request.scheme}://{request.get_host()}"
            return self.aws_s3_endpoint_url

        endpoint_url = get_endpoint_url()

        if os.environ.get("USE_MINIO") == "1":
            # Create an S3 client for MinIO
            self.s3_client = boto3.client(
                "s3",
                aws_access_key_id=self.aws_access_key_id,
                aws_secret_access_key=self.aws_secret_access_key,
                region_name=self.aws_region,
                endpoint_url=endpoint_url,
                config=boto3.session.Config(signature_version="s3v4"),
            )
        else:
            # Create an S3 client
            self.s3_client = boto3.client(
                "s3",
                aws_access_key_id=self.aws_access_key_id,
                aws_secret_access_key=self.aws_secret_access_key,
                region_name=self.aws_region,
                endpoint_url=endpoint_url,
                config=boto3.session.Config(signature_version="s3v4"),
            )

This implementation adds type hinting, validates the request object, and applies the dynamic endpoint URL setting to both MinIO and regular S3 client creation.


Line range hint 1-180: Summary: Localized change with potential for broader impact

The changes made to the S3Storage class are localized to the __init__ method and introduce a more flexible way of setting the endpoint URL for the S3 client. While this change doesn't directly affect other methods in the class, it does alter the class's initialization behavior, which could have broader implications:

  1. Any code that instantiates S3Storage might need to be updated to pass a request object if dynamic endpoint URL setting is desired.
  2. The change currently only affects MinIO client creation, which could lead to inconsistent behavior between MinIO and regular S3 usage.
  3. The new behavior introduces a potential security consideration that should be carefully evaluated in the context of the entire application.

Overall, the change is an improvement in flexibility, but care should be taken to ensure it's implemented consistently and securely across the application.

Consider creating a configuration or environment variable to explicitly enable or disable this dynamic endpoint URL setting. This would provide more control over when and where this feature is used, making it easier to manage in different environments (development, staging, production).
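Such a toggle could look like the sketch below; USE_DYNAMIC_S3_ENDPOINT and resolve_endpoint_url are hypothetical names used for illustration, not existing Plane settings:

```python
import os

def resolve_endpoint_url(request_scheme=None, request_host=None,
                         configured_url="http://minio:9000"):
    # Hypothetical flag: build the endpoint from the incoming request only
    # when explicitly enabled and request details are actually available.
    if (os.environ.get("USE_DYNAMIC_S3_ENDPOINT") == "1"
            and request_scheme and request_host):
        return f"{request_scheme}://{request_host}"
    # Otherwise fall back to the statically configured endpoint.
    return configured_url

os.environ["USE_DYNAMIC_S3_ENDPOINT"] = "1"
print(resolve_endpoint_url("https", "plane.example.com"))  # https://plane.example.com
```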

apiserver/plane/space/views/asset.py (1)

166-167: LGTM: Improved metadata handling with background task.

The changes effectively address the PR objectives:

  1. The condition now checks for a falsy value of storage_metadata, which is more robust than checking for None.
  2. The metadata retrieval is now offloaded to a background task, which should improve performance.

Consider adding error handling for the background task call:

try:
    get_asset_object_metadata.delay(str(asset.id))
except Exception as e:
    # Log the error
    logger.error(f"Failed to queue metadata retrieval for asset {asset.id}: {str(e)}")
    # Optionally, you could also set a flag on the asset to indicate the metadata retrieval needs to be retried

This will ensure that any issues with queuing the background task are logged and don't silently fail.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 701af73 and 1245d20.

📒 Files selected for processing (5)
  • apiserver/plane/app/views/asset/v2.py (4 hunks)
  • apiserver/plane/app/views/issue/attachment.py (2 hunks)
  • apiserver/plane/bgtasks/storage_metadata_task.py (1 hunks)
  • apiserver/plane/settings/storage.py (1 hunks)
  • apiserver/plane/space/views/asset.py (2 hunks)
🧰 Additional context used
🔇 Additional comments (10)
apiserver/plane/bgtasks/storage_metadata_task.py (2)

1-7: LGTM: Imports are well-organized and relevant.

The imports are appropriately separated into third-party and module imports. All imported items are used in the code, and the necessary components for the task are included.


10-28: 🛠️ Refactor suggestion

Improve error handling and add return values for better observability.

The function looks good overall, but there are a few areas that could be improved:

  1. Silent returns in exception cases make debugging difficult.
  2. There's no logging for the FileAsset.DoesNotExist case.
  3. The function doesn't return any value indicating success or failure.
  4. There's no validation of the retrieved metadata before saving.

Consider applying the following improvements:

 @shared_task
 def get_asset_object_metadata(asset_id):
     try:
         # Get the asset
         asset = FileAsset.objects.get(pk=asset_id)
         # Create an instance of the S3 storage
         storage = S3Storage()
         # Get the storage
         metadata = storage.get_object_metadata(
             object_name=asset.asset.name
         )
+        # Validate metadata
+        if not metadata:
+            raise ValueError("Retrieved metadata is empty")
         # Save the asset
+        asset.storage_metadata = metadata
         asset.save()
-        return
+        return True
     except FileAsset.DoesNotExist:
+        log_exception(f"FileAsset with id {asset_id} does not exist")
-        return
+        return False
     except Exception as e:
         log_exception(e)
-        return
+        return False

These changes will:

  1. Add logging for the FileAsset.DoesNotExist case.
  2. Return boolean values to indicate success or failure.
  3. Add basic validation for the retrieved metadata.
  4. Ensure that all return paths have a value.

To ensure that this function is being called correctly from other parts of the codebase, you can run the following script:

This script will help verify that the function is being imported and called correctly, and that it's being used as a Celery task where appropriate.

✅ Verification successful

The proposed improvements enhance error handling and observability without introducing issues.

The suggested changes to the get_asset_object_metadata function are valid and beneficial:

  1. Added logging for the FileAsset.DoesNotExist case improves debugging capabilities.
  2. Returning boolean values (True for success and False for failure) provides better observability of the task's outcome.
  3. Validation of retrieved metadata ensures data integrity before saving.

These modifications will help maintain the robustness of the task and facilitate easier troubleshooting.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the usage of get_asset_object_metadata in the codebase

# Search for imports of the function
echo "Checking imports:"
rg --type python "from plane.bgtasks.storage_metadata_task import get_asset_object_metadata"

# Search for calls to the function
echo "\nChecking function calls:"
rg --type python "get_asset_object_metadata\s*\("

# Check if the function is being used as a Celery task
echo "\nChecking Celery task usage:"
rg --type python "get_asset_object_metadata\.delay\s*\("

Length of output: 475


Script:

#!/bin/bash
# Description: Revised verification of get_asset_object_metadata usage in the codebase

# Search for imports of the function using the correct type identifier
echo "Checking imports:"
rg --type py "from plane\.bgtasks\.storage_metadata_task import get_asset_object_metadata"

# Search for calls to the function
echo "\nChecking function calls:"
rg --type py "get_asset_object_metadata\s*\("

# Check if the function is being used as a Celery task
echo "\nChecking Celery task usage:"
rg --type py "get_asset_object_metadata\.delay\s*\("

Length of output: 1329

apiserver/plane/space/views/asset.py (2)

18-18: LGTM: New import aligns with PR objectives.

The addition of get_asset_object_metadata from the background tasks module aligns well with the PR objective of moving storage metadata collection to a background job. This change should help improve performance by offloading the metadata collection process.


Line range hint 1-285: Overall assessment: Changes meet PR objectives and improve code quality.

The modifications in this file successfully address the PR objectives:

  1. The storage metadata logic has been improved by changing the condition to check for a falsy value instead of None.
  2. The metadata collection process has been moved to a background job, which should enhance performance.

These changes should lead to more efficient handling of asset metadata without blocking the main execution thread. The code is now more robust and scalable.

apiserver/plane/app/views/issue/attachment.py (2)

23-23: Import get_asset_object_metadata for asynchronous metadata collection

The import statement correctly adds get_asset_object_metadata from plane.bgtasks.storage_metadata_task, enabling the use of a background task for storage metadata collection.


258-259: Verify that deferring metadata collection does not affect functionality

By moving the storage metadata collection to a background task when issue_attachment.storage_metadata is falsy, there may be a delay before this data becomes available. Please ensure that no immediate operations depend on issue_attachment.storage_metadata being populated immediately after the patch method execution.
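One defensive pattern for that window is to treat missing metadata as "not yet known" rather than assuming it is populated. The helper below is an illustrative sketch; ContentLength mirrors the key S3 returns for a HEAD request, but the function name and fallback are assumptions:

```python
def get_attachment_size(storage_metadata):
    # The background task may not have run yet, so the metadata can be
    # None or {}; return None as a "size not yet known" sentinel.
    if not storage_metadata:
        return None
    return storage_metadata.get("ContentLength")

print(get_attachment_size({}))                       # None
print(get_attachment_size({"ContentLength": 1024}))  # 1024
```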

apiserver/plane/app/views/asset/v2.py (4)

25-25: Import statement for background task is appropriate.

The addition of get_asset_object_metadata import enables asynchronous retrieval of storage metadata using a background task.


200-201: Ensure correct handling of empty storage_metadata.

The condition if not asset.storage_metadata will evaluate to True when asset.storage_metadata is None or an empty dictionary {}. Confirm that this behavior is intentional and that an empty dictionary signifies missing metadata.
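The difference between the old identity check and the new truthiness check can be demonstrated directly:

```python
# A plain `is None` check misses assets saved with an empty dict,
# while the truthiness check covers both "never set" and "saved empty".
old_check = lambda meta: meta is None   # previous logic
new_check = lambda meta: not meta       # current logic

assert new_check(None) and new_check({})       # both now trigger a re-fetch
assert old_check(None) and not old_check({})   # {} slipped through before
print("empty dict now triggers metadata collection")
```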


450-451: Ensure correct handling of empty storage_metadata.


687-688: Ensure correct handling of empty storage_metadata.

coderabbitai bot (Contributor) left a comment


Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (5)
apiserver/plane/bgtasks/storage_metadata_task.py (1)

1-28: Overall assessment: Good implementation with room for improvement

The introduction of this background task for retrieving asset metadata is a positive addition to the project. It aligns well with the PR objectives of moving storage metadata collection to a background job. The use of Celery for this purpose is appropriate and will help in improving the overall performance of the application.

However, there are several areas where the implementation could be enhanced:

  1. More detailed return values for better task result tracking.
  2. Increased flexibility in S3Storage configuration.
  3. More robust handling of potential edge cases (like empty metadata).
  4. Improved logging for both successful and unsuccessful operations.

These improvements would make the task more robust, flexible, and easier to monitor and debug. Consider implementing the suggested changes to further enhance the quality and maintainability of this background task.

apiserver/plane/space/views/asset.py (1)

166-167: LGTM: Improved metadata handling with background task.

The changes successfully address the PR objectives:

  1. The condition now correctly checks for an empty dictionary instead of None.
  2. Metadata collection is moved to a background job, which should improve performance.

Consider adding a comment or log statement to indicate that metadata retrieval has been queued as a background task. This could be helpful for debugging or monitoring purposes.

Example:

if not asset.storage_metadata:
    get_asset_object_metadata.delay(str(asset.id))
    logger.info(f"Metadata retrieval queued for asset {asset.id}")

Also, consider if there are any scenarios where the metadata might be needed immediately after this operation. If so, you may want to add a flag to optionally wait for the metadata retrieval.

apiserver/plane/app/views/asset/v2.py (3)

200-201: LGTM: Improved metadata check and background task implementation

The changes effectively implement the PR objectives:

  1. The condition for checking storage metadata has been improved to handle empty dictionaries.
  2. The metadata collection has been moved to a background task using get_asset_object_metadata.delay().

These changes should improve performance by offloading the metadata collection process.

Consider adding error handling for the background task to ensure any failures are logged or handled appropriately:

try:
    get_asset_object_metadata.delay(asset_id=str(asset_id))
except Exception as e:
    logger.error(f"Failed to start background task for asset {asset_id}: {str(e)}")

687-688: LGTM: Consistent implementation of background task for project assets

The changes in the ProjectAssetEndpoint.patch method are consistent with those in the UserAssetsV2Endpoint.patch method:

  1. The condition for checking storage metadata has been improved.
  2. The metadata collection has been moved to a background task.

This consistency ensures that both user and project assets are handled similarly.

As suggested for the UserAssetsV2Endpoint.patch method, consider adding error handling:

try:
    get_asset_object_metadata.delay(asset_id=str(pk))
except Exception as e:
    logger.error(f"Failed to start background task for asset {pk}: {str(e)}")

Line range hint 25-688: Overall LGTM: Successful implementation of background tasks for metadata collection

The changes in this file successfully address the PR objectives:

  1. The storage metadata logic has been fixed by improving the condition checks.
  2. Metadata collection has been moved to a background job using Celery's delay() method.

These modifications are consistently applied across both UserAssetsV2Endpoint and ProjectAssetEndpoint classes, ensuring uniform behavior for different types of assets. The implementation should lead to improved performance by offloading the potentially time-consuming metadata collection process to background tasks.

To further enhance this implementation, consider:

  1. Implementing a retry mechanism for failed background tasks.
  2. Adding monitoring and logging for the background tasks to track their performance and any potential issues.
  3. Ensuring that the Celery worker configuration is optimized to handle the expected load of metadata collection tasks.
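A retry mechanism along those lines could be sketched with a plain decorator; Celery itself offers autoretry_for and retry_backoff on shared_task, but the with_retries helper and its parameters below are hypothetical:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    # Exponential backoff, re-raising only after the final attempt,
    # mirroring the behavior of Celery's autoretry options.
    def wrapper(*args, **kwargs):
        for attempt in range(attempts):
            try:
                return fn(*args, **kwargs)
            except Exception:
                if attempt == attempts - 1:
                    raise
                time.sleep(base_delay * (2 ** attempt))
    return wrapper

calls = {"n": 0}

@with_retries
def flaky_metadata_fetch(asset_id):
    # Simulate two transient failures before a successful S3 lookup.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient S3 error")
    return {"asset_id": asset_id}

print(flaky_metadata_fetch("abc"), calls["n"])  # {'asset_id': 'abc'} 3
```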
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 701af73 and 1245d20.

📒 Files selected for processing (5)
  • apiserver/plane/app/views/asset/v2.py (4 hunks)
  • apiserver/plane/app/views/issue/attachment.py (2 hunks)
  • apiserver/plane/bgtasks/storage_metadata_task.py (1 hunks)
  • apiserver/plane/settings/storage.py (1 hunks)
  • apiserver/plane/space/views/asset.py (2 hunks)
🧰 Additional context used
🔇 Additional comments (7)
apiserver/plane/bgtasks/storage_metadata_task.py (1)

1-7: LGTM: Imports are well-organized and relevant.

The imports are appropriately separated into third-party and module imports. All imported modules and functions are used in the code, and they cover the necessary dependencies for the task.

apiserver/plane/settings/storage.py (2)

Line range hint 1-164: Note: Misalignment with PR objectives

While the changes in this file introduce flexibility in endpoint URL construction, they don't directly address the main objectives stated in the PR description:

  1. Moving storage metadata collection to a background job
  2. Fixing storage metadata logic (changing the condition to check for an empty dictionary instead of None)

Consider clarifying how these changes relate to the PR objectives or if they should be part of a separate PR.

To ensure we haven't missed any relevant changes, let's check for metadata-related modifications:

✅ Verification successful

Confirmed: Changes Do Not Address PR Objectives

The updates in apiserver/plane/settings/storage.py do not implement the main objectives of the PR:

  1. Moving storage metadata collection to a background job.
  2. Fixing storage metadata logic by changing the condition to check for an empty dictionary instead of None.

Please ensure these objectives are addressed in the relevant parts of the codebase or consider separating them into a different PR.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Search for metadata-related changes
# Expected: No significant changes to metadata handling in this file

rg --type python "metadata" apiserver/plane/settings/storage.py

Length of output: 188


Script:

#!/bin/bash
# Description: Search for metadata-related changes in storage.py
# Expected: Details of metadata handling within the specified file

rg "metadata" apiserver/plane/settings/storage.py

Length of output: 145


42-46: Approve with suggestions: Enhance flexibility and improve code clarity

The change introduces welcome flexibility by allowing dynamic endpoint URL construction based on the request object. However, there are a few points to address:

  1. Add the request parameter to the method signature for clarity:

    def __init__(self, request=None):
  2. Add type hints and documentation for the request parameter:

    from typing import Optional
    from django.http import HttpRequest
    
    def __init__(self, request: Optional[HttpRequest] = None):
        """
        Initialize the S3Storage.
        
        :param request: The current HTTP request, used for dynamic endpoint URL construction.
                        If None, falls back to the configured endpoint URL.
        """
  3. Consider adding a comment explaining why this change was made, as it's not immediately clear how it relates to the PR objectives of moving storage metadata collection to a background job.

To ensure this change doesn't introduce inconsistencies, let's verify its usage:

✅ Verification successful

Approve with clarification: Consistent usage of request parameter confirmed

The verification reveals that S3Storage is consistently initialized with the request parameter across multiple files in the codebase. This demonstrates that the introduced flexibility aligns with existing usage patterns and does not introduce inconsistencies.

No further action is required regarding the request parameter usage.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check for other occurrences of S3Storage initialization
# Expected: No other occurrences of S3Storage being initialized with a request parameter

rg --type python "S3Storage\(" -C 3

Length of output: 188


Script:

#!/bin/bash
# Description: Check for other occurrences of S3Storage initialization
# Expected: No other occurrences of S3Storage being initialized with a request parameter

rg "S3Storage\s*\(" --type py -C 3

Length of output: 6318

apiserver/plane/space/views/asset.py (2)

18-18: LGTM: New import for background task.

The import of get_asset_object_metadata from the background tasks module aligns with the PR objective of moving storage metadata collection to a background job.


Line range hint 1-280: Overall assessment: Changes align with PR objectives and improve code quality.

The modifications in this file successfully implement the intended changes:

  1. Fixed the storage metadata logic to check for an empty dictionary instead of None.
  2. Moved the metadata collection to a background job, potentially improving performance.

These changes are focused and don't introduce unrelated alterations to the file. The overall structure and functionality of the file remain intact, with improvements to the asset metadata handling process.

apiserver/plane/app/views/asset/v2.py (1)

25-25: LGTM: New import for background task

The new import statement for get_asset_object_metadata is correctly placed and aligns with the PR objective of moving storage metadata collection to a background job.

apiserver/plane/app/views/issue/attachment.py (1)

23-23: Import statement added correctly

The import of get_asset_object_metadata from plane.bgtasks.storage_metadata_task is appropriate and necessary for the background task implementation.

coderabbitai bot (Contributor) left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 1245d20 and 35c9c18.

📒 Files selected for processing (3)
  • deploy/selfhost/docker-compose.yml (1 hunks)
  • nginx/nginx.conf.dev (1 hunks)
  • nginx/nginx.conf.template (1 hunks)
🧰 Additional context used
🔇 Additional comments (1)
deploy/selfhost/docker-compose.yml (1)

37-37: LGTM: Simplified AWS_REGION environment variable declaration

The change from AWS_REGION=${AWS_REGION:-""} to AWS_REGION=${AWS_REGION:-} is a minor improvement in the Docker Compose configuration. Both versions effectively set an empty string as the default value for AWS_REGION, but the new version is more concise. This change:

  1. Maintains the same functionality
  2. Slightly improves readability
  3. Aligns with common Docker Compose practices

Resolved review comments: nginx/nginx.conf.dev, nginx/nginx.conf.template
coderabbitai bot (Contributor) left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Files that changed from the base of the PR and between 35c9c18 and 4f3fb6b.

📒 Files selected for processing (1)
  • apiserver/plane/app/views/cycle/archive.py (1 hunks)
🧰 Additional context used

Resolved review comment: apiserver/plane/app/views/cycle/archive.py
@sriramveeraghanta sriramveeraghanta merged commit 9b85306 into preview Oct 16, 2024
14 of 15 checks passed
@sriramveeraghanta sriramveeraghanta deleted the fix-storage-metadata branch October 16, 2024 08:25