feat(server): move export csv items and GeoJSON to usecase layer #1329

Open

jasonkarel wants to merge 11 commits into main

Conversation

@jasonkarel (Contributor) commented Dec 2, 2024

Overview

Move the ItemsAsCSV, ItemsAsGeoJSON, ItemsWithProjectAsCSV, and ItemsWithProjectAsGeoJSON data-export capabilities to the usecase layer, so they can be reused from other layers such as GraphQL and the APIs.

What I've done

  1. Moved the ItemsAsCSV, ItemsAsGeoJSON, ItemsWithProjectAsCSV, and ItemsWithProjectAsGeoJSON functions to the usecase layer.
  2. Added the ItemsAsCSV, ItemsAsGeoJSON, ItemsWithProjectAsCSV, and ItemsWithProjectAsGeoJSON method signatures to the Item usecase interface (sketched below).
  3. Added unit tests for all public methods.
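
For reference, the method signatures added to the Item usecase interface (quoted from server/internal/usecase/interfaces/item.go in this PR; surrounding interface members and imports are omitted):

type Item interface {
	// ItemsAsCSV exports items data in content to csv file by modelID.
	ItemsAsCSV(context.Context, id.ModelID, *int, *int, *usecase.Operator) (*io.PipeReader, error)
	// ItemsAsGeoJSON converts items to Geo JSON type given a model ID
	ItemsAsGeoJSON(context.Context, id.ModelID, *int, *int, *usecase.Operator) (*integrationapi.FeatureCollection, error)
	// ItemsWithProjectAsCSV converts items content to CSV given by project ID or project Alias and model ID or model Key
	ItemsWithProjectAsCSV(context.Context, project.IDOrAlias, model.IDOrKey, *int, *int, *usecase.Operator) (*io.PipeReader, error)
	// ItemsWithProjectAsGeoJSON converts items content to Geo JSON given by project ID or project Alias and model ID or model Key
	ItemsWithProjectAsGeoJSON(context.Context, project.IDOrAlias, model.IDOrKey, *int, *int, *usecase.Operator) (*integrationapi.FeatureCollection, error)
	// ... other existing methods elided
}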

What I haven't done

How I tested

Local:
Tested in a local environment, ensuring the CSV is generated through /api/models/01jd19w4esd28p6m45sn35mycs/items.csv?page=1&perPage=50&ref=latest (see the request sketch below).
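
A minimal Go sketch of that local check; the base URL (localhost:8080) and the absence of auth headers are assumptions about the local setup, not part of this PR:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Model ID and query parameters are the ones used in the manual test above.
	url := "http://localhost:8080/api/models/01jd19w4esd28p6m45sn35mycs/items.csv?page=1&perPage=50&ref=latest"
	resp, err := http.Get(url) // add whatever auth header the local server requires
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body)) // expected: a CSV header row followed by item rows
}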

Cases:

  1. Success case: generates the CSV/GeoJSON.
  2. Error case: pointFieldIsNotSupportedError.
  3. Pagination cases.
  4. ID-not-found cases.

Screenshot

Which points I want you to review particularly

  1. Code best practices.
  2. Whether the functionality is consistent with the existing CSV import feature.

Memo

Summary by CodeRabbit

  • New Features

    • Introduced methods for exporting items as CSV and GeoJSON, enhancing data export capabilities.
    • Added pagination support for item retrieval during CSV generation.
    • Enhanced the Item interface with new methods for retrieving items in various formats based on model and project IDs.
  • Bug Fixes

    • Improved error handling for unsupported geometry fields during CSV export.
  • Tests

    • Added unit tests for the new CSV and GeoJSON export functionalities, ensuring robustness and error handling.


coderabbitai bot commented Dec 2, 2024

Walkthrough

The changes in this pull request enhance the functionality for exporting items in CSV and GeoJSON formats within the server's item processing components. Key methods in the Server struct have been streamlined to utilize new methods in the Item struct, which now include pagination and schema retrieval. Additionally, a new file for CSV export logic has been introduced, along with comprehensive unit tests for the CSV export functionality. The modifications ensure consistent error handling across methods related to item filtering and data export.
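
Roughly, a streamlined adapter method now just delegates to the usecase. A sketch based on the adapter diff quoted later in this thread; the 200-response type name is assumed by analogy with ItemsWithProjectAsCSV200TextcsvResponse:

func (s *Server) ItemsAsCSV(ctx context.Context, request ItemsAsCSVRequestObject) (ItemsAsCSVResponseObject, error) {
	op := adapter.Operator(ctx)
	uc := adapter.Usecases(ctx)

	// Schema lookup, pagination, and CSV streaming now live in the usecase layer.
	pr, err := uc.Item.ItemsAsCSV(ctx, request.ModelId, request.Params.Page, request.Params.PerPage, op)
	if err != nil {
		if errors.Is(err, rerror.ErrNotFound) {
			return ItemsAsCSV404Response{}, err
		}
		return ItemsAsCSV400Response{}, err
	}

	return ItemsAsCSV200TextcsvResponse{Body: pr}, nil
}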

Changes

File Path: Change Summary

  • server/internal/adapter/integration/item.go: Simplified the ItemsAsGeoJSON, ItemsAsCSV, ItemsWithProjectAsGeoJSON, and ItemsWithProjectAsCSV methods by calling the new use case methods; consistent error handling retained.
  • server/internal/usecase/interactor/item.go: Added ItemsAsCSV, ItemsAsGeoJSON, ItemsWithProjectAsCSV, and ItemsWithProjectAsGeoJSON methods for exporting data with pagination; introduced the buildSchemaPackage and fromPagination helper functions.
  • server/internal/usecase/interactor/item_export.go: Modified the package declaration and logging functionality; updated error handling in handleCSVGeneration to use the new logging.
  • server/internal/usecase/interactor/item_export_test.go: New file with unit tests for csvFromItems, covering various scenarios for schema fields, particularly geometry fields.
  • server/internal/usecase/interfaces/item.go: Added method signatures for ItemsAsCSV, ItemsAsGeoJSON, ItemsWithProjectAsCSV, and ItemsWithProjectAsGeoJSON to the Item interface.

Suggested reviewers

  • yk-eukarya

Poem

In the server's heart, a change so bright,
Items dance in CSV light.
With filters strong and exports neat,
A rabbit's joy, oh what a treat!
Hopping through code, we celebrate,
For every item, a new fate! 🐇✨



@github-actions github-actions bot added the server label Dec 2, 2024

netlify bot commented Dec 2, 2024

Deploy Preview for reearth-cms ready!

  • 🔨 Latest commit: fd90086
  • 🔍 Latest deploy log: https://app.netlify.com/sites/reearth-cms/deploys/675fceac21ab9c0008d0d871
  • 😎 Deploy Preview: https://deploy-preview-1329--reearth-cms.netlify.app

coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (3)
server/internal/usecase/interactor/item_export.go (1)

35-38: Handle errors returned by Close and CloseWithError

In the handleCSVGeneration function, the errors returned by pw.CloseWithError(err) and pw.Close() are being ignored. Although it's common to ignore these errors, it's better practice to check and handle them to ensure that any potential issues are not silently overlooked.

Consider handling the errors as follows:

 func handleCSVGeneration(pw *io.PipeWriter, l item.VersionedList, s *schema.Schema) {
 	err := generateCSV(pw, l, s)
 	if err != nil {
 		log.Errorf("failed to generate CSV: %v", err)
-		_ = pw.CloseWithError(err)
+		if cerr := pw.CloseWithError(err); cerr != nil {
+			log.Errorf("failed to close pipe with error: %v", cerr)
+		}
 	} else {
-		_ = pw.Close()
+		if cerr := pw.Close(); cerr != nil {
+			log.Errorf("failed to close pipe: %v", cerr)
+		}
 	}
 }
server/internal/usecase/interfaces/item.go (1)

95-95: Add documentation comment for exported method ItemsAsCSV

To adhere to Go conventions, consider adding a documentation comment for the ItemsAsCSV method in the Item interface. This helps other developers understand its purpose and usage.

For example:

// ItemsAsCSV exports items of a model as CSV.
// It supports pagination through the `page` and `perPage` parameters.
ItemsAsCSV(context.Context, id.ModelID, *int, *int, *usecase.Operator) (*io.PipeReader, error)
server/internal/adapter/integration/item.go (1)

Line range hint 166-197: Refactor ItemsWithProjectAsCSV to use uc.Item.ItemsAsCSV

In the ItemsWithProjectAsCSV method, the logic for fetching items and generating CSV output is similar to ItemsAsCSV. To enhance code reuse and maintainability, consider refactoring this method to utilize uc.Item.ItemsAsCSV, promoting consistency across the codebase.

This change would simplify the method and reduce duplication:

 func (s *Server) ItemsWithProjectAsCSV(ctx context.Context, request ItemsWithProjectAsCSVRequestObject) (ItemsWithProjectAsCSVResponseObject, error) {
 	op := adapter.Operator(ctx)
 	uc := adapter.Usecases(ctx)

 	prj, err := uc.Project.FindByIDOrAlias(ctx, request.ProjectIdOrAlias, op)
 	if err != nil {
 		if errors.Is(err, rerror.ErrNotFound) {
 			return ItemsWithProjectAsCSV404Response{}, err
 		}
 		return ItemsWithProjectAsCSV400Response{}, err
 	}

 	m, err := uc.Model.FindByIDOrKey(ctx, prj.ID(), request.ModelIdOrKey, op)
 	if err != nil {
 		if errors.Is(err, rerror.ErrNotFound) {
 			return ItemsWithProjectAsCSV404Response{}, err
 		}
 		return ItemsWithProjectAsCSV400Response{}, err
 	}

-	sp, err := uc.Schema.FindByModel(ctx, m.ID(), op)
-	if err != nil {
-		return ItemsWithProjectAsCSV400Response{}, err
-	}
-
-	p := fromPagination(request.Params.Page, request.Params.PerPage)
-	items, _, err := uc.Item.FindBySchema(ctx, sp.Schema().ID(), nil, p, op)
-	if err != nil {
-		if errors.Is(err, rerror.ErrNotFound) {
-			return ItemsWithProjectAsCSV404Response{}, err
-		}
-		return ItemsWithProjectAsCSV400Response{}, err
-	}
-
-	pr, pw := io.Pipe()
-	err = csvFromItems(pw, items, sp.Schema())
+	pr, err := uc.Item.ItemsAsCSV(ctx, m.ID(), request.Params.Page, request.Params.PerPage, op)
 	if err != nil {
 		return ItemsWithProjectAsCSV400Response{}, err
 	}

 	return ItemsWithProjectAsCSV200TextcsvResponse{
 		Body: pr,
 	}, nil
 }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between eeca1e8 and f27d71e.

📒 Files selected for processing (5)
  • server/internal/adapter/integration/item.go (1 hunks)
  • server/internal/usecase/interactor/item.go (3 hunks)
  • server/internal/usecase/interactor/item_export.go (1 hunks)
  • server/internal/usecase/interactor/item_export_test.go (1 hunks)
  • server/internal/usecase/interfaces/item.go (2 hunks)

server/internal/usecase/interactor/item.go (outdated review comment, resolved)
server/internal/usecase/interactor/item_export_test.go (outdated review comment, resolved)
coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (7)
server/internal/usecase/interactor/item.go (1)

30-31: Ensure consistent numeric types for pagination constants

The constants maxPerPage and defaultPerPage are declared with different types (int and int64, respectively). For consistency and to prevent potential type mismatches during calculations, consider declaring both constants using the same numeric type.

server/internal/usecase/interactor/item_export.go (1)

17-17: Clarify error message for unsupported point fields

The error message "point type is not supported in any geometry field in this model" might be unclear to users. Consider rephrasing it for clarity, such as "The schema does not support point geometry types in any of its fields."

server/internal/usecase/interfaces/item.go (2)

5-5: Organize import statements according to Go conventions

The new import of "io" is not grouped with standard library imports. Move "io" to the standard library imports section at the top for better code organization.


95-95: Document the new ItemsAsCSV method in the interface

The ItemsAsCSV method has been added to the Item interface. Ensure that this method is properly documented with comments explaining its purpose, parameters, and return values, following the GoDoc conventions.

server/internal/usecase/interactor/item_export_test.go (1)

18-100: Expand test coverage for csvFromItems function

The current tests focus on schemas with and without geometry fields. Consider adding test cases for:

  • Large datasets to test performance and memory usage.
  • Items with special characters or commas in field values to verify CSV escaping.
  • Different supported geometry types to ensure they are handled correctly.
server/internal/adapter/integration/item.go (2)

106-113: Simplify error handling in ItemsAsCSV method

The error handling for uc.Item.ItemsAsCSV can be streamlined. Since both error conditions return similar responses, consider simplifying the error checks to reduce code redundancy.

Apply this diff to simplify the error handling:

-	if err != nil {
-		if errors.Is(err, rerror.ErrNotFound) {
-			return ItemsAsCSV404Response{}, err
-		}
-		return ItemsAsCSV400Response{}, err
-	}
+	if err != nil {
+		return ItemsAsCSV400Response{}, err
+	}

106-113: Ensure consistent response types for error conditions

In the ItemsAsCSV method, both ItemsAsCSV404Response and ItemsAsCSV400Response are used for error responses. For consistency and clarity, consider using a single response type for all error conditions unless a specific distinction is necessary.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between eeca1e8 and f27d71e.

📒 Files selected for processing (5)
  • server/internal/adapter/integration/item.go (1 hunks)
  • server/internal/usecase/interactor/item.go (3 hunks)
  • server/internal/usecase/interactor/item_export.go (1 hunks)
  • server/internal/usecase/interactor/item_export_test.go (1 hunks)
  • server/internal/usecase/interfaces/item.go (2 hunks)
🔇 Additional comments (2)
server/internal/usecase/interactor/item.go (1)

1254-1257: Verify error handling for asynchronous CSV generation

The csvFromItems function initiates CSV generation in a separate goroutine. Any errors occurring during the generation are returned via the pipe. Ensure that these errors are appropriately handled by the caller to prevent unnoticed failures.

To confirm that errors are correctly propagated and handled, you can review how the io.PipeReader is consumed and check for errors during read operations.
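
A minimal sketch of such a consumer, assuming only that the returned *io.PipeReader is read to completion (imports io and fmt); errors passed to pw.CloseWithError inside the generator goroutine surface here as read errors:

// streamCSV copies the generated CSV to any destination writer, e.g. an HTTP response body.
func streamCSV(dst io.Writer, pr *io.PipeReader) error {
	if _, err := io.Copy(dst, pr); err != nil {
		return fmt.Errorf("reading generated CSV: %w", err)
	}
	return nil
}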

server/internal/usecase/interactor/item_export.go (1)

41-64: ⚠️ Potential issue

Handle potential write errors during CSV generation

In the generateCSV function, errors from w.Error() are returned at the end. However, if w.Write() fails midway, subsequent writes are skipped silently. Ensure that the function handles write errors immediately after they occur to prevent incomplete CSV output.

Apply this diff to handle errors immediately after each write operation:

 func generateCSV(pw *io.PipeWriter, l item.VersionedList, s *schema.Schema) error {
 	w := csv.NewWriter(pw)
 	defer w.Flush()

 	headers := integrationapi.BuildCSVHeaders(s)
-	if err := w.Write(headers); err != nil {
+	if err := w.Write(headers); err != nil {
+		return err
 	}

 	nonGeoFields := lo.Filter(s.Fields(), func(f *schema.Field, _ int) bool {
 		return !f.IsGeometryField()
 	})

 	for _, ver := range l {
 		row, ok := integrationapi.RowFromItem(ver.Value(), nonGeoFields)
 		if ok {
 			if err := w.Write(row); err != nil {
 				return err
 			}
 		}
 	}

-	return w.Error()
+	return nil
 }

Likely invalid or redundant comment.

server/internal/usecase/interactor/item.go (outdated review comment, resolved)
coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (2)
server/internal/usecase/interactor/item.go (2)

30-31: Consider using consistent types for pagination constants.

Both constants are used together in pagination logic, but have different underlying types (untyped int vs int64).

Apply this diff for consistency:

-const maxPerPage = 100
-const defaultPerPage int64 = 50
+const (
+    maxPerPage     int64 = 100
+    defaultPerPage int64 = 50
+)

1245-1266: Simplify pagination logic and use constants for magic numbers.

The perPage validation logic can be simplified, and magic numbers should be constants.

Consider this refactoring:

+const (
+    minPerPage int64 = 1
+    defaultPage int64 = 1
+)

 func fromPagination(page, perPage *int) *usecasex.Pagination {
     p := int64(1)
     if page != nil && *page > 0 {
         p = int64(*page)
     }

     pp := defaultPerPage
-    if perPage != nil {
-        if ppr := *perPage; 1 <= ppr {
-            if ppr > maxPerPage {
-                pp = int64(maxPerPage)
-            } else {
-                pp = int64(ppr)
-            }
-        }
+    if perPage != nil && *perPage >= int(minPerPage) {
+        pp = int64(*perPage)
+        if pp > maxPerPage {
+            pp = maxPerPage
+        }
     }

     return usecasex.OffsetPagination{
         Offset: (p - 1) * pp,
         Limit:  pp,
     }.Wrap()
 }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between f27d71e and 0119c17.

📒 Files selected for processing (2)
  • server/internal/usecase/interactor/item.go (3 hunks)
  • server/internal/usecase/interfaces/item.go (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • server/internal/usecase/interfaces/item.go
🔇 Additional comments (1)
server/internal/usecase/interactor/item.go (1)

1236-1236: Verify CSV generation implementation.

Let's verify the implementation of csvFromItems to ensure it handles all field types correctly.

✅ Verification successful

Let me gather more information about the CSV generation implementation.


Let me check the generateCSV implementation to complete the verification.


Based on the comprehensive analysis of the codebase, I can now provide the final verification response:

CSV generation implementation is properly handled and verified.

The implementation is robust and handles all field types correctly:

  • CSV generation is properly implemented with standard csv.Writer
  • Headers are correctly generated using BuildCSVHeaders
  • All field types are handled in toCSVValue (a conversion sketch follows this list), including:
    • Text-based fields (Text, TextArea, RichText, Markdown, Select, Tag)
    • Numeric fields (Integer, Number)
    • Boolean fields (Bool, Checkbox)
    • DateTime fields with RFC3339 formatting
    • URL fields
    • Geometry fields are properly filtered out
  • Error handling is implemented throughout the process
  • The implementation is consistent across different packages (usecase, publicapi, integration)
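
As a purely illustrative aside, a self-contained sketch of that kind of per-type conversion; this is not the project's toCSVValue, whose signature and value types differ (imports strconv and time):

// csvValue mirrors the behaviours listed above: text-like values pass through,
// numbers and booleans are formatted, and date-times use RFC3339.
func csvValue(v any) string {
	switch x := v.(type) {
	case string: // Text, TextArea, RichText, Markdown, Select, Tag, URL
		return x
	case int64: // Integer
		return strconv.FormatInt(x, 10)
	case float64: // Number
		return strconv.FormatFloat(x, 'f', -1, 64)
	case bool: // Bool, Checkbox
		return strconv.FormatBool(x)
	case time.Time: // DateTime
		return x.Format(time.RFC3339)
	default: // geometry fields are filtered out before conversion
		return ""
	}
}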
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check csvFromItems implementation and field type handling

# Find the csvFromItems implementation
ast-grep --pattern 'func csvFromItems($_, $_, $_) $_ {
  $$$
}'

# Check for field type handling in the implementation
rg -A 5 'case value\.Type'

Length of output: 27452


Script:

#!/bin/bash
# Let's check the handleCSVGeneration implementation and CSV-related code
ast-grep --pattern 'func handleCSVGeneration($_, $_, $_) {
  $$$
}'

# Also check for any CSV-related functions in the exporters package
rg -A 10 'func.*CSV' server/pkg/exporters/

Length of output: 6730


Script:

#!/bin/bash
# Check the generateCSV implementation
ast-grep --pattern 'func generateCSV($_, $_, $_) error {
  $$$
}'

# Also check for any CSV writer usage
rg -A 5 'csv\.NewWriter' server/

Length of output: 6855

server/internal/usecase/interactor/item.go (review comment, resolved)
server/internal/usecase/interactor/item.go (outdated review comment, resolved)
coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Outside diff range and nitpick comments (2)
server/internal/usecase/interactor/item.go (2)

1188-1246: Add unit tests for CSV export functionality.

As mentioned in the PR objectives, unit tests are pending for this functionality. Tests should cover:

  1. Successful CSV generation
  2. Error handling for missing schemas
  3. Pagination scenarios
  4. Point field error cases

Would you like me to help generate comprehensive unit tests for this functionality?


1248-1269: LGTM: Well-implemented pagination helper.

The function handles edge cases well and properly enforces pagination limits. Consider adding documentation to explain the pagination logic.

Add a documentation comment:

+// fromPagination converts page and perPage parameters to a pagination object.
+// If page is nil or <= 0, it defaults to 1.
+// If perPage is nil, it uses defaultPerPage (50).
+// If perPage > maxPerPage (100), it's capped at maxPerPage.
 func fromPagination(page, perPage *int) *usecasex.Pagination {
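
A small usage sketch of the documented behaviour (lo.ToPtr is the samber/lo helper already used in this file):

p := fromPagination(nil, nil)                  // offset 0, limit 50 (defaultPerPage)
p = fromPagination(lo.ToPtr(2), lo.ToPtr(200)) // offset 100, limit 100 (perPage capped at maxPerPage)
_ = p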
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 0119c17 and 583ef20.

📒 Files selected for processing (1)
  • server/internal/usecase/interactor/item.go (3 hunks)
🧰 Additional context used
📓 Learnings (1)
server/internal/usecase/interactor/item.go (1)
Learnt from: jasonkarel
PR: reearth/reearth-cms#1329
File: server/internal/usecase/interactor/item.go:1235-1239
Timestamp: 2024-12-03T05:29:34.471Z
Learning: In the Go code file `server/internal/usecase/interactor/item.go`, the `csvFromItems` function already handles closing the pipe writer, so it's unnecessary to wrap it in a goroutine and close it again in the `ItemsAsCSV` function.
🔇 Additional comments (4)
server/internal/usecase/interactor/item.go (4)

30-31: LGTM: Pagination constants are well-defined.

The pagination limits are reasonable, with a maximum of 100 items per page and a default of 50 items.


1189-1192: LGTM: Proper operator validation.

The function correctly validates the operator before proceeding.


1207-1210: ⚠️ Potential issue

Add error handling for nil schema package.

The code should validate that all required schemas are available before proceeding with CSV generation.

Apply this diff to add validation:

 s := schemaList.Schema(lo.ToPtr(model.Schema()))
 if s == nil {
-    return nil, rerror.ErrNotFound
+    return nil, fmt.Errorf("schema not found for model %s", modelID)
 }
+
+if model.Metadata() != nil {
+    if ms := schemaList.Schema(model.Metadata()); ms == nil {
+        return nil, fmt.Errorf("metadata schema not found for model %s", modelID)
+    }
+}

Likely invalid or redundant comment.


1238-1242: Consider error handling in CSV generation.

The current implementation might lead to resource leaks if an error occurs during CSV generation. While the pipe writer is closed inside csvFromItems, we should ensure proper cleanup.

Let's verify the error handling in csvFromItems:

✅ Verification successful

The CSV generation error handling is properly implemented

The implementation shows a well-designed error handling pattern:

  • CSV generation is handled asynchronously in a goroutine via handleCSVGeneration
  • The pipe writer is properly closed in all cases:
    • On error: pw.CloseWithError(err) is called, propagating the error to the reader
    • On success: pw.Close() is called
  • Errors during generation are logged: log.Errorf("failed to generate CSV: %v", err)

The current implementation ensures there are no resource leaks and errors are properly propagated through the pipe.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Check error handling in csvFromItems function
ast-grep --pattern 'func csvFromItems(pw *io.PipeWriter, items item.VersionedList, s *schema.Schema) error {
  $$$
  defer pw.Close()
  $$$
}'

Length of output: 144


Script:

#!/bin/bash
# Let's try to find the csvFromItems function implementation with a more general pattern
ast-grep --pattern 'func csvFromItems($$$) {
  $$$
}'

# Also search with ripgrep as backup
rg "func csvFromItems" -A 20

Length of output: 3286


p := fromPagination(request.Params.Page, request.Params.PerPage)
items, _, err := uc.Item.FindBySchema(ctx, sp.Schema().ID(), nil, p, op)
pr, err := uc.Item.ItemsAsCSV(ctx, request.ModelId, request.Params.Page, request.Params.PerPage, op)
Contributor

This looks good to me!
Just a quick note: we have two functions that handle exporting items as CSV. You've already completed ItemsAsCSV, so please apply the same approach to the ItemsWithProjectAsCSV function. Once that's done, kindly remove the csvFromItems function from the adapter level.

coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (2)
server/internal/usecase/interfaces/item.go (1)

105-112: Enhance method documentation with pagination details

The documentation comments should include information about the pagination parameters and their default values/constraints.

Apply this diff to improve the documentation:

-// ItemsAsCSV exports items data in content to csv file by modelID.
+// ItemsAsCSV exports items data in content to csv file by modelID.
+// page: Page number (1-based, defaults to 1)
+// perPage: Items per page (defaults to 50, max 100)
 ItemsAsCSV(context.Context, id.ModelID, *int, *int, *usecase.Operator) (*io.PipeReader, error)
 
-// ItemsAsGeoJSON converts items to Geo JSON type given a model ID
+// ItemsAsGeoJSON converts items to Geo JSON type given a model ID.
+// page: Page number (1-based, defaults to 1)
+// perPage: Items per page (defaults to 50, max 100)
 ItemsAsGeoJSON(context.Context, id.ModelID, *int, *int, *usecase.Operator) (*integrationapi.FeatureCollection, error)
server/internal/usecase/interactor/item.go (1)

1447-1468: Improve pagination helper method

The pagination helper could be improved for clarity and efficiency.

Consider these improvements:

 func fromPagination(page, perPage *int) *usecasex.Pagination {
-    p := int64(1)
-    if page != nil && *page > 0 {
-        p = int64(*page)
-    }
+    pageNum := int64(1)
+    if page != nil && *page > 0 {
+        pageNum = int64(*page)
+    }
 
-    pp := defaultPerPage
+    itemsPerPage := defaultPerPage
     if perPage != nil {
-        if ppr := *perPage; 1 <= ppr {
-            if ppr > maxPerPage {
-                pp = int64(maxPerPage)
-            } else {
-                pp = int64(ppr)
-            }
+        requestedPerPage := *perPage
+        if requestedPerPage >= 1 {
+            itemsPerPage = int64(min(requestedPerPage, maxPerPage))
         }
     }
 
     return usecasex.OffsetPagination{
-        Offset: (p - 1) * pp,
-        Limit:  pp,
+        Offset: (pageNum - 1) * itemsPerPage,
+        Limit:  itemsPerPage,
     }.Wrap()
 }
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 583ef20 and 35c1dc0.

📒 Files selected for processing (4)
  • server/internal/adapter/integration/item.go (2 hunks)
  • server/internal/usecase/interactor/item.go (3 hunks)
  • server/internal/usecase/interactor/item_export.go (1 hunks)
  • server/internal/usecase/interfaces/item.go (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • server/internal/usecase/interactor/item_export.go
  • server/internal/adapter/integration/item.go
🧰 Additional context used
📓 Learnings (1)
server/internal/usecase/interactor/item.go (1)
Learnt from: jasonkarel
PR: reearth/reearth-cms#1329
File: server/internal/usecase/interactor/item.go:1235-1239
Timestamp: 2024-12-03T05:29:34.471Z
Learning: In the Go code file `server/internal/usecase/interactor/item.go`, the `csvFromItems` function already handles closing the pipe writer, so it's unnecessary to wrap it in a goroutine and close it again in the `ItemsAsCSV` function.
🔇 Additional comments (1)
server/internal/usecase/interactor/item.go (1)

33-34: LGTM! Well-defined pagination constants

The pagination constants are clearly defined with reasonable limits.

server/internal/usecase/interactor/item.go Outdated Show resolved Hide resolved
server/internal/usecase/interactor/item.go Outdated Show resolved Hide resolved
coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Outside diff range and nitpick comments (1)
server/internal/usecase/interactor/item.go (1)

1207-1208: Remove unnecessary comments.

The "// fromPagination" comments before calling fromPagination are redundant and can be removed.

Also applies to: 1243-1244, 1282-1283, 1322-1323

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 35c1dc0 and 06101c7.

📒 Files selected for processing (1)
  • server/internal/usecase/interactor/item.go (3 hunks)
🧰 Additional context used
📓 Learnings (1)
server/internal/usecase/interactor/item.go (1)
Learnt from: jasonkarel
PR: reearth/reearth-cms#1329
File: server/internal/usecase/interactor/item.go:1235-1239
Timestamp: 2024-12-03T05:29:34.471Z
Learning: In the Go code file `server/internal/usecase/interactor/item.go`, the `csvFromItems` function already handles closing the pipe writer, so it's unnecessary to wrap it in a goroutine and close it again in the `ItemsAsCSV` function.
🔇 Additional comments (5)
server/internal/usecase/interactor/item.go (5)

33-34: LGTM! Constants for pagination limits are well-defined.

The pagination limits are reasonable with maxPerPage=100 and defaultPerPage=50.


1339-1370: LGTM! Well-structured helper method.

The buildSchemaPackage helper is well-implemented with proper error handling and comprehensive schema resolution including metadata, groups, and referenced schemas.


1372-1393: LGTM! Robust pagination parameter handling.

The fromPagination helper is well-implemented with proper boundary checks and graceful handling of nil values.


1225-1258: ⚠️ Potential issue

Add permission check for model access.

Similar to ItemsAsCSV, this method should verify if the operator has read access to the model.


1300-1337: ⚠️ Potential issue

Add permission check for project and model access.

Similar to ItemsWithProjectAsCSV, this method should verify if the operator has read access to both the project and model.

Comment on lines +1191 to +1223
// ItemsAsCSV exports items data in content to csv file by modelID.
func (i Item) ItemsAsCSV(ctx context.Context, modelID id.ModelID, page *int, perPage *int, operator *usecase.Operator) (*io.PipeReader, error) {
	if operator.AcOperator.User == nil && operator.Integration == nil {
		return nil, interfaces.ErrInvalidOperator
	}
	return Run1(ctx, operator, i.repos, Usecase().Transaction(), func(ctx context.Context) (*io.PipeReader, error) {
		model, err := i.repos.Model.FindByID(ctx, modelID)
		if err != nil {
			return nil, err
		}

		schemaPackage, err := i.buildSchemaPackage(ctx, model)
		if err != nil {
			return nil, err
		}

		// fromPagination
		paginationOffset := fromPagination(page, perPage)

		items, _, err := i.repos.Item.FindBySchema(ctx, schemaPackage.Schema().ID(), nil, nil, paginationOffset)
		if err != nil {
			return nil, err
		}

		pr, pw := io.Pipe()
		err = csvFromItems(pw, items, schemaPackage.Schema())
		if err != nil {
			return nil, err
		}

		return pr, nil
	})
}

⚠️ Potential issue

Add permission check for model access.

The method should verify if the operator has read access to the model before proceeding with the export.

Add the permission check after retrieving the model:

 		if err != nil {
 			return nil, err
 		}
+		if !operator.CanRead(model) {
+			return nil, interfaces.ErrOperationDenied
+		}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Original:

// ItemsAsCSV exports items data in content to csv file by modelID.
func (i Item) ItemsAsCSV(ctx context.Context, modelID id.ModelID, page *int, perPage *int, operator *usecase.Operator) (*io.PipeReader, error) {
	if operator.AcOperator.User == nil && operator.Integration == nil {
		return nil, interfaces.ErrInvalidOperator
	}
	return Run1(ctx, operator, i.repos, Usecase().Transaction(), func(ctx context.Context) (*io.PipeReader, error) {
		model, err := i.repos.Model.FindByID(ctx, modelID)
		if err != nil {
			return nil, err
		}
		schemaPackage, err := i.buildSchemaPackage(ctx, model)
		if err != nil {
			return nil, err
		}
		// fromPagination
		paginationOffset := fromPagination(page, perPage)
		items, _, err := i.repos.Item.FindBySchema(ctx, schemaPackage.Schema().ID(), nil, nil, paginationOffset)
		if err != nil {
			return nil, err
		}
		pr, pw := io.Pipe()
		err = csvFromItems(pw, items, schemaPackage.Schema())
		if err != nil {
			return nil, err
		}
		return pr, nil
	})
}

Suggested:

// ItemsAsCSV exports items data in content to csv file by modelID.
func (i Item) ItemsAsCSV(ctx context.Context, modelID id.ModelID, page *int, perPage *int, operator *usecase.Operator) (*io.PipeReader, error) {
	if operator.AcOperator.User == nil && operator.Integration == nil {
		return nil, interfaces.ErrInvalidOperator
	}
	return Run1(ctx, operator, i.repos, Usecase().Transaction(), func(ctx context.Context) (*io.PipeReader, error) {
		model, err := i.repos.Model.FindByID(ctx, modelID)
		if err != nil {
			return nil, err
		}
		if !operator.CanRead(model) {
			return nil, interfaces.ErrOperationDenied
		}
		schemaPackage, err := i.buildSchemaPackage(ctx, model)
		if err != nil {
			return nil, err
		}
		// fromPagination
		paginationOffset := fromPagination(page, perPage)
		items, _, err := i.repos.Item.FindBySchema(ctx, schemaPackage.Schema().ID(), nil, nil, paginationOffset)
		if err != nil {
			return nil, err
		}
		pr, pw := io.Pipe()
		err = csvFromItems(pw, items, schemaPackage.Schema())
		if err != nil {
			return nil, err
		}
		return pr, nil
	})
}

Comment on lines +1260 to +1298
// ItemsWithProjectAsCSV converts items content to CSV given by project ID or project Alias and model ID or model Key
func (i Item) ItemsWithProjectAsCSV(ctx context.Context, projectIDorAlias project.IDOrAlias, modelIDOrKey model.IDOrKey, page *int, perPage *int, operator *usecase.Operator) (*io.PipeReader, error) {
	if operator.AcOperator.User == nil && operator.Integration == nil {
		return nil, interfaces.ErrInvalidOperator
	}

	return Run1(ctx, operator, i.repos, Usecase().Transaction(), func(ctx context.Context) (*io.PipeReader, error) {
		project, err := i.repos.Project.FindByIDOrAlias(ctx, projectIDorAlias)
		if err != nil {
			return nil, err
		}

		model, err := i.repos.Model.FindByIDOrKey(ctx, project.ID(), modelIDOrKey)
		if err != nil {
			return nil, err
		}

		schemaPackage, err := i.buildSchemaPackage(ctx, model)
		if err != nil {
			return nil, err
		}

		// fromPagination
		paginationOffset := fromPagination(page, perPage)

		items, _, err := i.repos.Item.FindBySchema(ctx, schemaPackage.Schema().ID(), nil, nil, paginationOffset)
		if err != nil {
			return nil, err
		}

		pr, pw := io.Pipe()
		err = csvFromItems(pw, items, schemaPackage.Schema())
		if err != nil {
			return nil, err
		}

		return pr, nil
	})
}
coderabbitai bot commented Dec 6, 2024

⚠️ Potential issue

Add permission check for project and model access.

The method should verify if the operator has read access to both the project and model.

Add the permission checks:

 		if err != nil {
 			return nil, err
 		}
+		if !operator.CanRead(project) {
+			return nil, interfaces.ErrOperationDenied
+		}

 		model, err := i.repos.Model.FindByIDOrKey(ctx, project.ID(), modelIDOrKey)
 		if err != nil {
 			return nil, err
 		}
+		if !operator.CanRead(model) {
+			return nil, interfaces.ErrOperationDenied
+		}
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
Original:

// ItemsWithProjectAsCSV converts items content to CSV given by project ID or project Alias and model ID or model Key
func (i Item) ItemsWithProjectAsCSV(ctx context.Context, projectIDorAlias project.IDOrAlias, modelIDOrKey model.IDOrKey, page *int, perPage *int, operator *usecase.Operator) (*io.PipeReader, error) {
	if operator.AcOperator.User == nil && operator.Integration == nil {
		return nil, interfaces.ErrInvalidOperator
	}
	return Run1(ctx, operator, i.repos, Usecase().Transaction(), func(ctx context.Context) (*io.PipeReader, error) {
		project, err := i.repos.Project.FindByIDOrAlias(ctx, projectIDorAlias)
		if err != nil {
			return nil, err
		}
		model, err := i.repos.Model.FindByIDOrKey(ctx, project.ID(), modelIDOrKey)
		if err != nil {
			return nil, err
		}
		schemaPackage, err := i.buildSchemaPackage(ctx, model)
		if err != nil {
			return nil, err
		}
		// fromPagination
		paginationOffset := fromPagination(page, perPage)
		items, _, err := i.repos.Item.FindBySchema(ctx, schemaPackage.Schema().ID(), nil, nil, paginationOffset)
		if err != nil {
			return nil, err
		}
		pr, pw := io.Pipe()
		err = csvFromItems(pw, items, schemaPackage.Schema())
		if err != nil {
			return nil, err
		}
		return pr, nil
	})
}

Suggested:

// ItemsWithProjectAsCSV converts items content to CSV given by project ID or project Alias and model ID or model Key
func (i Item) ItemsWithProjectAsCSV(ctx context.Context, projectIDorAlias project.IDOrAlias, modelIDOrKey model.IDOrKey, page *int, perPage *int, operator *usecase.Operator) (*io.PipeReader, error) {
	if operator.AcOperator.User == nil && operator.Integration == nil {
		return nil, interfaces.ErrInvalidOperator
	}
	return Run1(ctx, operator, i.repos, Usecase().Transaction(), func(ctx context.Context) (*io.PipeReader, error) {
		project, err := i.repos.Project.FindByIDOrAlias(ctx, projectIDorAlias)
		if err != nil {
			return nil, err
		}
		if !operator.CanRead(project) {
			return nil, interfaces.ErrOperationDenied
		}
		model, err := i.repos.Model.FindByIDOrKey(ctx, project.ID(), modelIDOrKey)
		if err != nil {
			return nil, err
		}
		if !operator.CanRead(model) {
			return nil, interfaces.ErrOperationDenied
		}
		schemaPackage, err := i.buildSchemaPackage(ctx, model)
		if err != nil {
			return nil, err
		}
		// fromPagination
		paginationOffset := fromPagination(page, perPage)
		items, _, err := i.repos.Item.FindBySchema(ctx, schemaPackage.Schema().ID(), nil, nil, paginationOffset)
		if err != nil {
			return nil, err
		}
		pr, pw := io.Pipe()
		err = csvFromItems(pw, items, schemaPackage.Schema())
		if err != nil {
			return nil, err
		}
		return pr, nil
	})
}

Contributor Author

Hi @yk-eukarya, I wonder if this is needed?


Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!

coderabbitai bot left a comment

Actionable comments posted: 6

🧹 Outside diff range and nitpick comments (5)
server/internal/usecase/interactor/item_test.go (5)

1180-1182: Avoid Hardcoding Expected CSV Output

The expected CSV output in the test case is hardcoded, which can make the test brittle if the data changes. Consider generating the expected output dynamically based on the test data to make the test more robust.

For example:

expectedCSV := fmt.Sprintf("id,location_lat,location_lng\n%s,139.28179282584915,36.58570985749664\n", i1IDStr)
assert.Equal(t, expectedCSV, string(result))

1310-1310: Avoid Global Variables for Test Parameters

Variables like perPage1 are defined globally but are only used within the test functions. To prevent unintended side effects and improve code clarity, define these variables within the scope of the test cases where they are used.


1466-1466: Define Pagination Variables Within Test Cases

The variables page1 and perPage1 are declared globally but are specific to the test scenarios. Defining them within each test case enhances encapsulation and prevents accidental modification from other tests.


1637-1638: Use Consistent Error Handling in Tests

In the loop over test cases, ensure consistent handling of errors and assertions. This enhances test reliability and makes it easier to diagnose failures.

Consider updating the error assertions:

- assert.Equal(t, tt.wantError, err)
+ if tt.wantError != nil {
+     assert.ErrorIs(t, err, tt.wantError)
+ } else {
+     assert.NoError(t, err)
+ }

1242-1243: Remove Redundant Context Initialization

Inside the test functions, ctx := context.Background() is called even though ctx is already initialized earlier. This creates a shadowed variable and is unnecessary.

Remove the redundant line:

- ctx := context.Background()
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 06101c7 and 744091d.

📒 Files selected for processing (1)
  • server/internal/usecase/interactor/item_test.go (4 hunks)

server/internal/usecase/interactor/item_test.go (outdated review comment, resolved)
server/internal/usecase/interactor/item_test.go (outdated review comment, resolved)
server/internal/usecase/interactor/item_test.go (outdated review comment, resolved)
@jasonkarel changed the title from "feat(server): move export csv items to usecase layer" to "feat(server): move export csv items and GeoJSON to usecase layer" on Dec 12, 2024
coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Outside diff range and nitpick comments (3)
server/internal/usecase/interactor/item_test.go (3)

1089-1262: Add test case for empty items list

The test function has good coverage but is missing a test case for when the items list is empty. This would help verify the behavior when no items are found for a valid model.

Add a test case like:

{
    name: "success with empty items list",
    args: args{
        ctx:     context.Background(),
        modelID: m1.ID(),
        page:    &page1,
        perPage: &perPage1,
        op:      op,
    },
    seedsItems:  item.List{},
    seedSchemas: s1,
    seedModels:  m1,
    want:        []byte("id,location_lat,location_lng\n"),
    wantError:   nil,
},

1443-1631: Add test case for invalid project ID

The test function should include a test case for when the project ID is invalid to verify proper error handling.

Add a test case:

{
    name: "error invalid project ID",
    args: args{
        ctx:              context.Background(),
        projectIDorAlias: project.IDOrAlias("invalid-project-id"),
        modelIDOrKey:     model.IDOrKey(m1.ID().String()),
        page:             &page1,
        perPage:          &perPage1,
        op:              op,
    },
    want:      []byte{},
    wantError: rerror.ErrNotFound,
},

1733-1748: Fix misleading test case name

The test case name suggests an error case but it's actually testing a successful scenario with a geometry editor type.

Rename the test case:

-           name: "error pointFieldIsNotSupportedError",
+           name: "success with geometry editor type",
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 744091d and c0876b1.

📒 Files selected for processing (1)
  • server/internal/usecase/interactor/item_test.go (4 hunks)
🧰 Additional context used
📓 Learnings (1)
server/internal/usecase/interactor/item_test.go (2)
Learnt from: jasonkarel
PR: reearth/reearth-cms#1329
File: server/internal/usecase/interactor/item_test.go:1238-1241
Timestamp: 2024-12-10T05:28:18.173Z
Learning: In unit tests within the `server/internal/usecase/interactor/item_test.go` file of the `reearth-cms` project, it's acceptable to ignore errors returned by `io.ReadAll` when reading from `pr`.
Learnt from: jasonkarel
PR: reearth/reearth-cms#1329
File: server/internal/usecase/interactor/item_test.go:1147-1151
Timestamp: 2024-12-10T05:29:09.961Z
Learning: In the test `TestItem_ItemsAsCSV` in `server/internal/usecase/interactor/item_test.go`, negative test cases are intentionally fulfilled by not setting operator permissions, which is acceptable for testing purposes.

server/internal/usecase/interactor/item_test.go (outdated review comment, resolved)
coderabbitai bot left a comment

Actionable comments posted: 3

🧹 Outside diff range and nitpick comments (2)
server/internal/usecase/interactor/item_test.go (2)

1130-1131: Consider extracting pagination constants.

The magic numbers for pagination could be moved to named constants at the package level for better maintainability and reuse across tests.

+const (
+    defaultTestPage = 1
+    defaultTestPerPage = 10
+)
+
 func TestItem_ItemsAsCSV(t *testing.T) {
-    page1 := 1
-    perPage1 := 10
+    page1 := defaultTestPage
+    perPage1 := defaultTestPerPage

1497-1514: Improve test case readability with struct field tags.

The test case struct could be more readable with field tags that describe the purpose of each field. Also consider grouping related test cases.

type itemWithProjectTestCase struct {
    name        string    `desc:"Test case description"`
    args        args      `desc:"Input parameters"`
    setup struct {
        project *project.Project  `desc:"Project to be saved in db"`
        items   item.List        `desc:"Items to be saved in db"`
        schema  *schema.Schema   `desc:"Schema to be saved in db"`
        model   *model.Model     `desc:"Model to be saved in db"`
    }
    expect struct {
        result    []byte    `desc:"Expected CSV output"`
        err       error     `desc:"Expected error if any"`
    }
}

// Group related test cases
var (
    successCases = []itemWithProjectTestCase{
        {name: "success with geometry object",...},
        {name: "success with geometry editor",...},
    }
    errorCases = []itemWithProjectTestCase{
        {name: "error with non-geometry type",...},
        {name: "error with model not found",...},
        {name: "error with invalid operator",...},
    }
)

Also applies to: 1515-1593

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c0876b1 and ffc4052.

📒 Files selected for processing (1)
  • server/internal/usecase/interactor/item_test.go (4 hunks)
🧰 Additional context used
📓 Learnings (1)
server/internal/usecase/interactor/item_test.go (2)
Learnt from: jasonkarel
PR: reearth/reearth-cms#1329
File: server/internal/usecase/interactor/item_test.go:1238-1241
Timestamp: 2024-12-10T05:28:18.173Z
Learning: In unit tests within the `server/internal/usecase/interactor/item_test.go` file of the `reearth-cms` project, it's acceptable to ignore errors returned by `io.ReadAll` when reading from `pr`.
Learnt from: jasonkarel
PR: reearth/reearth-cms#1329
File: server/internal/usecase/interactor/item_test.go:1147-1151
Timestamp: 2024-12-10T05:29:09.961Z
Learning: In the test `TestItem_ItemsAsCSV` in `server/internal/usecase/interactor/item_test.go`, negative test cases are intentionally fulfilled by not setting operator permissions, which is acceptable for testing purposes.

Comment on lines +105 to +112
// ItemsAsCSV exports items data in content to csv file by modelID.
ItemsAsCSV(context.Context, id.ModelID, *int, *int, *usecase.Operator) (*io.PipeReader, error)
// ItemsAsGeoJSON converts items to Geo JSON type given a model ID
ItemsAsGeoJSON(context.Context, id.ModelID, *int, *int, *usecase.Operator) (*integrationapi.FeatureCollection, error)
// ItemsWithProjectAsCSV converts items content to CSV given by project ID or project Alias and model ID or model Key
ItemsWithProjectAsCSV(context.Context, project.IDOrAlias, model.IDOrKey, *int, *int, *usecase.Operator) (*io.PipeReader, error)
// ItemsWithProjectAsGeoJSON converts items content to Geo JSON given by project ID or project Alias and model ID or model Key
ItemsWithProjectAsGeoJSON(context.Context, project.IDOrAlias, model.IDOrKey, *int, *int, *usecase.Operator) (*integrationapi.FeatureCollection, error)
Contributor

Suggested change
Original:

	// ItemsAsCSV exports items data in content to csv file by modelID.
	ItemsAsCSV(context.Context, id.ModelID, *int, *int, *usecase.Operator) (*io.PipeReader, error)
	// ItemsAsGeoJSON converts items to Geo JSON type given a model ID
	ItemsAsGeoJSON(context.Context, id.ModelID, *int, *int, *usecase.Operator) (*integrationapi.FeatureCollection, error)
	// ItemsWithProjectAsCSV converts items content to CSV given by project ID or project Alias and model ID or model Key
	ItemsWithProjectAsCSV(context.Context, project.IDOrAlias, model.IDOrKey, *int, *int, *usecase.Operator) (*io.PipeReader, error)
	// ItemsWithProjectAsGeoJSON converts items content to Geo JSON given by project ID or project Alias and model ID or model Key
	ItemsWithProjectAsGeoJSON(context.Context, project.IDOrAlias, model.IDOrKey, *int, *int, *usecase.Operator) (*integrationapi.FeatureCollection, error)

Suggested:

	// ItemsAsCSV exports items data in content to csv file by modelID.
	ItemsAsCSV(context.Context, id.ModelID, *int, *int, *usecase.Operator) (*io.PipeReader, error)
	// ItemsAsGeoJSON converts items to Geo JSON type given a model ID
	ItemsAsGeoJSON(context.Context, id.ModelID, *int, *int, *usecase.Operator) (*integrationapi.FeatureCollection, error)

No need for two implementations; you can get the model ID at the adapter level and call the same implementation (a sketch follows below).
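
A sketch of that suggestion, reusing the adapter calls already shown in this PR; type names follow the adapter code quoted earlier, and error-to-response mapping is abbreviated:

func (s *Server) ItemsWithProjectAsCSV(ctx context.Context, request ItemsWithProjectAsCSVRequestObject) (ItemsWithProjectAsCSVResponseObject, error) {
	op := adapter.Operator(ctx)
	uc := adapter.Usecases(ctx)

	prj, err := uc.Project.FindByIDOrAlias(ctx, request.ProjectIdOrAlias, op)
	if err != nil {
		return ItemsWithProjectAsCSV400Response{}, err
	}
	m, err := uc.Model.FindByIDOrKey(ctx, prj.ID(), request.ModelIdOrKey, op)
	if err != nil {
		return ItemsWithProjectAsCSV400Response{}, err
	}

	// One implementation in the usecase layer serves both routes.
	pr, err := uc.Item.ItemsAsCSV(ctx, m.ID(), request.Params.Page, request.Params.PerPage, op)
	if err != nil {
		return ItemsWithProjectAsCSV400Response{}, err
	}
	return ItemsWithProjectAsCSV200TextcsvResponse{Body: pr}, nil
}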

Comment on lines +1339 to +1370
func (i Item) buildSchemaPackage(ctx context.Context, model *model.Model) (*schema.Package, error) {
	schemaIDs := id.SchemaIDList{model.Schema()}
	if model.Metadata() != nil {
		schemaIDs = append(schemaIDs, *model.Metadata())
	}
	schemaList, err := i.repos.Schema.FindByIDs(ctx, schemaIDs)
	if err != nil {
		return nil, err
	}
	s := schemaList.Schema(lo.ToPtr(model.Schema()))
	if s == nil {
		return nil, rerror.ErrNotFound
	}

	groups, err := i.repos.Group.FindByIDs(ctx, s.Groups())
	if err != nil {
		return nil, err
	}

	groupSchemaList, err := i.repos.Schema.FindByIDs(ctx, groups.SchemaIDs().Add(s.ReferencedSchemas()...))
	if err != nil {
		return nil, err
	}
	groupSchemaMap := lo.SliceToMap(groups, func(g *group.Group) (id.GroupID, *schema.Schema) {
		return g.ID(), schemaList.Schema(lo.ToPtr(g.Schema()))
	})
	referencedSchemaMap := lo.Map(s.ReferencedSchemas(), func(s schema.ID, _ int) *schema.Schema {
		return groupSchemaList.Schema(&s)
	})

	return schema.NewPackage(s, schemaList.Schema(model.Metadata()), groupSchemaMap, referencedSchemaMap), nil
}
Contributor

Suggested change
func (i Item) buildSchemaPackage(ctx context.Context, model *model.Model) (*schema.Package, error) {
	schemaIDs := id.SchemaIDList{model.Schema()}
	if model.Metadata() != nil {
		schemaIDs = append(schemaIDs, *model.Metadata())
	}
	schemaList, err := i.repos.Schema.FindByIDs(ctx, schemaIDs)
	if err != nil {
		return nil, err
	}
	s := schemaList.Schema(lo.ToPtr(model.Schema()))
	if s == nil {
		return nil, rerror.ErrNotFound
	}
	groups, err := i.repos.Group.FindByIDs(ctx, s.Groups())
	if err != nil {
		return nil, err
	}
	groupSchemaList, err := i.repos.Schema.FindByIDs(ctx, groups.SchemaIDs().Add(s.ReferencedSchemas()...))
	if err != nil {
		return nil, err
	}
	groupSchemaMap := lo.SliceToMap(groups, func(g *group.Group) (id.GroupID, *schema.Schema) {
		return g.ID(), schemaList.Schema(lo.ToPtr(g.Schema()))
	})
	referencedSchemaMap := lo.Map(s.ReferencedSchemas(), func(s schema.ID, _ int) *schema.Schema {
		return groupSchemaList.Schema(&s)
	})
	return schema.NewPackage(s, schemaList.Schema(model.Metadata()), groupSchemaMap, referencedSchemaMap), nil
}

Get the schema package from uc.Schema.GetByModel in the adapter and pass it to the usecase (a sketch follows below).
It's not good to duplicate the implementation; we can refactor it into a better shape later.
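
A hedged sketch of that direction, assuming the uc.Schema.FindByModel lookup shown earlier in this PR and a hypothetical usecase signature that accepts the schema package instead of rebuilding it internally:

// Adapter level: resolve the schema package once, then hand it to the usecase.
sp, err := uc.Schema.FindByModel(ctx, m.ID(), op)
if err != nil {
	return ItemsAsCSV400Response{}, err
}

// Hypothetical signature change: the usecase takes the package rather than calling buildSchemaPackage.
pr, err := uc.Item.ItemsAsCSV(ctx, sp, request.Params.Page, request.Params.PerPage, op)
if err != nil {
	return ItemsAsCSV400Response{}, err
}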
